Intel launches Sapphire Rapids and HPC-optimised Max series
News | by Leanne Bevan | 25 January 2023
Intel marked one of the most important product launches in company history with the unveiling of its highly anticipated CPU and GPU architectures:
- 4th Gen Intel® Xeon® Scalable processors (code-named Sapphire Rapids)
- Intel® Xeon® CPU Max Series (code-named Sapphire Rapids HBM)
- Intel® Data Center GPU Max Series (code-named Ponte Vecchio)
These feature-rich product families bring scalable, balanced architectures that integrate CPU and GPU with the oneAPI open software ecosystem, delivering a leap in data center performance, efficiency, security, and new capabilities for AI, the cloud, the network, and exascale.
Intel Xeon and oneAPI tools
4th Gen Intel Xeon & Intel Max Series (CPU) processors
These provide a range of features for managing power and performance at high efficiency, including these instruction sets and built-in accelerators: Intel Advanced Matrix Extensions, Intel QuickAssist Technology, Intel Data Streaming Accelerator, and Intel In-Memory Analytics Accelerator.
- Activate Intel AMX support for int8 and bfloat16 data types using oneAPI performance libraries such as oneDNN, oneDAL, and oneCCL (see the sketch after this list).
- Drive orders-of-magnitude performance gains for training and inference in the TensorFlow and PyTorch AI frameworks, which are powered by oneAPI and already optimised to enable Intel AMX.
- Deliver fast HPC applications that scale with techniques in vectorisation, multithreading, multi-node parallelisation, and memory optimisation using the Intel oneAPI Base Toolkit and Intel oneAPI HPC Toolkit.
- Deliver high-fidelity applications for scientific research, cosmology, motion pictures, and more that leverage all of the system memory space for even the largest data sets using the Intel oneAPI Rendering Toolkit.
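To make the first bullet in the list above concrete, here is a minimal, hedged sketch of how AMX can be engaged indirectly through a oneAPI library rather than hand-written intrinsics: a bfloat16 matrix multiply expressed with the oneDNN v3 C++ API. On a 4th Gen Xeon with AMX, oneDNN's CPU engine can dispatch this primitive to AMX-accelerated kernels; on older hardware it falls back to AVX-512 or AVX2. The matrix sizes and build command are illustrative assumptions, not Intel-published guidance.

```cpp
// Hedged sketch: bf16 matmul via the oneDNN v3 C++ API.
// Assumed build: g++ -O2 matmul_bf16.cpp -ldnnl (with the oneAPI Base Toolkit installed).
#include <oneapi/dnnl/dnnl.hpp>

int main() {
    using namespace dnnl;
    engine eng(engine::kind::cpu, 0);   // CPU engine; picks the best ISA at runtime
    stream strm(eng);

    const memory::dim M = 256, K = 256, N = 256;   // illustrative sizes

    // bf16 inputs with f32 output - the combination AMX is designed for.
    auto a_md = memory::desc({M, K}, memory::data_type::bf16, memory::format_tag::ab);
    auto b_md = memory::desc({K, N}, memory::data_type::bf16, memory::format_tag::ab);
    auto c_md = memory::desc({M, N}, memory::data_type::f32,  memory::format_tag::ab);

    memory a_mem(a_md, eng), b_mem(b_md, eng), c_mem(c_md, eng);

    // Create and run the matmul primitive; oneDNN selects an AMX kernel
    // when the CPU reports amx-bf16 support, otherwise a vector fallback.
    auto pd   = matmul::primitive_desc(eng, a_md, b_md, c_md);
    auto prim = matmul(pd);
    prim.execute(strm, {{DNNL_ARG_SRC, a_mem},
                        {DNNL_ARG_WEIGHTS, b_mem},
                        {DNNL_ARG_DST, c_mem}});
    strm.wait();
    return 0;
}
```

TensorFlow and PyTorch reach the same kernels through their oneDNN integration, which is why the frameworks in the second bullet can pick up Intel AMX without application code changes beyond using bfloat16 or int8 models.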
Intel Data Center GPU Max Series
The Intel Data Center GPU Max Series is designed for breakthrough performance in data-intensive computing models used in AI and HPC, such as physics, financial services, and life sciences. It is Intel's highest-performing, highest-density discrete GPU, with more than 100 billion transistors and up to 128 Xe cores.
- Activate the hardware’s innovative features—Intel Xe Matrix Extensions, vector engine, Intel Xe Link, data type flexibility, and more—and realise maximum performance using oneAPI and AI Tools.
- Migrate CUDA* code to SYCL* for easy portability across multiple architectures—including the new GPU as well as those from other vendors—with code migration tools to simplify the process.
Coupled with the 2023 Intel oneAPI and AI tools, the new hardware lets developers create single-source, portable code that fully activates its advanced capabilities and built-in acceleration features.
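As a hedged illustration of what single-source, portable code looks like in practice, the sketch below is a plain SYCL vector add that compiles once with the oneAPI DPC++ compiler and runs on whichever device the runtime selects: a Max Series GPU when present, otherwise the host CPU. Code migrated from CUDA with the compatibility tooling ends up in essentially this form. The file name and problem size are illustrative assumptions.

```cpp
// Hedged sketch: single-source SYCL, compiled with e.g. `icpx -fsycl vadd.cpp`.
#include <sycl/sycl.hpp>
#include <vector>
#include <iostream>

int main() {
    constexpr size_t n = 1 << 20;                     // illustrative problem size
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n);

    // The default selector picks the "best" available device: a Max Series GPU
    // when present, otherwise the CPU - the same source runs on both.
    sycl::queue q{sycl::default_selector_v};
    std::cout << "Running on: "
              << q.get_device().get_info<sycl::info::device::name>() << "\n";

    {   // Buffers hand ownership of the host data to the runtime for this scope.
        sycl::buffer<float> ba(a.data(), sycl::range<1>(n));
        sycl::buffer<float> bb(b.data(), sycl::range<1>(n));
        sycl::buffer<float> bc(c.data(), sycl::range<1>(n));

        q.submit([&](sycl::handler &h) {
            sycl::accessor ra(ba, h, sycl::read_only);
            sycl::accessor rb(bb, h, sycl::read_only);
            sycl::accessor wc(bc, h, sycl::write_only, sycl::no_init);
            h.parallel_for(sycl::range<1>(n), [=](sycl::id<1> i) {
                wc[i] = ra[i] + rb[i];                // the kernel itself
            });
        });
    }   // buffer destructors copy results back into c

    std::cout << "c[0] = " << c[0] << "\n";           // expect 3
    return 0;
}
```

Because the kernel is standard SYCL, the same source can also target GPUs from other vendors through the appropriate back-end plug-ins, which is the portability point made in the bullet above.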
A new service called Intel On Demand (formerly referred to as software-defined silicon, SDSi) provides customers with the option to have some of these accelerators turned on or upgraded post-purchase.
Performance leap in inference and training
Chart: oneAPI tools benchmark test results for MLPerf* DeepCAM, a workload commonly used in HPC data centers for the detection of hurricanes and atmospheric rivers in climate data.
Speeding Exascale material discovery
Chart: oneAPI tools benchmark test results for the Liquid Crystal workload in LAMMPS, a popular molecular dynamics code for life science and materials research.
2023 release
The latest oneAPI and AI 2023 tools continue to empower developers with performance and productivity, delivering optimised support for Intel’s upcoming portfolio of CPU and GPU architectures and advanced capabilities.
Learn more about what's new in the Intel Xeon and Max Series releases
Get a feel for the high-level benefits of this release in this short overview video.
You can also view a recording of the keynote from the Accelerate with Xeon launch event.
Below, we have put together more of the release highlights announced by Intel, as also covered in Tiffany Trader's HPCwire article.
Accelerated performance
The new processor is manufactured on the Intel 7 node (formerly known as 10nm) and, according to Intel, offers a 1.53x average performance gain over the prior generation and a 2.9x average performance-per-watt improvement for targeted workloads using the new built-in accelerators.
Tiffany writes that the 56-core 8480+ top-of-bin two-socket (non-HBM) part, with 40% more cores than its Ice Lake counterpart, achieved gen-over-gen performance uplifts across a number of benchmarks: a 1.5x improvement on Stream Triad, 1.4x on HPL and 1.6x on HPCG. Intel's testing of more than a dozen real-world applications (including WRF, Black-Scholes, Monte Carlo and OpenFOAM) showed similar speedups, with the largest gain, 2.6x, coming on the CosmoFlow physics workload.
Tiffany goes on to add that the Max series CPU is the first x86 processor with integrated High Bandwidth Memory. It offers a 3.7x gain in performance for memory-bound workloads, according to Intel, and requires 68 percent less energy than “deployed competitive systems.”
Tiffany highlights that the Max Series “Ponte Vecchio” GPU packs over 100 billion transistors into a 47-tile package with up to 128 Xe HPC cores, and brings further performance improvements of its own. Find out more in Tiffany's HPCwire article.
The SKUs
The Sapphire Rapids family includes 52 SKUs grouped across 10 segments, inclusive of the Max Series:
- 11 optimised for 2-socket performance (8 to 56 cores, 150-350 watts)
- 7 for 2-socket mainline performance (12 to 36 cores, 150-300 watts)
- 10 targeting four- and eight-socket systems (8 to 60 cores, 195-350 watts)
- 3 single-socket optimised parts (8 to 32 cores, 125-250 watts)
There are also SKUs optimised for cloud, networking, storage, media and other workloads.
Security
In addition to performance improvements, the 4th Gen Intel Xeon Scalable processors have advanced security technologies to help protect data in an ever-changing landscape of threats while unlocking new opportunities for business insights.
Extensions
Watch the video below for an introduction to a new feature of the Intel Xeon CPU Max Series and 4th Gen Intel Xeon Scalable processors: Intel® Advanced Matrix Extensions (Intel AMX).
This new feature expands the use of CPUs for artificial intelligence workloads by adding hardware in the form of dedicated two-dimensional tile registers (TILES) and a set of tile matrix multiply (TMUL) instructions that perform matrix multiplication efficiently on those tiles.
Intel AMX supports INT8 and BF16 data types, which speed up deep learning training and inference, while AVX-512 instructions continue to support the FP32 and FP64 data types used in classical machine learning workloads.
In this way, Intel Xeon Scalable processors will help you maximise your Xeon investment by doing more with your CPU.
It takes minimal effort to benefit from this new feature: you can simply use toolkits and frameworks that have already been optimised to take advantage of Intel AMX.
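For readers who want to see what those optimised libraries are driving under the hood, here is a minimal, hedged sketch of the tile/TMUL model using the AMX intrinsics from immintrin.h. It assumes a Linux 5.16+ kernel (for the AMX state-permission syscall), a compiler accepting -mamx-tile and -mamx-bf16, and illustrative 16x32 bf16 tiles; in practice most developers will rely on oneDNN or the AI frameworks rather than writing this by hand.

```cpp
// Hedged sketch of one AMX tile multiply: C(16x16 f32) += A(16x32 bf16) * B(32x16 bf16).
// Assumed build: g++ -O2 -mamx-tile -mamx-bf16 amx_sketch.cpp
#include <immintrin.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <cstdint>
#include <cstdio>

// 64-byte tile configuration block defined by the AMX architecture.
struct TileConfig {
    uint8_t  palette_id;
    uint8_t  start_row;
    uint8_t  reserved[14];
    uint16_t colsb[16];   // bytes per row of each tile register
    uint8_t  rows[16];    // rows of each tile register
};

int main() {
    // Linux requires a one-off request for permission to use AMX tile state.
    constexpr int ARCH_REQ_XCOMP_PERM = 0x1023;
    constexpr int XFEATURE_XTILEDATA  = 18;
    if (syscall(SYS_arch_prctl, ARCH_REQ_XCOMP_PERM, XFEATURE_XTILEDATA) != 0) {
        std::puts("AMX tile data not available on this system");
        return 1;
    }

    // Configure tile 0 as the f32 accumulator, tiles 1 and 2 as bf16 inputs.
    TileConfig cfg{};
    cfg.palette_id = 1;
    cfg.rows[0] = 16; cfg.colsb[0] = 16 * sizeof(float);     // C: 16x16 f32
    cfg.rows[1] = 16; cfg.colsb[1] = 32 * sizeof(uint16_t);  // A: 16x32 bf16
    cfg.rows[2] = 16; cfg.colsb[2] = 32 * sizeof(uint16_t);  // B: 16 rows of 16 bf16 pairs
    _tile_loadconfig(&cfg);

    alignas(64) uint16_t a[16 * 32] = {};  // bf16 payloads (zeroed for the sketch)
    alignas(64) uint16_t b[16 * 32] = {};  // B must use the pair-interleaved (VNNI) layout
    alignas(64) float    c[16 * 16] = {};

    _tile_loadd(1, a, 64);        // load A, 64 bytes per row
    _tile_loadd(2, b, 64);        // load B
    _tile_zero(0);                // clear the accumulator tile
    _tile_dpbf16ps(0, 1, 2);      // one TMUL instruction: C += A * B
    _tile_stored(0, c, 64);       // write the 16x16 f32 result back to memory
    _tile_release();              // release tile state

    std::printf("c[0][0] = %f\n", c[0]);
    return 0;
}
```

With real data, A and B would hold bfloat16-encoded values (the top 16 bits of the corresponding float32), and larger matrices would be processed tile by tile; the optimised libraries handle that blocking for you.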
How companies are using Intel Xeon processors
CERN researchers achieved faster inferencing using Intel DL Boost and oneAPI (Intel AI Analytics Toolkit) with Intel Xeon Scalable processors.
Nasdaq is leveraging acceleration using the Intel Xeon Scalable platform to speed up computation for its high-performance advanced homomorphic encryption applications.
BeeKeeperAI and Intel’s privacy-preserving data-collaboration methods are accelerating healthcare innovation with AI and confidential computing.
Intel and Grey Matter
Grey Matter is an Intel Software Elite Reseller. Contact us about Intel oneAPI Toolkit licences and Priority Support. Fill out the form below.
Contact Grey Matter
If you have any questions or want some extra information, complete the form below and one of the team will be in touch ASAP. If you have a specific use case, please let us know and we'll help you find the right solution faster.