AMD has launched its 7nm “Rome” series of Epyc server CPUs, with up to 64 cores, 128 threads, 225W TDPs, and boost clocks of up to 3.4GHz. While third-generation Ryzen has lit up the enthusiast boards and driven extremely strong channel sales over the past month, the server market is where AMD truly wants to play. The server market, in many respects, is where it’s at.
And while corporate launches are basically an invitation for a company to make aggressive claims in the friendliest environment on Earth, the specific claims that AMD is making are eye-opening. AMD claims that Epyc sets no fewer than 80 new world records for CPU performance as measured in a wide range of industry-standard benchmarks, with the Epyc 7742 delivering 97 percent higher performance than Intel’s Xeon Platinum 8280L in peak SPECint 2017. Additional performance claims are shown below:
Image by Anandtech
Some of these gains will be familiar to those who have followed AMD’s 7nm Ryzen unveils. Just as Ryzen will shortly reach up to 16 cores on desktop, Epyc will now field up to 64 cores. The Zen 2 design widens its floating-point units to 256 bits for full-rate AVX2, which, combined with the doubled core count, means the new Epyc CPUs offer up to 4x the floating-point performance of Epyc 1. Intel isn’t going to have an easy time countering this — Cascade Lake is already in-market for the year, and Cooper Lake will drop in early 2020. This is why Intel CEO Bob Swan started acknowledging several months ago that his company expects a more competitive AMD. The writing has been on the proverbial wall.
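The 4x figure falls out of simple peak-throughput arithmetic: doubling per-core SIMD width and doubling core count multiply together. Here is a minimal sketch; the FLOPs-per-cycle figures reflect the 128-bit-to-256-bit FMA change, but the clock speeds are illustrative placeholders, not AMD-published comparison points:

```python
# Illustrative peak double-precision throughput estimate.
# Assumption: Zen 1 issues 8 DP FLOPs/cycle (128-bit FMA paths), Zen 2
# issues 16 DP FLOPs/cycle (256-bit FMA paths). Clocks are placeholders.

def peak_gflops(cores, ghz, flops_per_cycle):
    """Theoretical peak = cores x clock (GHz) x FLOPs issued per cycle."""
    return cores * ghz * flops_per_cycle

epyc1 = peak_gflops(cores=32, ghz=2.2, flops_per_cycle=8)    # Epyc 7601-class
epyc2 = peak_gflops(cores=64, ghz=2.25, flops_per_cycle=16)  # Epyc 7742-class

print(f"Epyc 1: {epyc1:.0f} GFLOPS, Epyc 2: {epyc2:.0f} GFLOPS")
print(f"Ratio: {epyc2 / epyc1:.1f}x")  # ~4.1x: 2x cores times 2x SIMD width
```

The ratio barely depends on the exact clocks; the dominant terms are the doubled core count and doubled SIMD width.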
Single-threaded workloads see an average IPC improvement of 1.15x at the same frequency, while the uplift in 32-core / 64-thread workloads is even higher, at 1.23x. The maximum gain AMD saw from IPC and efficiency improvements on a 32-core CPU was up to 1.4x, though this should be considered an unusual result. As previously reported, Epyc includes 128 PCIe lanes, PCIe 4.0 support, and can load up to 8TB of DDR4-3200.
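Those IPC multipliers compound with any clock gains, since throughput scales roughly as IPC times frequency. A quick sketch, using AMD's quoted uplift figures but hypothetical clock speeds of my own choosing:

```python
# Relative performance ~= IPC uplift x frequency uplift.
# The 1.15x and 1.23x IPC figures are AMD's; the clocks are hypothetical.

def relative_perf(ipc_uplift, old_ghz, new_ghz):
    """Combined speedup from an IPC gain plus a clock-speed change."""
    return ipc_uplift * (new_ghz / old_ghz)

# At the same clock, the gain is the IPC uplift alone:
print(relative_perf(1.15, 2.2, 2.2))  # 1.15

# A 32-core workload (1.23x IPC) with a hypothetical 2.2 -> 2.5GHz bump:
print(round(relative_perf(1.23, 2.2, 2.5), 2))  # 1.4
```

This is why same-frequency IPC numbers understate generational gains whenever the new part also clocks higher.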
The company is trying to make a lot of hay over its 2S deployment capabilities, claiming that a 2S AMD Epyc configuration offers a 44 percent lower TCO (total cost of ownership), allows for a 45 percent reduction in total servers (thanks to higher core counts per socket) and offers 83 percent more performance (thanks to a combination of higher core counts and higher per-core performance). AMD is arguing that its single-socket configuration offers I/O and overall performance equivalent to a dual-socket Xeon. Depending on the application and scenario, it may well be right. Intel’s dual-socket systems top out at 56 cores; AMD can deliver 64 cores in a single socket.
This approach has historical merit. Back in the early 2000s, AMD’s Opteron was a strong server competitor for Xeon from the beginning, but it was particularly strong in markets that used multi-socket systems. AMD’s “glueless” server architecture allowed it to attach CPUs directly to each other using HyperTransport, while Intel CPUs were connected to — and severely bottlenecked by — a common, shared front-side bus. Single-socket servers were already quite popular in the early 2000s, and while the 2S and 4S markets were smaller, they were extremely lucrative. AMD eventually took approximately 20 percent of the server market from Intel in 2005 – 2006 before its decline began, but its earliest and largest successes came in the multi-socket servers where its products had the greatest advantage over Intel’s in terms of relative feature sets.
The situation today is not identical, but it is analogous. Again, we see AMD putting in particular effort to make certain its top-end parts are difficult or impossible for Intel to match. A 2S AMD Rome deployment packs up to 128 cores. The Cascade Lake-AP servers that Intel sells are BGA-only and by all accounts, exceptionally expensive. Unless you use Cascade Lake-AP, you’re limited to 28 cores in an Intel socket. AMD can sell you 64.
Anandtech has a detailed review of the Epyc 7nm launch hardware, and the results fully live up to expectations. Even in AVX-512 applications intended for the HPC market, dual Epyc 7742 is capable of matching dual Intel Xeon Platinum 8280 CPUs.
Image by Anandtech.
This is one of the most Intel-friendly benchmark runs you could possibly arrange. With AVX-512 enabled on an optimized Intel rig, the 7742 merely matches Intel at a fraction of the price. Without those AVX-512 optimizations, AMD is 1.43x faster. Overall, AMD is offering 50-100 percent more performance than Intel in the server market at a roughly 40 percent lower price tag. According to Anandtech, there is simply “no contest.”
Intel can cut its prices, to be sure. Beyond that, it has limited maneuverability. Ice Lake servers will not arrive for another year. Pricing on these cores is simply amazing, with a top-end Epyc 7742 selling for just $6950, or roughly $108 per core. An Intel Xeon Platinum 8280 has a list price of over $10,000 for a 28-core chip, just to put that in perspective. If you want a 32-core part, the Epyc 7502 packs 32 cores, 64 threads, higher IPC, and an additional 300MHz of frequency (2.5GHz base, versus 2.2GHz) for $2600 as opposed to the old price of $4200 for the 7601. AMD doesn’t segment its products the way Intel does, which means you get the full benefits of buying an Epyc part in terms of PCIe lanes and additional features. AMD also supports up to 4TB of RAM per socket. Intel tops out at 2TB per socket, and slaps a price premium on that level of RAM support.
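The per-core math is easy to verify. A small sketch using the figures above; $10,000 is used as a stand-in for the Xeon 8280's "over $10,000" list price, and the final lines combine the article's 50-100 percent performance range with the roughly 40 percent price gap:

```python
# Price-per-core comparison from the list prices quoted in the article.
epyc_7742 = {"price": 6950, "cores": 64}
xeon_8280 = {"price": 10000, "cores": 28}  # approximation: "over $10,000"

for name, chip in [("Epyc 7742", epyc_7742), ("Xeon 8280", xeon_8280)]:
    print(f"{name}: ${chip['price'] / chip['cores']:.0f} per core")
# Epyc 7742: $109 per core; Xeon 8280: $357 per core

# Performance per dollar: 1.5x-2.0x the performance at 60% of the price.
for perf in (1.5, 2.0):
    print(f"{perf / 0.6:.2f}x performance per dollar")
# 2.50x and 3.33x
```

However you slice the assumptions, the perf-per-dollar gap is a multiple, not a margin, which is why simple price cuts only go so far.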
In short? Epic Epyc win. Analysts are predicting the company’s market share in servers could double by mid-2020. Dell, Lenovo, and HPE have servers in the works. Epyc 1 was a test shot and a pipecleaner. Epyc 2, like Rome, wasn’t built in a day — but once constructed, it dominated the geopolitical landscape of the ancient world for centuries. Intel had best hope its rival’s new CPU doesn’t live up to the reputation of its namesake.