AMD Ryzen 7nm CPUs May Not Hit Maximum Boost Frequency on All Cores

This site may earn affiliate commissions from the links on this page. Terms of use.

AMD’s third-generation Ryzen processors have been a massive hit for the company by all reports, with excellent performance relative to Intel’s Core CPUs. There have, however, been a few questions around yield, overclocking, and boost frequencies. CPU overclocks on Ryzen are notably low, and some enthusiasts have noticed a limited number of cores on their CPUs hit the targeted boost frequencies.

Tom’s Hardware has done a significant deep-dive into this issue and came away with a number of key findings. In the past, AMD CPUs were capable of hitting their top-rated boost frequencies on any CPU core. Intel chips are designed similarly. With Ryzen 3000, apparently only one core needs to be capable of hitting its maximum or near-maximum boost frequency. The scheduler updates baked into Windows 10 were said to speed power-state transitions (which they do), but they also assign workloads specifically to the fastest cores capable of hitting a given clock.

These findings may explain why all-core overclocking headroom on these new Ryzen processors is so low. On the Ryzen 5 3600X, only one CPU core proved capable of hitting 4.35GHz, for example, with other cores on the same chip boosting 75-100MHz lower. AMD has not released exact specs for what frequencies its cores need to hit to satisfy its own internal metrics for launch, which means we don’t really “know” which frequencies these CPU cores will operate at. This is definitely a change from previous parts, where all cores could be more-or-less assumed capable of hitting the same boost frequencies, and it may have implications for overclockers. But it doesn’t really change my opinion on AMD’s 7nm Ryzen CPUs. If anything, I suspect it’s a harbinger of where the industry is headed in the future.
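The “favored core” idea can be sketched with standard Linux interfaces, where cpufreq’s sysfs files expose each core’s rated maximum frequency and a process can pin itself to the fastest one. This is purely an illustration of the concept, not how the Windows scheduler or AMD’s firmware actually implements it; the sysfs paths are Linux-specific and the helper function names are my own.

```python
import glob
import os

def read_max_freqs():
    """Map core id -> advertised maximum frequency (kHz) from Linux cpufreq sysfs."""
    freqs = {}
    for path in glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cpufreq/cpuinfo_max_freq"):
        core = int(path.split("/")[-3].lstrip("cpu"))  # ".../cpu7/..." -> 7
        with open(path) as f:
            freqs[core] = int(f.read())
    return freqs

def favored_core(freqs):
    """Return the id of the core binned for the highest frequency."""
    return max(freqs, key=freqs.get)

def pin_to_favored_core(freqs):
    """Restrict the current process to the favored core (Linux-only API)."""
    os.sched_setaffinity(0, {favored_core(freqs)})
```

On a chip like the one THG measured, `favored_core` would single out the one core binned for the top boost clock while its siblings report values 75-100MHz lower — which is essentially the decision the updated scheduler makes for every thread.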

Building Faster Silicon Today Means Working Smarter, Not Harder

One of the topics I’ve covered repeatedly at ExtremeTech is the difficulty of scaling either IPC (instructions-per-clock, a measure of CPU efficiency) or clock speed as process technology continues to shrink. From December 2018 – June 2019, I wrote a number of articles pushing back against various AMD fans who insisted the company would use 7nm to make huge clock leaps above Intel. When we met AMD at E3 2019, company engineers told us point-blank that they expected no clock improvements at 7nm whatsoever initially, and were very pleased to be able to improve clocks modestly in the final design.

One of the major difficulties semiconductor foundries are dealing with at 7nm and below is increased variability. More variation between parts means a wider “spread” in which cores are capable of running at specific frequency and voltage settings. AMD adopted Adaptive Voltage and Frequency Scaling (AVFS) back with Carrizo in part because AVFS can compensate for process variation by more precisely matching internal voltages to the specific requirements of each processor. Working with Microsoft to ensure Windows runs workloads on the highest-clocked CPU core isn’t just a good idea; it’s going to be a necessary method of extracting maximum performance in the future.
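As a toy illustration of the AVFS idea (the voltage/frequency points and margins below are invented, not AMD’s real fused tables): instead of driving every core at the worst-case voltage a given frequency bin requires, each core subtracts the slack its own on-die sensors have measured. Because dynamic CMOS power scales roughly with the square of voltage, even tens of millivolts of reclaimed margin matter.

```python
# Hypothetical worst-case voltage (V) required at each frequency (Hz).
# Real parts fuse per-chip calibration tables; these numbers are illustrative only.
WORST_CASE_VF = {3.6e9: 1.00, 4.0e9: 1.15, 4.4e9: 1.30}

def avfs_voltage(freq, core_margin_v):
    """Per-core operating voltage: the worst-case bin minus this core's measured slack."""
    return WORST_CASE_VF[freq] - core_margin_v

def dynamic_power_ratio(v_new, v_old):
    """Dynamic power at a fixed frequency scales roughly with V^2."""
    return (v_new / v_old) ** 2

# A core with 50 mV of measured margin at 4.0GHz runs at 1.10 V instead of 1.15 V,
# cutting its dynamic power by roughly 8.5 percent.
v = avfs_voltage(4.0e9, 0.05)
saving = 1 - dynamic_power_ratio(v, WORST_CASE_VF[4.0e9])
```

Multiply that per-core saving across an eight-core die and it’s easy to see why controlling for variation in software and firmware beats simply binning every chip to its weakest core.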

Intel’s decision to introduce Turbo Boost 2.0 with Sandy Bridge back in 2011 was one of the smartest moves the company ever made. Intel’s engineers accurately forecast that it was going to become increasingly difficult to guarantee maximum clocks under all circumstances. There’s no arguing that what AMD is doing here represents a fundamental shift from the company’s approach in years past, but it’s one I strongly believe we’re going to see more companies embracing in the future. Higher silicon variability is going to demand a response from software. The entire reason the industry has shifted towards chiplets is that building large monolithic dies on 7nm is seen as a fool’s errand, given the way cost scales with die size, as the slide below shows.

Why move to AVFS? To decrease variability. Why move to chiplets? To cut manufacturing costs and improve yields overall. Why change Windows scheduling to be aware of per-core boost frequencies? To ensure end-users receive the full measure of performance they pay for. While it’s true Intel CPUs may be able to hit boost frequencies on any core, that doesn’t mean this state of affairs was objectively better for the end-user. Windows’ typical core-shuffling is not some unalloyed good, a fact Paul Alcorn notes in his article. “Typically we would see more interspersed frequency jumps among cores,” Alcorn writes, “largely due to the Windows scheduler’s irritating and seemingly irrational tendency to allocate threads into different cores on a whim.” Meanwhile, we know the boost frequency Intel CPUs will practically hold still depends directly on how many CPU cores are being loaded. The fact that all CPU cores can reach higher clocks does not necessarily benefit the end-user in any way unless said user is overclocking — and statistically, most computer users don’t.

But because it’s getting harder to eke out frequency boosts and performance improvements, manufacturers are investing in technologies that tap the reservoir of performance in any given CPU solely for their own use. This is why high-end overclocking is slowly dying and has been for at least the past seven years. AMD and Intel are getting better and better at making limited frequency headroom in their products available to end-users without overclocking because overclocking these CPUs in the conventional fashion blows their power curve out so severely. It wouldn’t surprise me to discover AMD went with this method of clocking because it improved performance more at lower power compared with launching lower-clocked chips with a more conventional all-core boost arrangement.

The old rules of process node transitions and silicon designs have changed. That’s the bottom line. I’m confident we’ll see Intel deploying advanced tactics of its own to deal with these concerns in the future because there is zero evidence to suggest these issues are unique to AMD or TSMC. AMD’s adoption of AVFS, the rising use of chiplets across the industry, the lower expected clocks at 7nm that were turned into a small gain thanks to clever engineering — all of these issues point in the same direction. Companies will undoubtedly develop their own particular solutions, but everyone is grappling with the same fundamental set of problems.

Good Communication is Key

AMD, to its credit, did tell users they needed to be running the latest chipset driver and the Windows 10 version 1903 update to take advantage of the new scheduler. Implied in that guidance was that not doing so would prevent you from seeing the full impact of third-generation Ryzen’s improved performance. I do agree the company should have disclosed this new binning strategy to the technical press at E3, so we could have detailed it during the actual review.

But does this change my overall evaluation of third-generation Ryzen? No. Not in any fashion. The work THG has done to explore this issue is quite thorough, but based on the reading I’ve done on the evolution of process technology in modern manufacturing, I come down firmly on the side of this being a good thing. It’s the extension of the same trend that led ARM to invent big.LITTLE — namely, the idea that the OS needs to be more tightly coupled to the underlying hardware, with a greater awareness of which CPU cores should be used for which workloads in order to maximize performance and minimize power consumption.

According to AMD, roughly 25 percent of the performance improvements of the past decade have come from better compilers and improved power management. That percentage will likely be even larger 10 years from now. Power consumption at both idle and load is now the largest enemy of improved silicon performance, and variability in silicon process is a major cause of power consumption. Improving performance in the future is going to rely on different tools than the ones we’ve used for the past few decades, and one of the likely consequences of that push is the end of overclocking. Manufacturers can’t afford to leave 10, 20, 30 percent performance margins on the table any longer. Those margins represent a significant percentage of the total improvements they can offer.

Do these findings have implications for the currently limited availability of the Ryzen 9 3900X? We don’t know. Certainly, it’s possible the two are connected and that AMD is having trouble getting yield on the chip. Ultimately, I stand by what I said in our article on AMD CPU availability earlier today — we’ll give the company a little more time to get product into market and revisit the topic in a few more weeks. But the CPU’s performance is excellent. Its power consumption, particularly if paired with an X470 motherboard, is excellent. We’re still working on future Ryzen articles and have been working with these CPUs for several weeks. The performance and overall power characteristics are fundamentally strong, and while the THG findings are quite interesting for what they say about AMD’s overall strategy going forward and what I believe is the general increase in variability in semiconductors as a whole, I view them as broadly confirming the direction the industry is moving in. Dealing with intrinsically higher silicon variability will be one of the major challenges of the 2020s.

I hesitate to bring Intel into this conversation at all, because we haven’t even seen the company’s latest iteration of its 10nm process yet, but it’s surely no accident the company’s upcoming mobile processors have sharply reduced maximum Turbo Boosts (4.1GHz for Ice Lake, compared to 4.8GHz for 14nm Whiskey Lake). Some of that may be explained by the wider graphics core that’s built into Gen 11, but Intel forecast from the beginning that 14nm++ would be a better node for high-frequency chips than its initial 10nm process. That doesn’t mean Intel has adopted AMD’s new clocking method, but it does show the company is grappling with some of these same issues around frequency, variation, and power consumption, and working to find its own ideal balance.

The challenges are getting tougher. There are no more easy wins. The interplay between software and hardware is going to change in the future because the alternative — simply giving up and going home — isn’t a tenable one. That may have trickle-down effects that impact other aspects of computing, including overclockers and enthusiasts. But it doesn’t change the fact, in this reviewer’s opinion, that the Ryzen 7 3000 family are an excellent set of CPUs.


Apple Could Switch to ARM, But Replacing Xeon Is No Simple Endeavor


The question of when Apple will switch to building its own custom ARM CPU cores for its software ecosystem rather than using Intel and x86 comes up on a regular basis. On ET, we first covered the topic in 2011, and I’ve hit it several times in the intervening years. My answer has typically been some flavor of “theoretically yes, but practically (and in terms of the near future), no.”

A recent AppleInsider article does a good job of rounding up the reasons why Apple really might be taking this step soon. We’ve previously heard rumors that the company could launch such a product in 2020, and while rumors are not the same thing as a definite launch date, the piece is solid. It makes a reasonable case for why Apple may indeed take this step and references various real-world events, including Intel’s difficulties moving past 14nm, Apple’s design efforts around GPUs and CPUs, the increasing complexity and capability of its SoCs, and the fact that Apple has built its own secondary chips, like the T2 controller.

All of these points are true, and it’s why I think the 2020 rumor deserves to be taken more seriously than the dates and ideas that we used to hear. But there is still a major piece of this puzzle that doesn’t get talked about often enough. Apple can introduce an ARM core running full macOS, but if it wants to replace x86 in its highest-end iMac Pro and Mac Pro products, it’s going to have to take on some significant design challenges that it hasn’t faced before.

Intel’s Skylake mesh interconnect. This is anything but easy to build and design.

Apple has built CPUs, yes. But it’s never tried to build, say, a 28-32-core ARM processor in a multi-socket system. To the best of my knowledge, Apple has never built a server-class chipset or designed a CPU socket for its own product families. During E3, I attended an AMD session on the evolution of its AM4 socket, and how carefully AMD had to work in order to design a 7nm product with chiplets to fit into a socket that initially deployed four identical CPU cores in a 28nm process node. Even if Apple intends to create a platform without upgradable CPUs, it will need to design its own motherboards. The socket design decisions that it makes will impact how quickly it can iterate the platform and how much work has to be done at a later time. Achievable? Absolutely. But not something one does overnight.

The routing on an AMD Ryzen 3000 PCB. That’s the connection between one chiplet and its I/O die. This isn’t easy to design, either.

Using chiplets makes some aspects of CPU design easier, especially on leading-edge nodes, but it doesn’t simplify everything. Chiplets require interconnects, like AMD’s Infinity Fabric. Apple would need to design its own solution (there are no formal chiplet interconnect standards yet). There’s a lot of custom IP work to be done here if Apple wants to bring a part to market to replace what Intel offers in the Mac Pro.

One simple solution is for Apple to launch new ARM chips in laptops but keep desktop systems on Intel for the time being. In theory, this works fine, provided the ecosystem is ready for it and Apple can deliver appropriate binaries for applications. Software application support and user expectations could be tricky to manage here, but it’s doable. The problems for Apple, in this case, are making sure that its consumers understand any compatibility issues that might exist and that the new ARM-based products are clearly differentiated from the old x86 ones.

Is There a Reason for Apple Not to Build Its Own Mac CPUs?

There is, in fact, a reason for Apple not to build its own CPU cores for Mac. There is a non-trivial amount of work that must be done to launch a laptop/desktop processor line. Doing all of the work of developing interconnects, chiplets, chipsets, and motherboards from the ground up is more difficult and expensive than working with someone else’s pre-defined product standard and manufacturing. There’s an awful lot of work that Intel does on Core that Apple doesn’t have to do.

The question of whether it makes sense for Apple to move away from Intel CPUs is therefore partially predicated on what kind of money Apple thinks it can make as a result of doing so. Obviously capturing the value of the microprocessor can sweeten the cost structure, but capturing the value also means capturing the cost. When Apple was a non-x86 shop, its market share was significantly smaller than it is today, and the company gained some market share immediately after switching to x86. It is impossible to tell whether it gained that share because its software compatibility was now much improved or because many of its systems, especially laptops, were now far more competitive with their Windows counterparts.

Apple has to consider that it will lose at least some customers if it moves away from x86 compatibility again, either because of software compatibility or because its new chips may not offer a performance improvement in specific workloads relative to Intel. The most valuable CPUs, the ones powering the Mac Pro, are also the most expensive to design and build. If Apple doesn’t think it can command the price premiums that Xeon does, it might hold off on introducing CPUs in these segments until it believes it can. Unlike 2005, when IBM couldn’t produce a G5 that fit into a laptop, Apple isn’t under that kind of pressure in any market segment today.

I think Apple’s CPUs have evolved enough to make a jump towards ARM and away from x86 plausible in a way it wasn’t back in 2014, but there are still significant questions to be answered about where Apple would sell the part and whether it would attempt to replace x86 in all products or only in specific mobile SKUs. And, honestly, I think there’s a version of this story where Apple ultimately continues to work with Intel or AMD long into the future, having decided to deploy its own ARM IP strategically across the Mac line, or in secondary roles similar to how the T2 chip is used.
