Intel Is Suddenly Very Concerned With ‘Real-World’ Benchmarking


Since at least Computex, Intel has been raising concerns with reviewers about the types of tests we run, which applications reviewers tend to use, and whether those tests are capturing ‘real-world’ performance. Specifically, Intel feels that far too much emphasis is put on tests like Cinebench, while the applications that people actually use are practically ignored.

Let’s get a few things out of the way up-front.

Every company has benchmarks that it prefers and benchmarks that it dislikes. The fact that some tests run better on AMD versus Intel, or on Nvidia versus AMD, is not, in and of itself, evidence that the benchmark has been deliberately designed to favor one company or the other. Companies tend to raise concerns about which benchmarks reviewers are using when they are facing increased competitive pressure in the market. Those of you who think Intel is raising questions about the tests we reviewers collectively use partly because it’s losing in a lot of those tests are not wrong. But just because a company has self-interested reasons to be raising questions doesn’t automatically mean that the company is wrong, either. And since I don’t spend dozens of hours and occasional all-nighters testing hardware to give people a false idea of how it will perform, I’m always willing to revisit my own conclusions.

What follows are my own thoughts on this situation. I don’t claim to speak for any other reviewer other than myself.

[Image: Maxon-Cinema4D]

One wonders what Maxon thinks of this, given that it was a major Intel partner at SIGGRAPH.

What Does ‘Real-World’ Performance Actually Mean?

Being in favor of real-world hardware benchmarks is one of the least controversial opinions one can hold in computing. I’ve met people who didn’t necessarily care about the difference between synthetic and real-world tests, but I don’t ever recall meeting someone who thought real-world testing was irrelevant. The fact that nearly everyone agrees on this point does not mean everyone agrees on where the lines are between a real-world and a synthetic benchmark. Consider the following scenarios:

  • A developer creates a compute benchmark that tests GPU performance on both AMD and Nvidia hardware. It measures the performance both GPU families should offer in CUDA and OpenCL. Comparisons show that its results map reasonably well to applications in the field.
  • A 3D rendering company creates a standalone version of its application to compare performance across CPUs and/or GPUs. The standalone test accurately captures the basic performance of the (very expensive) 3D rendering suite in a simple, easy-to-use test.
  • A 3D rendering company creates a number of test scenes for benchmarking its full application suite. Each scene focuses on highlighting a specific technique or technology. They are collectively intended to show the performance impact of various features rather than offering a single overall render.
  • A game includes a built-in benchmark test. Instead of replicating an exact scene from in-game, the developers build a demo that tests every aspect of engine performance over a several-minute period. The test can be used to measure the performance of new features in an API like DX11.
  • A game includes a built-in benchmark test. This test is based on a single map or event in-game. It accurately measures performance in that specific map or scenario, but does not include any data on other maps or scenarios.

You’re going to have your own opinion about which of these scenarios (if any) constitute a real-world benchmark, and which do not. Let me ask you a different question — one that I genuinely believe is more important than whether a test is “real-world” or not. Which of these hypothetical benchmarks tells you something useful about the performance of the product being tested?

The answer is: “Potentially, all of them.” Which benchmark I pick is a function of the question that I’m asking. A synthetic or standalone test that functions as a good model for a different application is still accurately modeling performance in that application. It may be a far better model for real-world performance than tests performed in an application that has been heavily optimized for a specific architecture. Even though all of the tests in the optimized app are “real-world” — they reflect real workloads and tasks — the application may itself be an unrepresentative outlier.

All of the scenarios I outlined above have the potential to be good benchmarks, depending on how well they generalize to other applications. Generalization is important in reviewing. In my experience, reviewers generally try to balance applications known to favor one company with apps that run well on everyone’s hardware. Oftentimes, if a vendor-specific feature is enabled in one set of data, reviews will include a second set of data with the same feature disabled, in order to provide a more neutral comparison. Running vendor-specific flags can sometimes harm the ability of the test to speak to a wider audience.

Intel Proposes an Alternate Approach

Up until now, we’ve talked strictly about whether a test is real-world in light of whether the results generalize to other applications. There is, however, another way to frame the topic. Intel surveyed users to see which applications they actually used, then presented us with that data. It looks like this:

[Image: Intel-Real-World]

The implication here is that by testing the most common applications installed on people’s hardware, we can capture a better, more representative use-case. This feels intuitively true — but the reality is more complicated.

Just because an application is frequently used doesn’t make it an objectively good benchmark. Some applications are not particularly demanding. While there are absolutely scenarios in which measuring Chrome performance could be important, like the low-end notebook space, good reviews of these products already include these types of tests. In the high-end enthusiast context, Chrome is unlikely to be a taxing application. Are there test scenarios that can make it taxing? Yes. But those scenarios don’t reflect the way the application is most commonly used.

The real-world experience of using Chrome on a Ryzen 7 3800X is identical to using it on a Core i9-9900K. Even if this were not the case, Google makes it difficult to keep a previous version of Chrome available for continued A/B testing. Many people run extensions and ad blockers, which have their own impact on performance. Does that mean reviewers shouldn’t test Chrome? Of course it doesn’t. That’s why many laptop reviews absolutely do test Chrome, particularly in the context of browser-based battery life, where Chrome, Firefox, and Edge are known to produce different results. Fit the benchmark to the situation.

There was a time when I spent far more effort testing many of the applications on this list than I do now. When I began my career, most benchmark suites focused on office applications and basic 2D graphics tests. I remember when swapping out someone’s GPU could meaningfully improve 2D picture quality and Windows’ UI responsiveness, even without upgrading their monitor. When I wrote for Ars Technica, I wrote comparisons of CPU usage during HD content decoding, because at the time, there were meaningful differences to be found. If you think back to when Atom netbooks debuted, many reviews focused on issues like UI responsiveness with an Nvidia Ion GPU solution and compared it with Intel’s integrated graphics. Why? Because Ion made a noticeable difference to overall UI performance. Reviewers don’t ignore these issues. Publications tend to return to them when meaningful differentiation exists.

I do not pick review benchmarks solely because the application is popular, though popularity may figure into the final decision. The goal, in a general review, is to pick tests that will generalize well to other applications. The fact that a person has Steam or Battle.net installed tells me nothing. Is that person playing Overwatch or WoW Classic? Are they playing Minecraft or No Man’s Sky? Do they choose MMORPGs or FPS-type games, or are they just stalled out in Goat Simulator 2017? Are they actually playing any games at all? I can’t know without more data.

The applications on this list that show meaningful performance differences in common tasks are typically tested already. Companies like Puget Systems regularly publish performance comparisons in the Adobe suite. In some cases, the reason applications aren’t tested more often is that there have been longstanding concerns about the reliability and accuracy of the benchmark suite that most commonly includes them.

I’m always interested in better methods of measuring PC performance. Intel absolutely has a part to play in that process — the company has been helpful on many occasions when it comes to finding ways to highlight new features or troubleshoot issues. But the only way to find meaningful differences in hardware is to find meaningful differences in tests. Again, generally speaking, you’ll see reviewers check laptops for gaps in battery life and power consumption as well as performance. In GPUs, we look for differences in frame time and framerate. Because none of us can run every workload, we look for applications with generalizable results. At ET, I run multiple rendering applications specifically to ensure we aren’t favoring any single vendor or solution. That’s why I test Cinebench, Blender, Maxwell Render, and Corona Render. When it comes to media encoding, Handbrake is virtually everyone’s go-to solution — but we test both H.264 and H.265 encoding to ensure we capture multiple scenarios. When tests prove to be inaccurate or insufficient to capture the data I need, I use different tests.
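To make the idea of generalizable results concrete, here is one way a reviewer could aggregate scores across several rendering applications so that no single app dominates the conclusion. This is a hypothetical sketch with invented numbers, not a description of any outlet’s actual process:

```python
# Hypothetical sketch: normalize each benchmark against a baseline system and
# combine the ratios with a geometric mean so no single application dominates
# the summary figure. Scores below are invented for illustration.
from math import prod

def geomean(values):
    return prod(values) ** (1.0 / len(values))

baseline = {"Cinebench": 100, "Blender": 100, "Maxwell Render": 100, "Corona Render": 100}
review   = {"Cinebench": 132, "Blender": 118, "Maxwell Render": 112, "Corona Render": 121}

ratios = {app: review[app] / baseline[app] for app in baseline}
summary = geomean(list(ratios.values()))
print(f"Relative performance (geometric mean): {summary:.2f}x")

# Apps that diverge sharply from the aggregate are candidates for an
# "optimized outlier" discussion rather than silent inclusion in the average.
outliers = [app for app, r in ratios.items() if abs(r - summary) > 0.10]
print("Possible outliers:", outliers)
```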

The False Dichotomy

The much-argued difference between “synthetic” and “real-world” benchmarks is a poor framing of the issue. What matters, in the end, is whether the benchmark data presented by the reviewer collectively offers an accurate view of expected device performance. As Rob Williams details at Techgage, Intel has been only too happy to use Maxon’s Cinebench as a benchmark at times when its own CPU cores were dominating performance. In a recent post on Medium, Intel’s Ryan Shrout wrote:

Today at IFA we held an event for attending members of the media and analyst community on a topic that’s very near and dear to our heart — Real World Performance. We’ve been holding these events for a few months now beginning at Computex and then at E3, and we’ve learned a lot along the way. The process has reinforced our opinion on synthetic benchmarks: they provide value if you want a quick and narrow perspective on performance. We still use them internally and know many of you do as well, but the reality is they are increasingly inaccurate in assessing real-world performance for the user, regardless of the product segment in question.

Sounds damning. He follows it up with this slide:

[Image: Intel-OEM-Optimization]

To demonstrate the supposed inferiority of synthetic tests, Intel shows 14 separate results, 10 of which are drawn from 3DMark and PCMark, both of which are generally considered synthetic benchmarks. When the company presents data on its own performance versus ARM, it pulls the same trick again:

[Image: Intel-versus-ARM]

Why is Intel referring back to synthetic applications in the same blog post in which it specifically calls them out as a poor choice compared with supposedly superior “real-world” tests? Maybe it’s because Intel makes its benchmark choices just like we reviewers do — with an eye towards results that are representative and reproducible, using affordable tests, with good feature sets that don’t crash or fail for unknown reasons after install. Maybe Intel also has trouble keeping up with the sheer flood of software released on an ongoing basis and picks tests to represent its products that it can depend on. Maybe it wants to continue to develop its own synthetic benchmarks like WebXPRT without throwing that entire effort under a bus, even though it’s simultaneously trying to imply that the benchmarks AMD has relied on are inaccurate.

And maybe it’s because the entire synthetic-versus-real-world framing is bad to start with.

Survey: Many AMD Ryzen 3000 CPUs Don’t Hit Full Boost Clock


Overclocker Der8auer has published the results of a survey of more than 3,000 owners of AMD’s 7nm Ryzen CPUs, which went on sale in July. Last month, reports surfaced that Ryzen 3000 CPUs weren’t hitting their boost clocks as consistently as some enthusiasts expected. Now, we have some data on exactly what those figures look like.

There are, however, two confounding variables. First, Der8auer had no way to sort out which AMD users had installed Windows 1903 and were using the most recent version of the company’s chipset drivers. AMD recommends both to ensure maximum performance and desired boost behavior. Der8auer acknowledges this but believes the onus is on AMD to communicate with end-users regarding the need to use certain Windows versions to achieve maximum performance.

Second, there’s the fact that surveys like this tend to be self-selecting. It’s possible that end-users who aren’t seeing the boost behavior they expected were more likely to respond to the survey. Der8auer acknowledges this as well, calling it a very valid point, but believes that his overall viewing community is generally pro-AMD and favorably inclined towards the smaller CPU manufacturer. The full video can be seen below; we’ve excerpted some of the graphs for discussion.

Der8auer went over the survey data thoroughly in order to throw out results that didn’t make sense or were obviously submitted in bad faith. He compiled data on the 3600, 3600X, 3700X, 3800X, and 3900X. Clock distributions were measured at up to two standard deviations from the mean. Maximum boost clock was tested using Cinebench R15’s single-threaded test, as per AMD’s recommendation.
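As a rough illustration of that kind of filtering and summary (the helper and sample values below are invented for illustration; Der8auer’s actual methodology may differ), a per-SKU summary might look like this:

```python
# Hypothetical sketch of summarizing survey results for one SKU: keep samples
# within two standard deviations of the mean, then report what share of the
# remaining chips reached the advertised boost clock. Values are invented.
from statistics import mean, stdev

def summarize(reported_mhz, advertised_mhz):
    mu, sigma = mean(reported_mhz), stdev(reported_mhz)
    kept = [x for x in reported_mhz if abs(x - mu) <= 2 * sigma]
    hit = sum(1 for x in kept if x >= advertised_mhz)
    return mu, sigma, 100.0 * hit / len(kept)

sample = [4175, 4200, 4150, 4200, 4125, 4200, 4100, 4200]
avg, sd, pct = summarize(sample, advertised_mhz=4200)
print(f"mean {avg:.0f} MHz, stdev {sd:.0f} MHz, {pct:.1f}% at boost clock")
```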

[Image: Der8auer-3600]

Data and chart by Der8auer.

In the case of the Ryzen 5 3600, 49.8 percent of CPUs hit their boost clock of 4.2GHz, as shown above. As clocks rise, however, the number of CPUs that can hit their boost clock drops. Just 9.8 percent of 3600X CPUs hit their 4.4GHz boost clock. The 3700X’s chart is shown below for comparison:

Data and chart by Der8auer.

The majority of 3700X CPUs are capable of hitting 4.375GHz, but the 4.4GHz boost clock is a tougher leap. The 3800X does improve on these figures, with 26.7 percent of CPUs hitting boost clock. This seems to mirror what we’ve heard from other sources, which have implied that the 3800X is a better overclocker than the 3700X. The 3900X struggles more, however, with just 5.6 percent of CPUs hitting their full boost clock.

We can assume that at least some of the people who participated in this study did not have Windows 10 1903 or updated AMD drivers installed, but AMD users had the most reason to install those updates in the first place, which should help limit the impact of the confounding variable.

The Ambiguous Meaning of ‘Up To’

Following his analysis of the results, Der8auer makes it clear that he still recommends AMD’s 7nm Ryzen CPUs with comments like “I absolutely recommend buying these CPUs.” There’s no ambiguity in his statements and none in our performance review. AMD’s 7nm Ryzen CPUs are excellent. But an excellent product can still have issues that need to be discussed. So let’s talk about CPU clocks.

The entire reason that Intel (which debuted the capability) launched Turbo Boost as a product feature was to give itself leeway when it came to CPU clocks. At first, CPUs with “Turbo Boost” simply appeared to treat the higher, optional frequency as their effective target frequency even when under 100 percent load. This is no longer true, for multiple reasons. CPUs from AMD and Intel will sometimes run at lower clocks depending on the mix of AVX instructions. Top-end CPUs like the Core i9-9900K may throttle back substantially when under full load for a sustained period of time (20-30 seconds) if the motherboard is configured to use Intel default power settings.

In other realms, like smartphones, it is not necessarily unusual for a device to never run at maximum clock. Smartphone vendors don’t advertise base clocks at all and don’t provide any information about sustained SoC clock under load. Oftentimes it is left to reviewers to typify device behavior based on post-launch analysis. But CPUs from both Intel and AMD have typically been viewed as at least theoretically capable of hitting their boost clocks in some circumstances.

The reason I say that view is “theoretical” is that we see a lot of variation in CPU behavior, even over the course of a single review cycle. It’s common for UEFI updates to arrive after our testing has already begun. Oftentimes, those updated UEFIs specifically fix issues with clocking. We correspond with various motherboard manufacturers to tell them what we’ve observed and we update platforms throughout the review to make certain power behavior is appropriate and that boards are working as intended. When checking overall performance, however, we tend to compare benchmark results against manufacturer expectations as opposed to strictly focusing on clock speed (performance, after all, is what we are attempting to measure). If performance is oddly low or high, CPU and RAM clocks are the first place to check.

It’s not unusual, however, to be plus-or-minus 2-3 percent relative to either the manufacturer or our fellow reviewers, and occasional excursions of 5-7 percent may not be extraordinary if the benchmark is known for producing a wider spread of scores. Some tests are also more sensitive than others to RAM timing, SSD speed, or a host of other factors.

Now, consider Der8auer’s data on the Ryzen 9 3900X:

[Image: Der8auer-3900X]

Image and data by Der8auer.

Just 5.6 percent of the CPUs in the batch are capable of hitting 4.6GHz. But a CPU clocked at 4.6GHz is just 2 percent faster than a CPU clocking in at 4.5GHz. A 2 percent gap between two products is close enough that we call it an effective tie. If you were to evaluate CPUs strictly on the basis of performance, with a reasonable margin of, say, 3 percent, you’d wind up with an “acceptable” clock range of 4,462MHz – 4,738MHz (assuming a 1:1 relationship between CPU clock and performance). And if you allow for that variance in the graphs above, a significantly larger percentage — though not all — of AMD CPUs “qualify” as effectively reaching their top clock.
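A minimal sketch of that arithmetic, using only the figures from the paragraph above:

```python
# Sketch of the clock-window arithmetic above, assuming a 1:1 relationship
# between CPU clock and performance and a +/- 3 percent performance margin.
boost_mhz = 4600   # advertised "up to" clock for the Ryzen 9 3900X
margin = 0.03      # performance margin treated as an effective tie

low, high = boost_mhz * (1 - margin), boost_mhz * (1 + margin)
print(f"Acceptable clock range: {low:.0f} - {high:.0f} MHz")   # 4462 - 4738

# A chip peaking at 4500 MHz falls ~2.2 percent short of the advertised clock:
print(f"4500 MHz shortfall: {(1 - 4500 / boost_mhz) * 100:.1f} percent")
```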

On the other hand, 4.5GHz or below is factually not 4.6GHz. There are at least two meaningfully different ways to interpret the meaning of “up to” in this context. Does “up to X.XGHz” mean that the CPU will hit its boost clock some of the time, under certain circumstances? Or does it mean that certain CPUs will be able to hit these boost frequencies, but that you won’t know if you have one or not? And how much does that distinction matter, if the overall performance of the part matches the expected performance that the end-user will receive?

Keep in mind that one thing these results don’t tell us is what overall performance looks like across the entire spread of Ryzen 3000 CPUs. Simply knowing the highest boost clock that the CPU hits doesn’t show us how long it sustained that clock. A CPU that holds a steady clock of 4.5GHz from start to finish will outperform a CPU that bursts to 4.6GHz for one second and drops to 4.4GHz to finish the workload. Both of these behaviors are possible under an “up to” model.
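A toy model of that comparison, with an invented workload size purely for illustration, shows why peak clock alone doesn’t determine which chip finishes first:

```python
# Toy model: a fixed workload of 90 billion cycles run under the two boost
# behaviors described above. The workload size is invented for illustration.
WORK_CYCLES = 90e9

def finish_time(phases):
    """phases: list of (clock_in_ghz, seconds); returns time to finish the work."""
    done, elapsed = 0.0, 0.0
    for ghz, seconds in phases:
        step = ghz * 1e9 * seconds
        if done + step >= WORK_CYCLES:
            return elapsed + (WORK_CYCLES - done) / (ghz * 1e9)
        done, elapsed = done + step, elapsed + seconds
    return float("inf")  # workload not finished within the listed phases

steady = [(4.5, 60)]            # holds 4.5GHz from start to finish
bursty = [(4.6, 1), (4.4, 60)]  # bursts to 4.6GHz for a second, then 4.4GHz

print(f"Steady 4.5GHz:             {finish_time(steady):.2f} s")  # 20.00 s
print(f"Burst 4.6GHz, then 4.4GHz: {finish_time(bursty):.2f} s")  # 20.41 s
```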

Manufacturers and Consumers May See This Issue Differently

While I don’t want to rain on his parade or upcoming article, we’ve spent the last few weeks at ET troubleshooting a laptop that my colleague David Cardinal recently bought. Specifically, we’ve been trying to understand its behavior under load when both the CPU and GPU are simultaneously in use. Without giving anything away about that upcoming story, let me say this: The process has been a journey into just how complicated thermal management between the various components has become.

Manufacturers, I think, increasingly look at power consumption and clock speed as a balancing act in which performance and power are allocated to the components where they’re needed and throttled back everywhere else. Increased variability is the order of the day. What I suspect AMD has done, in this case, is set a performance standard that it expects its CPUs to deliver rather than a specific clock frequency target. If I had to guess at why the company has done this, I would guess that it’s because of the intrinsic difficulties of maintaining high clock speeds at lower process nodes. AMD likely chose to push the envelope on its clock targets because it made the CPUs compare better against their Intel equivalents as far as maximum clock speeds were concerned. Any negative response from critics would be muted by the fact that these new CPUs deliver marked benefits over both previous-generation Ryzen CPUs and their Intel equivalents at equal price points.

Was that the right call? I’m not sure. This is a situation where I genuinely see both sides of the issue. The Ryzen 3000 family delivers excellent performance. But even after allowing for variation caused by Windows version, driver updates, or UEFI issues on the part of the manufacturer, we don’t see as many AMD CPUs hitting their maximum boost clocks as we would expect, and the higher-end CPUs with higher boost clocks have more issues than lower-end chips with lower clocks. AMD’s claims of getting more frequency out of TSMC 7nm as compared with GF 12/14nm seem a bit suspect at this point. The company absolutely delivered the performance gains we wanted, and the power improvements on the X470 chipset are also very good, but the clocking situation was not detailed the way it should have been at launch.

There are rumors that AMD changed boost behavior with recent AGESA versions. Asus employee Shamino wrote:

i have not tested a newer version of AGESA that changes the current state of 1003 boost, not even 1004. if i do know of changes, i will specifically state this. They were being too aggressive with the boost previously, the current boost behavior is more in line with their confidence in long term reliability and i have not heard of any changes to this stance, tho i have heard of a ‘more customizable’ version in the future.

I have no specific knowledge of this situation, but this would surprise me. First, reliability models are typically hammered out long before production. Companies don’t make major changes post-launch save in exceptional circumstances, because there is no way to ensure that the updated firmware will reach the products that it needs to reach. When this happens, it’s major news. Remember when AMD had a TLB bug in Phenom? Second, AMD’s use of Adaptive Frequency and Voltage Scaling is specifically designed to adjust the CPU voltage internally to ensure clock targets are hit, limiting the impact of variability and keeping the CPU inside the sweet spot for clock.

I’m not saying that AMD would never make an adjustment to AGESA that impacted clocking. But the idea that the company discovered a critical reliability issue that required it to make a subtle change that reduced clock by a mere handful of MHz in order to protect long-term reliability doesn’t immediately square with my understanding of how CPUs are designed, binned and tested. We have reached out to AMD for additional information.

I’m still confident and comfortable recommending the Ryzen 3000 family because I’ve spent a significant amount of time with these chips and seen how fast they are. But AMD’s “up to” boost clocks are also more tenuous than we initially knew. It doesn’t change our expectation of the part’s overall performance, but the company appears to have decided to interpret “up to” differently this cycle than in previous product launches. That shift should have been communicated. Going forward, we will examine both Intel and AMD clock behavior more closely as a component of our review coverage.

Welcome to the Second Golden Age of AMD


On Wednesday, August 7, AMD launched the 7nm refresh of its Epyc CPU family. These new CPUs don’t just one-up Intel in a particular category; they deliver enormous improvements in every category. AMD has cut its per-core pricing, increased IPC, and promises to deliver far more CPU cores per socket than an equivalent Intel part.

There’s only been one other time that AMD came close to beating Intel so decisively — the introduction of dual-core Opteron and Athlon 64 X2 in 2005. Epyc’s launch this week feels bigger. In 2005, AMD’s dual cores matched Intel on core count, outperformed Intel clock-for-clock and core-for-core, and were quite expensive. This time, AMD is going for the trifecta, with higher performance, more cores, and lower per-core pricing. It’s the most serious assault on Intel’s high-end Xeon market that the company has ever launched.

Industry analysts have already predicted that AMD’s server market share could double within the next 12 months, hitting 10 percent by Q2 2020. Achieving larger share in the data center market is a critical goal for AMD. A higher share of the enterprise and data center market won’t just increase AMD’s revenue; it’ll help stabilize the company’s financial performance. One of AMD’s critical weaknesses for the last two decades has been its reliance on low-end PCs and retail channel sales. Both of these markets tend to be sensitive to recessions. The low-end PC market also offers the least revenue per socket and the smallest margins. Enterprise business cycles are less impacted by downturns. AMD briefly achieved its goal of substantial enterprise market share in 2005 – 2006, when its server market share broke 20 percent.

Enthusiasts like to focus on AMD’s desktop performance, but outside of gaming, overall PC sales are declining. Growth in narrow categories like 2-in-1s has not been sufficient to offset the general sales decline. While no one expects the PC market to fail, it’s clear that the 2011 downturn was not a blip. It still makes sense for AMD to fight to expand its share of the desktop and mobile markets, but it makes even more sense to fight for a share of the server space, where revenue and unit shipments have both grown over the past 8 years. 2019 may be a down year for server sales, but the larger trend towards moving workloads into the cloud shows no signs of slowing down.

Why Rome is a Threat to Intel

In our discussions of Rome, we’ve focused primarily on the Epyc 7742. This graph, from ServeTheHome, shows Epyc versus Xeon performance across more SKUs. Take a look down the stack:

[Image: AMD-EPYC-7002-Linux-Kernel-Compile-Benchmark-Result]

Data and graph by ServeTheHome

A pair of AMD Epyc 7742s is $13,900. A pair of 7502s (32C/64T, 2.5GHz base, 3.35GHz boost, $2,600 each) is $5,200. The Intel Xeon Platinum 8260 is a $4,700 CPU, but there are four of them in the highest-scoring system, for a total cost of $18,800. $13,900 worth of AMD CPUs buys you ~1.19x the performance of $18,800 worth of Intel CPUs. The comparison doesn’t get any better for Intel as we drop down the stack. Four E7-8890 v4s would run nearly $30,000 at list price. A pair of Platinum 8280s is $20,000. The 8276L is a $16,600 CPU at list price.
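Sticking strictly to the list prices and the ~1.19x relative performance figure above, the performance-per-dollar gap works out roughly like this (a back-of-the-envelope sketch, not a benchmark result):

```python
# Back-of-the-envelope math using only the list prices and the ~1.19x relative
# performance figure quoted above. Performance is expressed relative to the
# quad Xeon Platinum 8260 configuration (defined as 1.0).
amd_price, amd_rel_perf = 13_900, 1.19      # pair of Epyc 7742s
intel_price, intel_rel_perf = 18_800, 1.00  # four Xeon Platinum 8260s at $4,700 each

amd_ppd = amd_rel_perf / amd_price
intel_ppd = intel_rel_perf / intel_price
print(f"AMD performance-per-dollar advantage: {amd_ppd / intel_ppd:.2f}x")  # ~1.61x
```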

But it’s not just price, or even price/performance, where AMD has an advantage. Intel heavily subdivides its product features and charges considerably more for them. Consider, for example, the price difference between the Xeon Platinum 8276, 8276M, and 8276L. These three CPUs are identical, save for the maximum amount of RAM each supports. The pricing, however, is anything but.

[Image: Xeon-Comparison]

Oh, you need 4.5TB of RAM? That’ll be an extra $8K.

In this case, “Maximum memory” includes Intel Optane. The 4.5TB RAM capability assumes 3TB of Optane installed alongside 1.5TB of RAM. For comparison, all 7nm Rome CPUs offer support for up to 4TB of RAM. It’s a standard, baked-in feature on all CPUs, and it simplifies product purchases and future planning. AMD isn’t just offering chips at lower prices, it’s taking a bat to Intel’s entire market segmentation method. Good luck justifying an $8000 price increase for additional RAM support when AMD is willing to sell you 4TB worth of addressable capacity at base price.

One of AMD’s talking points with Epyc is how it offers the benefits of a 2S system in a 1S configuration. This chart from ServeTheHome lays out the differences nicely:

[Image: AMD-EPYC-7002-v-2nd-Gen-Intel-Xeon-Scalable-Top-Line-Comparison]

Image by ServeTheHome

Part of AMD’s advantage here is that it can hit multiple Intel weaknesses simultaneously. Need lots of PCIe lanes? AMD is better. Want PCIe 4.0? AMD is better. If your workloads scale optimally with cores, no one is selling more cores per socket than AMD. Intel can still claim a few advantages — it offers much larger unified L3 caches than AMD (each individual AMD L3 cache is effectively 16MB, with a 4MB slice per core). But those advantages are going to be limited to specific applications that respond to them. Intel wants vendors to invest in building support for its Optane DC Persistent Memory, but it isn’t clear how many are doing so. The current rock-bottom prices for both NAND and DRAM have made it much harder for Optane to compete in-market.

The move to 7nm has given AMD an advantage in power consumption as well, particularly when you consider server retirements. STH reports single-threaded power consumption on a Xeon Platinum 8180 system at ~430W (wall power), compared to ~340W of wall power for the AMD Epyc 7742 system. What STH notes, however, is that the high core count on AMD’s newest CPUs will allow customers to retire 6-8 sockets worth of 2017 Intel Xeons (60-80 cores) and consolidate those workloads into a single AMD Epyc system. The power savings from retiring 3-4 dual-socket servers are far larger than the ~90W difference between the two systems.
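The consolidation arithmetic, using the wall-power figures quoted above and assuming purely for illustration that each retired dual-socket Xeon system draws roughly what STH measured for the 8180 box:

```python
# Rough consolidation math using the wall-power figures above. Treating every
# retired dual-socket Xeon system as drawing ~430W is an illustrative
# assumption; actual draw depends on the systems being replaced.
EPYC_SYSTEM_W = 340
XEON_SYSTEM_W = 430

one_for_one = XEON_SYSTEM_W - EPYC_SYSTEM_W  # ~90W for a straight swap
for retired in (3, 4):
    net_savings = retired * XEON_SYSTEM_W - EPYC_SYSTEM_W
    print(f"Retire {retired} Xeon systems: ~{net_savings}W saved "
          f"(vs ~{one_for_one}W for a one-to-one swap)")
```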

Features like DL Boost may give Intel a performance kick in AI and machine learning workloads, but the company is going to be fighting a decidedly uphill battle, and thus far the data we’ve seen suggests these features help Intel match AMD rather than beat it.

How Much Do Xeons Really Cost?

The list prices we’ve been quoting for this story are the formal prices that Intel publishes for Xeon CPUs in 1K units. They are also widely known to be inaccurate, at least as far as the major OEMs are concerned. We don’t know what Dell, HPE, and other vendors actually pay for Xeon CPUs, but we do know it’s often much less than list price, which is typically paid only by the retail channel.

The gap between Intel list prices and actual prices may explain why Threadripper hasn’t had much market penetration. Despite the fact that Threadripper CPUs have offered vastly more cores and higher performance per dollar for two years now, retailers that share sales information, like Mindfactory, report very low sales of both Threadripper and Skylake-X. Intel, however, has also shown no particular interest in slashing Core X prices. It continues to position the 10-core Core i9-9820X as appropriate competition for chips like the Threadripper 2950X, despite AMD’s superior performance in that match-up. This strongly implies that Intel is having no particular trouble selling 10-core CPUs to the OEM partners that want them despite Threadripper’s superior price/performance ratio, and that AMD’s share of the workstation market remains quite limited.

While Intel has trimmed its HEDT prices (the 10-core Core i7-6950X was $1723 in 2016, compared to $900 for a Core i9-9820X today), it has never attempted to price/performance match against Threadripper. If that bulwark is going to crumble, Rome will be the CPU that does it. Ryzen and Threadripper will be viewed as more credible workstation CPUs if Epyc starts chewing into the server market.

Intel is Playing AMD’s Game Now

Intel can cut its prices to respond to AMD in the short-term. Long-term, it’s going to have to challenge AMD directly. That’s going to mean delivering more cores at lower prices, with higher amounts of memory supported per socket. Cooper Lake, which is built on 14nm and includes additional support for new AI-focused AVX-512 instructions, will arrive in the first half of next year. That chip will help Intel focus on some of the markets it wants to compete in, but it won’t change the core count differential between the two companies. Similarly, Intel may have trouble putting a $3000 – $7000 premium on support for 2TB – 4.5TB of RAM given that AMD is willing to support up to 4TB of memory on every CPU socket.

We don’t know yet if Intel will increase core counts with Ice Lake servers, or what sorts of designs it will bring to market, but ICL in servers is at least a year away. By the time ICL servers are ready to ship, AMD’s 7nm EUV designs may be ready as well. Having kicked off the mother of all refresh cycles with Rome, AMD’s challenge over the next 12 – 24 months will be demonstrating ongoing smooth update cadences and continued performance improvements. If it does, it has a genuine shot at building the kind of stable enterprise market it’s desired for decades.

Don’t Get Cocky

When AMD launched dual-core Opteron and its consumer equivalent, the Athlon 64 X2, there was a definite sense that the company had finally arrived. Just over a year later, Intel launched the Core 2 Duo. AMD spent the next 11 years wandering in the proverbial wilderness. Later, executives would admit that the company had taken its eye off the ball and become distracted with the ATI acquisition. A string of problems followed.

The simplistic assumption that the P4 Prescott was a disaster Intel couldn’t recover from proved incorrect. Historically, attacking Intel has often proven akin to hitting a rubber wall with a Sledgehammer (pun intended). Deforming the wall is comparatively easy. Destroying it altogether is a far more difficult task. AMD has perhaps the best opportunity to take market share in the enterprise that it has ever had with 7nm Epyc, but building server share is a slow and careful process, not a wind sprint. If AMD wants to keep what it’s building this time around, it needs to play its cards differently than it did in 2005 – 2006.

But with that said, I don’t use phrases like “golden age” lightly. I’m using it now. While I make no projections on how long it will last, 7nm Epyc’s debut has made it official, as far as I’m concerned: Welcome to the second golden age of AMD.

AMD’s Ryzen 3000 Family is Dominating Sales at European Retailer



Mindfactory, a major German computer hardware retailer, has published new sales data for the month of July. AMD has had an extremely good month, even by the standards of previous Ryzen launches.

Before we dive into the numbers, the usual caveats: These figures reflect data from a single German company, not the entire retail channel. Most companies don’t publish data like this. Data from Amazon and Newegg shows somewhat different splits among the best-selling CPUs. Amazon has AMD occupying 12 of the Top 20 best-selling chips, but only three of the parts are based on Matisse, none higher than 6th place. Newegg has AMD holding 11 of the Top 20 spots, but the first Matisse CPU is in 12th place — the Ryzen 5 3600.

This is not to imply that the Mindfactory data is wrong, but it should not be read as speaking to the entire retail market.

Reddit user Ingebor has published Mindfactory sales data for the month of July. First up, unit sales:

That’s a very strong launch month for AMD, considering that the new CPUs didn’t even go on sale until July 7. While AMD’s market share grew 11 percentage points, it’s the increase in total processor shipments that reflects strong demand for the new parts. In June, Mindfactory sold ~9,000 – 9,500 AMD CPUs and ~4,000 – 4,500 Intel chips. In July, it sold ~18,500 AMD CPUs and just shy of 5,000 Intel CPUs. It looks as though Intel demand was driven by the 9900K, 9700K, and 9600K, implying that at least some Intel fans delayed purchases to see if AMD would bring something to the table that they wanted, then pulled the trigger on upgrades of their own. A great many shoppers, however, were clearly looking for something from Team Red. It’s good to see the 3900X on this list — the chip may be difficult to find right now, but this is evidence that parts are making it to market.

The previous chart focused on unit shipments; this one captures revenue. It is remarkable how small the gap is between Intel’s unit share (21 percent) and its revenue share (25 percent). Typically, Intel’s revenue share is much larger — compare the previous month, when Intel accounted for 32 percent of unit shipments but 48 percent of revenue, for an example of how this trend usually moves. For AMD to be doing this much better in overall revenue share, its ASPs must have increased dramatically. Looking to the next chart, we see…

Exactly that. The last time we discussed Mindfactory data, the company was reporting an average selling price (ASP) for AMD hardware of 178€. Today, AMD’s ASP stands at 238.89€. That’s an increase of 1.34x over April. Mindfactory reports a 1.5x increase over June. This kind of improvement is exactly what AMD was hoping for when it focused on raising ASPs and cutting costs with 7nm in order to compete more effectively with Intel.
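Two quick sanity checks against the figures quoted above (derived from the percentages in the text; the implied ASP ratio assumes the unit-share and revenue-share numbers refer to the same pool of CPU sales):

```python
# Quick checks against the Mindfactory figures quoted above.

# 1) AMD ASP growth: 178 euros (April) -> 238.89 euros (July).
april_asp, july_asp = 178.0, 238.89
print(f"ASP increase over April: {july_asp / april_asp:.2f}x")  # ~1.34x

# 2) Implied Intel-vs-AMD ASP ratio from unit share and revenue share.
def implied_asp_ratio(intel_unit_share, intel_rev_share):
    amd_unit_share, amd_rev_share = 1 - intel_unit_share, 1 - intel_rev_share
    return (intel_rev_share / intel_unit_share) / (amd_rev_share / amd_unit_share)

print(f"June: Intel ASP ~{implied_asp_ratio(0.32, 0.48):.2f}x AMD's")  # ~1.96x
print(f"July: Intel ASP ~{implied_asp_ratio(0.21, 0.25):.2f}x AMD's")  # ~1.25x
```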

AMD’s most recent quarterly forecast doesn’t predict very strong revenue growth for the rest of the year, but it blames that weakness on a weaker-than-expected console cycle. AMD has stated that its gross margin on all 7nm products is over 50 percent. Excluding the impact of lower semicustom sales, AMD expects full-year 2019 revenue to be up 20 percent. Factor in semicustom, and total revenue growth is expected to be in the single digits.

Overall, the data suggests Ryzen is selling very well. Intel retains a bulwark among gamers who prize top-end single-threaded gaming performance above all else, but third-generation Ryzen has closed the gap in that area as well.
