Intel Core i9-9900KS Ships in Oct., Cascade Lake-X Nearly Doubles Performance Per Dollar



Intel made some product announcements at a pre-IFA event in Berlin this week, including news on the Core i9-9900KS that it announced earlier this summer and an upcoming product refresh for its Core X family. Intel has been pushed onto its proverbial heels by AMD’s 7nm onslaught, and it has yet to respond to those products in a significant way. These new parts should help do that, albeit at the high end of the market.

First, there’s the Core i9-9900KS. This CPU is a specially binned Core i9-9900K capable of hitting 5GHz on all eight cores, with a 4GHz base clock. That’s roughly a 1.1x improvement over the 9900K’s 3.6GHz base clock, but the impact of the all-core 5GHz boost is harder to estimate. A sustained all-core 5GHz clock would be substantially faster than what the Core i9-9900K we have here at ET delivers, because Intel CPUs no longer hold their full clocks under sustained load. Our Core i9-9900K will turbo up to high clocks for 20-30 seconds, depending on the workload, before falling back to speeds in the lower 4GHz range on our Asus Z390 motherboard.
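For reference, the base-clock math is simple. A minimal sketch, using the published base clocks (sustained boost behavior will vary with the motherboard and cooling):

```python
# Published base clocks in GHz; sustained boost behavior depends on board power limits and cooling.
base_9900k = 3.6   # Core i9-9900K
base_9900ks = 4.0  # Core i9-9900KS

print(f"Base clock uplift: {base_9900ks / base_9900k:.2f}x")  # ~1.11x
```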

A faster Core i9 will undoubtedly improve Intel’s positioning against the Ryzen 7 and Ryzen 9 family, but even a chip that could hold an all-core 5GHz boost won’t catch the 12-core/24-thread Ryzen 9 3900X in most multi-threaded applications that can scale up to 12 cores. The gap between the two parts is too large to be closed in such a manner.

What the 9900KS will do for Intel, however, is give it a little more room to maneuver in gaming performance, which is where the company is making its stand. On the desktop side of things, Intel is facing a genuinely tough competitive situation, and even the advent of 10-core desktop CPUs may not solve the problem.

Cascade Lake May Meaningfully Respond to Threadripper

For the past two years, AMD has hammered Intel with high-performing, (relatively) low-cost workstation processors. Even though Intel’s Skylake X CPUs have often punched above their weight class compared with the Core family, AMD’s willingness to shove tons of cores into its chips has secured it the lead in performance per dollar, as well as the absolute performance lead in many well-threaded applications.

Intel may intend to challenge this in a far more serious way this year. The company showed the following slide at IFA:

The implication of this slide is that Intel will launch new Cascade Lake-X CPUs at substantially lower per-core prices than it has previously offered. We say “implication,” however, because technically this is a slide of performance per dollar, not price. Imagine two hypothetical CPUs: one priced at $1,000 with 1x performance, the other priced at $1,500 with 2x the performance of the first. The second chip costs 1.5x as much but offers 1.33x the performance per dollar.
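To make the distinction concrete, here’s the arithmetic from that hypothetical as a quick sketch (the prices and performance figures are purely illustrative):

```python
# Hypothetical CPUs from the example above -- illustrative numbers only.
chip_a = {"price": 1000, "perf": 1.0}
chip_b = {"price": 1500, "perf": 2.0}

price_ratio = chip_b["price"] / chip_a["price"]
perf_per_dollar_ratio = (chip_b["perf"] / chip_b["price"]) / (chip_a["perf"] / chip_a["price"])

print(f"Chip B costs {price_ratio:.2f}x as much")                     # 1.50x
print(f"Chip B offers {perf_per_dollar_ratio:.2f}x the perf/dollar")  # 1.33x
```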

With AMD potentially eyeing Threadripper CPUs with up to 64 cores, however, Intel may not feel it has a choice. We haven’t heard from AMD on this point yet, so much is up in the air. There seems to be a battle brewing in these segments — hopefully, Intel will bring a much more price-competitive series of parts to market.


Intel Is Suddenly Very Concerned With ‘Real-World’ Benchmarking


Since at least Computex, Intel has been raising concerns with reviewers about the types of tests we run, which applications reviewers tend to use, and whether those tests are capturing ‘real-world’ performance. Specifically, Intel feels that far too much emphasis is put on tests like Cinebench, while the applications that people actually use are practically ignored.

Let’s get a few things out of the way up-front.

Every company has benchmarks that it prefers and benchmarks that it dislikes. The fact that some tests run better on AMD versus Intel, or on Nvidia versus AMD, is not, in and of itself, evidence that the benchmark has been deliberately designed to favor one company or the other. Companies tend to raise concerns about which benchmarks reviewers are using when they are facing increased competitive pressure in the market. Those of you who think Intel is raising questions about the tests we reviewers collectively use partly because it’s losing in a lot of those tests are not wrong. But just because a company has self-interested reasons to be raising questions doesn’t automatically mean that the company is wrong, either. And since I don’t spend dozens of hours and occasional all-nighters testing hardware to give people a false idea of how it will perform, I’m always willing to revisit my own conclusions.

What follows are my own thoughts on this situation. I don’t claim to speak for any reviewer other than myself.

[Image: Maxon Cinema4D]

One wonders what Maxon thinks of this, given that it was a major Intel partner at SIGGRAPH.

What Does ‘Real-World’ Performance Actually Mean?

Being in favor of real-world hardware benchmarks is one of the least controversial opinions one can hold in computing. I’ve met people who didn’t necessarily care about the difference between synthetic and real-world tests, but I don’t ever recall meeting someone who thought real-world testing was irrelevant. The fact that nearly everyone agrees on this point does not mean everyone agrees on where the lines are between a real-world and a synthetic benchmark. Consider the following scenarios:

  • A developer creates a compute benchmark that tests GPU performance on both AMD and Nvidia hardware. It measures the performance both GPU families should offer in CUDA and OpenCL. Comparisons show that its results map reasonably well to applications in the field.
  • A 3D rendering company creates a standalone version of its application to compare performance across CPUs and/or GPUs. The standalone test accurately captures the basic performance of the (very expensive) 3D rendering suite in a simple, easy-to-use test.
  • A 3D rendering company creates a number of test scenes for benchmarking its full application suite. Each scene focuses on highlighting a specific technique or technology. They are collectively intended to show the performance impact of various features rather than offering a single overall render.
  • A game includes a built-in benchmark test. Instead of replicating an exact scene from in-game, the developers build a demo that tests every aspect of engine performance over a several-minute period. The test can be used to measure the performance of new features in an API like DX11.
  • A game includes a built-in benchmark test. This test is based on a single map or event in-game. It accurately measures performance in that specific map or scenario, but does not include any data on other maps or scenarios.

You’re going to have your own opinion about which of these scenarios (if any) constitute a real-world benchmark, and which do not. Let me ask you a different question — one that I genuinely believe is more important than whether a test is “real-world” or not. Which of these hypothetical benchmarks tells you something useful about the performance of the product being tested?

The answer is: “Potentially, all of them.” Which benchmark I pick is a function of the question that I’m asking. A synthetic or standalone test that functions as a good model for a different application is still accurately modeling performance in that application. It may be a far better model for real-world performance than tests performed in an application that has been heavily optimized for a specific architecture. Even though all of the tests in the optimized app are “real-world” — they reflect real workloads and tasks — the application may itself be an unrepresentative outlier.

All of the scenarios I outlined above have the potential to be good benchmarks, depending on how well they generalize to other applications. Generalization is important in reviewing. In my experience, reviewers generally try to balance applications known to favor one company with apps that run well on everyone’s hardware. Oftentimes, if a vendor-specific feature is enabled in one set of data, reviews will include a second set of data with the same feature disabled, in order to provide a more neutral comparison. Running vendor-specific flags can sometimes harm the ability of the test to speak to a wider audience.

Intel Proposes an Alternate Approach

Up until now, we’ve talked strictly about whether a test is real-world in light of whether the results generalize to other applications. There is, however, another way to frame the topic. Intel surveyed users to see which applications they actually used, then presented us with that data. It looks like this:

[Image: Intel slide on surveyed application usage]

The implication here is that by testing the most common applications installed on people’s hardware, we can capture a better, more representative use-case. This feels intuitively true — but the reality is more complicated.

Just because an application is frequently used doesn’t make it an objectively good benchmark. Some applications are not particularly demanding. There are absolutely scenarios in which measuring Chrome performance matters, like the low-end notebook space, and good reviews of those products already include such tests. In the high-end enthusiast context, however, Chrome is unlikely to be a taxing application. Are there test scenarios that can make it taxing? Yes. But those scenarios don’t reflect the way the application is most commonly used.

The real-world experience of using Chrome on a Ryzen 7 3800X is identical to using it on a Core i9-9900K. Even if this were not the case, Google makes it difficult to keep a previous version of Chrome available for continued A/B testing. Many people run extensions and adblockers, which have their own impact on performance. Does that mean reviewers shouldn’t test Chrome? Of course it doesn’t. That’s why many laptop reviews absolutely do test Chrome, particularly in the context of browser-based battery life, where Chrome, Firefox, and Edge are known to produce different results. Fit the benchmark to the situation.

There was a time when I spent much more time testing many of the applications on this list than we do now. When I began my career, most benchmark suites focused on office applications and basic 2D graphics tests. I remember when swapping out someone’s GPU could meaningfully improve 2D picture quality and Windows’ UI responsiveness, even without upgrading their monitor. When I wrote for Ars Technica, I wrote comparisons of CPU usage during HD content decoding, because at the time, there were meaningful differences to be found. If you think back to when Atom netbooks debuted, many reviews focused on issues like UI responsiveness with an Nvidia Ion GPU solution and compared it with Intel’s integrated graphics. Why? Because Ion made a noticeable difference to overall UI performance. Reviewers don’t ignore these issues. Publications tend to return to them when meaningful differentiation exists.

I do not pick review benchmarks solely because the application is popular, though popularity may figure into the final decision. The goal, in a general review, is to pick tests that will generalize well to other applications. The fact that a person has Steam or Battle.net installed tells me nothing. Is that person playing Overwatch or WoW Classic? Are they playing Minecraft or No Man’s Sky? Do they choose MMORPGs or FPS-type games, or are they just stalled out in Goat Simulator 2017? Are they actually playing any games at all? I can’t know without more data.

The applications on this list that show meaningful performance differences in common tasks are typically tested already. Publications like Puget Systems regularly publish performance comparisons in the Adobe suite. In some cases, the reason applications aren’t tested more often is that there have been longstanding concerns about the reliability and accuracy of the benchmark suite that most commonly includes them.

I’m always interested in better methods of measuring PC performance. Intel absolutely has a part to play in that process — the company has been helpful on many occasions when it comes to finding ways to highlight new features or troubleshoot issues. But the only way to find meaningful differences in hardware is to find meaningful differences in tests. Again, generally speaking, you’ll see reviewers check laptops for gaps in battery life and power consumption as well as performance. In GPUs, we look for differences in frame time and framerate. Because none of us can run every workload, we look for applications with generalizable results. At ET, I run multiple rendering applications specifically to ensure we aren’t favoring any single vendor or solution. That’s why I test Cinebench, Blender, Maxwell Render, and Corona Render. When it comes to media encoding, Handbrake is virtually everyone’s go-to solution — but we check in both H.264 and H.265 to ensure we capture multiple test scenarios. When tests prove to be inaccurate or insufficient to capture the data I need, I use different tests.
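For the media-encoding case, the comparison described above boils down to timing the same clip through both encoders. A minimal sketch, assuming HandBrakeCLI is installed and on the PATH and that input.mkv is a local test clip (the quality setting is arbitrary, and a real run would repeat each encode several times):

```python
import subprocess
import time

# Minimal sketch: time the same clip through H.264 and H.265 encodes.
# Assumes HandBrakeCLI is on the PATH and input.mkv exists in the working directory.
ENCODERS = {"H.264": "x264", "H.265": "x265"}

for label, encoder in ENCODERS.items():
    start = time.perf_counter()
    subprocess.run(
        ["HandBrakeCLI", "-i", "input.mkv", "-o", f"out_{encoder}.mp4",
         "-e", encoder, "-q", "22"],
        check=True,
        capture_output=True,  # keep HandBrake's progress output off the console
    )
    elapsed = time.perf_counter() - start
    print(f"{label}: {elapsed:.1f} seconds")
```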

The False Dichotomy

The much-argued difference between “synthetic” and “real-world” benchmarks is a poor framing of the issue. What matters, in the end, is whether the benchmark data presented by the reviewer collectively offers an accurate view of expected device performance. As Rob Williams details at Techgage, Intel has been only too happy to use Maxon’s Cinebench as a benchmark at times when its own CPU cores were dominating performance. In a recent post on Medium, Intel’s Ryan Shrout wrote:

Today at IFA we held an event for attending members of the media and analyst community on a topic that’s very near and dear to our heart — Real World Performance. We’ve been holding these events for a few months now beginning at Computex and then at E3, and we’ve learned a lot along the way. The process has reinforced our opinion on synthetic benchmarks: they provide value if you want a quick and narrow perspective on performance. We still use them internally and know many of you do as well, but the reality is they are increasingly inaccurate in assessing real-world performance for the user, regardless of the product segment in question.

Sounds damning. He follows it up with this slide:

[Image: Intel OEM optimization slide]

To demonstrate the supposed inferiority of synthetic tests, Intel shows 14 separate results, 10 of which are drawn from 3DMark and PCMark. Both of these apps are generally considered to be synthetic applications. When the company presents data on its own performance versus ARM, it pulls the same trick again:

[Image: Intel versus ARM comparison slide]

Why is Intel referring back to synthetic applications in the same blog post in which it specifically calls them out as a poor choice compared with supposedly superior “real-world” tests? Maybe it’s because Intel makes its benchmark choices just like we reviewers do — with an eye towards results that are representative and reproducible, using affordable tests, with good feature sets that don’t crash or fail for unknown reasons after install. Maybe Intel also has trouble keeping up with the sheer flood of software released on an ongoing basis and picks tests to represent its products that it can depend on. Maybe it wants to continue to develop its own synthetic benchmarks like WebXPRT without throwing that entire effort under a bus, even though it’s simultaneously trying to imply that the benchmarks AMD has relied on are inaccurate.

And maybe it’s because the entire synthetic-versus-real-world framing is bad to start with.


AMD Sales Are Booming, but High-End Ryzen 3000 CPUs Still in Short Supply



After the Ryzen 3000 family debuted on 7nm, German retailer Mindfactory.de released data from its own CPU sales showing that demand for the smaller CPU manufacturer’s products had skyrocketed. That demand continued straight through August, but product shortages may be hampering overall sales.

Once again, Ingebor on Reddit has shared data on CPU sales, CPU revenue share, and average selling prices. The results are another major win for AMD, though overall shipments declined this month compared with July.

[Chart: Mindfactory CPU unit sales]

While the absolute number of CPUs fell, AMD held virtually the same market share. Sales of second-generation products continue to be strong, even with third-gen Ryzen in-market. On the AMD side, shipments of the Ryzen 9 3900X fell, as did sales of the Ryzen 7 3700X and 3800X. The Ryzen 5 3600 substantially expanded its overall market share. Intel’s mix was virtually unchanged in terms of which CPU SKUs sold best.

[Chart: Mindfactory CPU revenue share]

Now we look at the market in terms of revenue. Intel’s share is higher here, thanks to higher selling prices. The Ryzen 9 3900X made a significantly smaller revenue contribution in August, as did the Ryzen 7 3700X. Sometimes the revenue graphs show us a different side of performance compared with sales charts, but this month the two graphs generally line up as expected.

One place where the Ryzen 5 3600’s share gains definitely hit AMD is in its average selling price. In July, AMD’s ASP was €238.89. In August, it slipped to €216.04, a decline of roughly 9.6 percent. Intel’s ASP actually improved slightly, from €296.87 to €308.36, a gain of roughly 4 percent. This could be read as suggesting that a few buyers saw what AMD had to offer and opted to buy a high-end Core CPU instead. And on Reddit, Ingebor notes that low availability on the Ryzen 9 3900X definitely hit AMD’s revenue share, writing:

Except for the 3900X, all Matisse CPUs where available for most of the time and sold pretty well (not so much the 3800X, which dropped in price sharply towards the end of the month). These shortages can be seen in the revenue drop and a lower average sales price compared to last month.

For most of the month, the 3900X was unavailable with a date of availability constantly pushed out by mindfactory. Seems like the amount of CPUs they got do not suffice to satisfy their backlog of orders. The next date is the 6th of September. Hopefully the next month will finally see some decent availability. Also it remains to be seen when the 3950X will start to sell and whether it will be in better supply.

Ingebor also noted that there’s been no hint of official Intel price cuts, despite rumors that the company might respond to 7nm Ryzen CPUs by enacting them.
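Circling back to the ASP figures, the month-over-month math works out as in this quick sketch (the euro values are the ones quoted above):

```python
# Month-over-month ASP changes in EUR, from the Mindfactory figures quoted above.
amd_prev, amd_now = 238.89, 216.04
intel_prev, intel_now = 296.87, 308.36

def pct_change(old, new):
    return (new - old) / old * 100

print(f"AMD ASP:   {pct_change(amd_prev, amd_now):+.1f}%")      # about -9.6%
print(f"Intel ASP: {pct_change(intel_prev, intel_now):+.1f}%")  # about +3.9%
```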

The Limits of Retail Analysis

It’s incredibly useful that Mindfactory releases this information, but keep in mind that it represents sales at one company, in one country. We don’t doubt that AMD is seeing sales growth across its 7nm product lines, but the retail channel is a subset of the desktop market, and the desktop market is dwarfed by the laptop market.

[Chart: Statista PC market share by device type]

Data from Statista makes the point. Even if we ignore tablets, only about 36.7 percent of the computing market is desktops. Trying to estimate the size of the PC retail channel is difficult; figures I’ve seen in the past suggest it’s 10-20 percent of the space. If true, that would suggest Mindfactory, Newegg, Amazon, and similar companies collectively account for 3.6 to 7.3 percent of the overall PC market. AMD and Intel split this space, with the size of the split depending on the relative competitive standing of each company, hardware availability in the local market, and any country-specific preferences for one vendor versus the other.
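These are back-of-the-envelope numbers, but the estimate works out as sketched below (the 10 to 20 percent retail-channel figure is the rough assumption described above):

```python
# Back-of-the-envelope estimate of retail desktop sales as a share of the PC market.
desktop_share = 0.367          # desktops as a share of the PC market (Statista figure above)
retail_channel = (0.10, 0.20)  # assumed retail share of desktop sales

low, high = (desktop_share * r * 100 for r in retail_channel)
print(f"Retail desktop channel: roughly {low:.1f} to {high:.1f} percent of the PC market")
# roughly 3.7 to 7.3 percent
```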

This is why you’ll see websites write stories about how AMD is dominating sales at a specific retailer, followed by stories that show a relatively small gain in total market share. It’s not that either story is necessarily wrong; they capture different markets.

Overall, AMD is in a strong competitive position at the moment. Just keep in mind that data sets like this, while valuable and interesting, only capture a small section of the overall space.


AMD’s Ryzen 3000 Family is Dominating Sales at European Retailer



Mindfactory, a major German computer hardware retailer, has published new sales data for the month of July. AMD has had an extremely good month, even by the standards of previous Ryzen launches.

Before we dive into the numbers, the usual caveats: These figures reflect data from a single German company, not the entire retail channel. Most companies don’t publish data like this. Data from Amazon and Newegg shows somewhat different splits among the best-selling CPUs. Amazon has AMD occupying 12 of the Top 20 best-selling chips, but only three of the parts are based on Matisse, none higher than 6th place. Newegg has AMD holding 11 of the Top 20 spots, but the first Matisse CPU, the Ryzen 5 3600, sits in 12th place.

This is not to imply that the Mindfactory data is wrong, but it should not be read as speaking to the entire retail market.

Reddit user Ingebor has published Mindfactory sales data for the month of July. First up, unit sales:

That’s a very strong launch month for AMD, considering that the chips didn’t even go on sale until July 7. While AMD’s market share grew 11 percentage points, it’s the increase in total processor shipments that reflects strong demand for the new parts. In June, Mindfactory sold roughly 9,000-9,500 AMD CPUs and 4,000-4,500 Intel chips. In July, it sold roughly 18,500 AMD CPUs and just shy of 5,000 Intel CPUs. Intel demand appears to have been driven by the 9900K, 9700K, and 9600K, implying that at least some Intel fans delayed purchases to see if AMD would bring something to the table that they wanted, then pulled the trigger on upgrades of their own. A great many shoppers, however, were clearly looking for something from Team Red. It’s good to see the 3900X on this list; the chip may be difficult to find right now, but this is evidence that parts are making it to market.
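As a rough sketch of the share math, using mid-points of the approximate unit counts above:

```python
# Approximate Mindfactory unit sales; June values are mid-points of the ranges above.
june = {"AMD": 9250, "Intel": 4250}
july = {"AMD": 18500, "Intel": 5000}

def amd_share(month):
    return month["AMD"] / (month["AMD"] + month["Intel"]) * 100

print(f"June AMD unit share: {amd_share(june):.0f}%")  # roughly 69%
print(f"July AMD unit share: {amd_share(july):.0f}%")  # roughly 79%
```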

The previous slide focused on unit shipments; this one captures revenue. The graph is remarkable for how small the gap is between Intel’s unit share (21 percent) and its revenue share (25 percent). Typically, Intel’s revenue share is much larger: compare the previous month, when Intel accounted for 32 percent of unit shipments but 48 percent of revenue, for an example of how this trend usually moves. The only way AMD could be doing this much better in overall revenue share is if its ASPs have increased dramatically. Looking to the next chart, we see…

Exactly that. The last time we discussed Mindfactory data, the company was reporting an average selling price (ASP) for AMD hardware of €178. Today, AMD’s ASP stands at €238.89, an increase of 1.34x since April. Mindfactory itself reports a 1.5x increase over June. This kind of improvement is exactly what AMD was aiming for with 7nm: raising its ASPs while cutting costs, so it can compete more effectively with Intel.
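To tie the last two charts together, here’s the arithmetic behind the ASP jump and the relative ASPs implied by the unit and revenue shares quoted above, as a quick sketch:

```python
# AMD ASP growth and the implied Intel-vs-AMD ASP gap, from the figures quoted above.
asp_april, asp_july = 178.0, 238.89  # AMD ASPs in EUR
print(f"AMD ASP growth: {asp_july / asp_april:.2f}x")  # ~1.34x

def implied_asp_ratio(intel_unit_share, intel_rev_share):
    """Intel ASP divided by AMD ASP, derived from unit and revenue shares."""
    amd_unit_share = 1 - intel_unit_share
    amd_rev_share = 1 - intel_rev_share
    return (intel_rev_share / intel_unit_share) / (amd_rev_share / amd_unit_share)

print(f"June: Intel ASP roughly {implied_asp_ratio(0.32, 0.48):.1f}x AMD's")  # ~2.0x
print(f"July: Intel ASP roughly {implied_asp_ratio(0.21, 0.25):.1f}x AMD's")  # ~1.3x
```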

AMD’s most recent quarterly forecast doesn’t predict very strong revenue growth for the rest of the year, but it blames that weakness on a weaker-than-expected console cycle. AMD has stated that its gross margin on all 7nm products is over 50 percent. Excluding the impact of lower semicustom sales, AMD expects full-year 2019 revenue to be up 20 percent. Factor in semicustom, and total revenue growth is expected to be in the single digits.

Overall, the data suggests Ryzen is selling very well. Intel continues to have a bulwark with gamers who want top-end single-threaded gaming performance above all else, but third-generation Ryzen has closed the gap between the two companies in that area as well.
