AMD Overtakes Nvidia in Graphics Shipments for First Time in 5 Years


This site may earn affiliate commissions from the links on this page. Terms of use.

AMD saw its share of the graphics market surge in Q2 2019, with total shipments exceeding Nvidia's for the first time in five years. At the same time, Nvidia retains a hard lock on the add-in board market for desktops, with approximately two-thirds of total market share. And while these gains are significant, it's also worth considering why they didn't drive any particular "pop" in AMD's overall financial figures for Q2.

First, let's talk about the total graphics market. There are three players here: Intel, AMD, and Nvidia. Because this report considers the totality of the graphics space, and two-thirds of systems ship without a separate GPU, both AMD and Nvidia are minority players in this market. AMD, however, has an advantage: like Intel, it builds CPUs with an onboard graphics solution. Nvidia does not. Thus, we have to acknowledge that the total market space includes companies with very different suites of products:

Intel: Integrated-only (until next year), no discrete GPUs, but accounts for a majority of total shipments.
AMD: Integrated GPUs and discrete cards, but with very little presence in upper-end mobile gaming.
Nvidia: No integrated solutions. Discrete GPUs only.

[Image: Graphics market share by vendor, via Jon Peddie Research]

According to JPR, AMD's shipments increased by 9.8 percent, Intel shipments fell by 1.4 percent, and Nvidia shipments were essentially flat, up just 0.04 percent. This jibes with reports from early in the year, which suggested that AMD would take market share from Intel due to CPU shortages. Separately from its global report, JPR also publishes a document on the desktop add-in board (AIB) market. This report only considers the discrete GPU space, split between Nvidia and AMD (Intel will compete in this space when it launches Xe next year). Here again, AMD showed significant growth, with a ten percent improvement in market share.

Image by Jon Peddie Research

If you pay attention to financial reports, however, you may recall that AMD’s Q2 2019 sales results were reasonable, but not spectacular. Both companies reported year-on-year sales declines. Nvidia’s fiscal year Q2 2020 results, which the company reported a few weeks back, showed gaming revenue falling 27 percent year-on-year. AMD doesn’t break out GPU and CPU sales — it combines them both into a single category — but its combined Compute and Graphics revenue reports were lower on a yearly basis as well:

[Image: AMD Q2 2019 financial results]

During the first half of the year, AMD was thought to be gaining market share at Intel's expense, but these gains were largely thought to be at the low end of the market. AMD launched its first Chromebooks with old Carrizo APUs, for example. This explains the growth in unit shipments in the total GPU space, as well as why the company didn't show a tremendous profit from its gains. Growth in the AIB market may be explained by the sale of GPUs like the RX 570. This card has consistently been an incredibly good value; Nvidia didn't bother distributing review GPUs for the GTX 1650 because the RX 570 is decisively faster, according to multiple reviews. But GPU sales have been down overall. According to JPR, AIB sales fell 16.6 percent quarter-to-quarter, and 39.7 percent year-on-year.

This explains why AMD’s strong market share gains didn’t translate to improved C&G sales revenue. The company earns less revenue on low-end sales compared with high-end cards. And its market share improvements have been overshadowed by a huge decline in AIB sales year-on-year, likely due to the combination of lingering crypto hangover and a weak overall enthusiast market in Q2.

Q3 will be a much more significant quarter for both companies. Not only does Q3 typically improve on seasonality alone, but both Nvidia and AMD have introduced price cuts and new products. AMD's Navi powers the excellent 5700 and 5700 XT, which are both faster than Nvidia's refreshes of the RTX 2060 and RTX 2070 (now dubbed the RTX 2060 Super and RTX 2070 Super, respectively). Nvidia, in turn, offers ray tracing and variable rate shading, two features that are used in very few games today but may become more popular in the future. AMD lacks these features.

The two companies have staked out opposing strategies for boosting their respective market share. It’ll be interesting to see how consumers do or don’t respond to their separate value propositions.


RTX 2080 vs. Radeon VII vs. 5700 XT: Rendering and Compute Performance


Most of our GPU coverage focuses on the consumer side of the business and on game benchmarking, but I promised to examine the compute side of performance back when the Radeon VII launched. With the 5700 XT having debuted recently, we had an opportunity to return to this question with a new GPU architecture from AMD and compare RDNA against GCN.

In fact, the overall compute situation is at an interesting crossroads. AMD has declared that it wishes to be a more serious player in enterprise compute environments but has also said that GCN will continue to exist alongside RDNA in this space. The Radeon VII is a consumer variant of AMD’s MI50 accelerator, with half-speed FP64 support. If you know you need double-precision FP64 compute, for example, the Radeon VII fills that niche in a way that no other GPU in this comparison does.
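To make the FP64 distinction concrete, peak throughput can be estimated from shader count, clock speed, and the precision ratio. The figures below are illustrative assumptions for a Radeon VII-like configuration, not measured values:

```python
# Sketch: estimating peak GPU throughput from shader count, clock, and
# the FP64:FP32 rate. The specs used here are assumptions for illustration.

def peak_tflops(shaders, clock_ghz, flops_per_clock=2, ratio=1.0):
    """Peak TFLOPS: shaders * clock * 2 FLOPs per clock (FMA) * precision ratio."""
    return shaders * clock_ghz * flops_per_clock * ratio / 1000.0

fp32 = peak_tflops(3840, 1.75)                       # assumed FP32 config
fp64_full = peak_tflops(3840, 1.75, ratio=0.5)       # 1:2 rate (MI50-style)
fp64_quarter = peak_tflops(3840, 1.75, ratio=0.25)   # 1:4 rate (typical consumer)
print(round(fp32, 2), round(fp64_full, 2), round(fp64_quarter, 2))
```

The point of the sketch is the ratio: a card with a 1:2 or even 1:4 FP64 rate delivers multiple double-precision TFLOPS, while most consumer GPUs ship with rates of 1:16 or 1:32 and land at a small fraction of that.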

[Image: AMD versus Nvidia specification chart]

The Radeon VII has the highest RAM bandwidth and it’s the only GPU in this comparison to offer much in the way of double-precision performance. But while these GPUs have relatively similar on-paper specs, there’s significant variance between them in terms of performance — and the numbers don’t always break the way you think they would.

One of AMD's major talking points with the 5700 XT is how Navi represents a fundamentally new GPU architecture. The 5700 XT proved itself moderately faster than the Vega 64 in our testing on the consumer side of the equation, but we wanted to check the situation in compute as well. Keep in mind, however, that the 5700 XT's newness also works against us a bit here. Some applications may need to be updated to take full advantage of its capabilities.

Regarding Blender 2.80

Our test results contain data from both Blender 2.80 and the standalone Blender benchmark, 1.0beta2 (released August 2018). Blender 2.80 is a major release for the application, and it contains a number of significant changes. The standalone benchmark is not compatible with Nvidia's RTX family, which necessitated testing with the latest version of the software. Initially, we tested the Blender 2.80 beta, but then the final version shipped, so we dumped the beta results and retested.

Image by Blender

There are significant performance differences between the Blender 1.0beta2 benchmark and 2.80, and one scene, Classroom, does not render properly in the new version. This scene has been dropped from our 2.80 comparisons. Blender allows the user to specify a tile size in pixels to control how much of the scene is worked on at once. Code in the Blender 1.0beta2 benchmark's Python files indicates that the test uses a tile size of 512×512 (X/Y coordinates) for GPUs and 16×16 for CPUs. Most of the scene files contained within the benchmark, however, actually use a tile size of 32×32 by default if loaded within Blender 2.80.

We tested Blender 2.80 in two different modes. First, we tested all compatible scenes using the default tile size those scenes loaded with. This was 16×16 for Barbershop_Interior, and 32×32 for all other scenes. Next, we tested the same renders with a default tile size of 512×512. Up until now, the rule with tile sizes has been that larger sizes were good for GPUs, while smaller sizes were good for CPUs. This appears to have changed somewhat with Blender 2.80. AMD and Nvidia GPUs show very different responses to larger tile sizes, with AMD GPUs accelerating with higher tile sizes and Nvidia GPUs losing performance.
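The practical effect of the tile setting is how finely the frame is chopped up for the renderer. A quick sketch of the arithmetic (frame resolution and tile sizes taken from the test configurations above):

```python
import math

def tile_grid(width, height, tile_x, tile_y):
    """Number of tiles a frame is split into at a given tile size."""
    return math.ceil(width / tile_x) * math.ceil(height / tile_y)

# A 1080p frame at the two tile sizes we tested:
small = tile_grid(1920, 1080, 32, 32)    # thousands of small tiles
large = tile_grid(1920, 1080, 512, 512)  # a handful of large tiles
print(small, large)
```

A 1920×1080 frame becomes 2,040 tiles at 32×32 but only 12 at 512×512, which is why the setting matters: fewer, larger tiles mean fewer dispatches but more work per dispatch, and different GPU architectures evidently respond to that trade-off differently.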

Because the scene files we are testing were created in an older version of Blender, it’s possible that this might be impacting our overall results. We have worked extensively with AMD for several weeks to explore aspects of Blender performance on GCN GPUs. GCN, Pascal, Turing, and RDNA all show a different pattern of results when moving from 32×32 to 512×512, with Turing losing less performance than Pascal and RDNA gaining more performance in most circumstances than GCN.

All of our GPUs benefited substantially from not using a 16×16 tile size for Barbershop_Interior. While this test defaults to 16×16, it does not render well at that tile size on any GPU.

Troubleshooting the different results we saw in the Blender 1.0beta2 benchmark versus the Blender 2.80 beta and, finally, Blender 2.80 final held up this review for several weeks, and we swapped through several AMD drivers while working on it. All of our Blender 2.80 results were, therefore, run using Adrenalin 2019 Edition 19.8.1.

Test Setup and Notes

All GPUs were tested on an Intel Core i7-8086K system using an Asus Prime Z370-A motherboard. The Vega 64, Radeon RX 5700 XT, and Radeon VII were all tested using Adrenalin 2019 Edition 19.7.2 (7/16/2019) for everything but Blender 2.80. All Blender 2.80 tests were run using 19.8.1, not 19.7.2. The Nvidia GeForce GTX 1080 and Gigabyte Aorus RTX 2080 were both tested using Nvidia’s 431.60 Game Ready Driver (7/23/2019).

CompuBench 2.0 runs GPUs through a series of tests intended to measure various aspects of their compute performance. Kishonti, developers of CompuBench, don’t appear to offer any significant breakdown on how they’ve designed their tests, however. Level set simulation may refer to using level sets for the analysis of surfaces and shapes. Catmull-Clark Subdivision is a technique used to create smooth surfaces. N-body simulations are simulations of dynamic particle systems under the influence of forces like gravity. TV-L1 optical flow is an implementation of an optical flow estimation method, used in computer vision.

SPEC Workstation 3.1 contains many of the same workloads as SPECViewPerf, but also has additional GPU compute workloads, which we’ll break out separately. A complete breakdown of the workstation test and its application suite can be found here. SPEC Workstation 3.1 was run in its 4K native test mode. While this test run was not submitted to SPEC for formal publication, our testing of SPEC Workstation 3.1 obeyed the organization’s stated rules for testing, which can be found here.

Nvidia GPUs were always tested with CUDA when CUDA was available.

We've cooked up two sets of results for you: a synthetic series of benchmarks created with SiSoft Sandra, investigating various aspects of how these chips compare (processing power, memory latency, and internal characteristics), and a wider suite of tests that touch on compute and rendering performance in various applications. Since the SiSoft Sandra 2020 tests are all unique to that application, we've opted to break them out into their own slideshow.

The Gigabyte Aorus RTX 2080 results should be read as approximately equivalent to an RTX 2070 Super. The two GPUs perform nearly identically in consumer workloads and should match each other in workstation workloads as well.

SiSoft Sandra 2020

SiSoft Sandra is a general-purpose system information utility and full-featured performance evaluation suite. While it’s a synthetic test, it’s probably the most full-featured synthetic evaluation utility available, and Adrian Silasi, its developer, has spent decades refining and improving it, adding new features and tests as CPUs and GPUs evolve.

Our SiSoft Sandra-specific results are below. Some of our OpenCL results are a little odd where the 5700 XT is concerned, but according to Silasi, he has not yet had the chance to optimize code for execution on the 5700 XT. Consider these results to be preliminary — interesting, but perhaps not yet indicative — as far as that GPU is concerned.

Our SiSoft Sandra 2020 benchmarks point largely in the same direction. If you need double-precision floating-point, the Radeon VII is a compute monster. While it’s not clear how many buyers fall into that category, there are certain places, like image processing and high-precision workloads, where the Radeon VII shines.

The RDNA-based Radeon 5700 XT does less to distinguish itself in these tests, but we’re also in contact with Silasi concerning the issues we ran into during testing. Improved support may change some of these results in months ahead.

Test Results

Now that we’ve addressed Sandra performance, let’s turn to the rest of our benchmark suite. Our other results are included in the slideshow below:

Conclusions

What do these results tell us? A lot of rather interesting things. First of all, RDNA is downright impressive. Keep in mind that we've tested this GPU in professional and compute-oriented applications, none of which have been updated or patched to run on it. There are clear signs that this has impacted our benchmark results, including some tests that either wouldn't run or ran slowly. Even so, the 5700 XT impresses.

Radeon VII impresses too, but in different ways than the 5700 XT. SiSoft Sandra 2020 shows the advantage this card can bring to double-precision workloads, where it offers far more performance than anything else on the market. AI and machine learning have become much more important of late, but if you’re working in an area where GPU double-precision is key, Radeon VII packs an awful lot of firepower. SiSoft Sandra does include tests that rely on D3D11 rather than OpenCL. But given that OpenCL is the chief competitor to CUDA, I opted to stick with it in all cases save for the memory latency tests, which globally showed lower latencies for all GPUs when D3D was used compared with OpenCL.

AMD has previously said that it intends to keep GCN in-market for compute, with Navi oriented towards the consumer market, but there’s no indication that the firm intends to continue evolving GCN on a separate trajectory from RDNA. The more likely meaning of this is that GCN won’t be replaced at the top of the compute market until Big Navi is ready at some point in 2020. Based on what we’ve seen, there’s a lot to be excited about on that front. There are already applications where RDNA is significantly faster than Radeon VII, despite the vast difference between the cards in terms of double-precision capability, RAM bandwidth, and memory capacity.

Blender 2.80 presents an interesting series of comparisons between RDNA, GCN, and CUDA. Using higher tile sizes has an enormous impact on GPU performance, but whether that difference is good or bad depends on which brand of GPU you use and which architectural family it belongs to. Pascal and Turing GPUs performed better with smaller tile sizes, while GCN GPUs performed better with larger ones. The 512×512 tile size was better in total for all GPUs, but only because it improved the total rendering time on Barbershop_Interior by more than it harmed the render time of every other scene for Turing and Pascal GPUs. The RTX 2080 was the fastest GPU in our Blender benchmarks, but the 5700 XT put up excellent performance results overall.

I do not want to make global pronouncements about Blender 2.80 settings; I am not a 3D rendering expert. These test results suggest that Blender performs better with larger tile settings on AMD GPUs but that smaller tile settings may produce better results for Nvidia GPUs. In the past, both AMD and Nvidia GPUs have benefited from larger tile sizes. This pattern could also be linked to the specific scenes in question, however. If you run Blender, I suggest experimenting with different scenes and tile sizes.

Ultimately, what these results suggest is that there's more variation in GPU performance in some of these professional markets than we might expect for gaming. There are specific tests where the 5700 XT is markedly faster than the RTX 2080 or Radeon VII and other tests where it falls sharply behind them. OpenCL driver immaturity may account for some of this, but we see flashes of brilliance in these performance figures. The Radeon VII's double-precision performance puts it in a class of its own in certain respects, but the Radeon RX 5700 XT is a far less expensive and quieter card. Depending on what your target application is, AMD's new $400 GPU might be the best choice on the market. In other scenarios, both the Radeon VII and the RTX 2080 make a particular claim to being the fastest card available.

Feature image is the final render of the Benchmark_Pavilion scene included in the Blender 1.0beta2 standalone benchmark.


Why 110-Degree Temps Are Normal for AMD’s Radeon 5700, 5700 XT



AMD has published a blog post discussing how temperatures and thermals are calculated on its Navi GPUs. There has been some concern in the enthusiast community about the temperatures posted by reference cards, given that these GPUs can report thermal junction temps of up to 110 degrees Celsius. This is substantially hotter than the old temperature of 95 C, which used to be treated as a thermal trip point.

Beginning with Radeon VII, AMD made significant changes to how it measures temperature across the GPU die. In the past, AMD writes, "the GPU core temperature was read by a single sensor that was placed in the vicinity of the legacy thermal diode." That single reading was used to make decisions governing the GPU's voltage and operating frequency. Radeon VII and now Navi do things differently. Instead of deploying a single sensor, they use a network of sensor data gathered from across the GPU. AMD has deployed the same AVFS (Adaptive Voltage and Frequency Scaling) strategy that it uses for Ryzen to maximize the performance of its GPUs.

AVFS deploys a network of on-die sensors across the entire chip rather than relying on a single point of measurement. Rather than calibrating voltages and frequencies at the factory and preprogramming a series of defined voltage and frequency steps that all CPUs must achieve, AVFS dynamically measures and delivers the voltage required for each individual CPU to hit its desired clock frequencies. This allows for finer-grained power management across the CPU, improving both performance and power efficiency across a range of targets.
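The difference between the old fixed-state approach and AVFS can be sketched in a few lines. The states and voltages below are invented, simplified numbers for illustration only; real DPM tables and AVFS behavior are far more complex:

```python
# Sketch (assumed, simplified numbers): discrete DPM states vs. an
# AVFS-style fine-grained voltage/frequency curve.

DPM_STATES = [(852, 0.80), (1138, 0.90), (1350, 1.00), (1536, 1.10)]  # (MHz, V)

def discrete_voltage(freq_mhz):
    """Old approach: use the lowest preprogrammed state that covers the target."""
    for state_freq, volts in DPM_STATES:
        if freq_mhz <= state_freq:
            return volts
    return DPM_STATES[-1][1]

def avfs_voltage(freq_mhz, f_min=852, v_min=0.80, f_max=1536, v_max=1.10):
    """Fine-grained approach: interpolate voltage continuously along the curve."""
    t = (freq_mhz - f_min) / (f_max - f_min)
    return v_min + t * (v_max - v_min)

# At 1200 MHz the discrete table must round up to the 1350 MHz state's voltage,
# while the continuous curve supplies only what that frequency actually needs.
print(discrete_voltage(1200), round(avfs_voltage(1200), 3))
```

In this toy model, the discrete table over-volts a 1200 MHz target (1.00 V vs. roughly 0.95 V on the continuous curve), which is the efficiency AVFS-style per-chip calibration is meant to claw back.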

The 110-degree junction temperature is not evidence of a problem or a sudden issue with AMD graphics cards. AMD now measures its GPU temperature in new locations and reports additional data points because it has adopted more sophisticated measuring methods. Arguing that the company should be penalized for reporting data more accurately is akin to arguing that manufacturers ought to hide data because they're afraid some customers won't understand it or put it in the proper context.

AMD provides a pair of graphs to illustrate the difference between its earlier measurement system, used on Vega 64, and how it calibrates voltage on the 5700 XT today. The old discrete-state method is shown below:

[Image: Vega 64 discrete DPM states]

Now, compare that against the frequency/voltage curve for the 5700 XT.

[Image: Fine-grained DPM frequency/voltage curve]

The 5700 XT is designed to continue boosting performance until it hits its thermal junction threshold. From the company’s blog post:

Paired with this array of sensors is the ability to identify the ‘hotspot’ across the GPU die. Instead of setting a conservative, ‘worst case’ throttling temperature for the entire die, the Radeon RX 5700 series GPUs will continue to opportunistically and aggressively ramp clocks until any one of the many available sensors hits the ‘hotspot’ or ‘Junction’ temperature of 110 degrees Celsius. Operating at up to 110C Junction Temperature during typical gaming usage is expected and within spec. This enables the Radeon RX 5700 series GPUs to offer much higher performance and clocks out of the box, while maintaining acoustic and reliability targets.

There’s a certain knee-jerk “I don’t want 110-degree anything in my case!” reaction from enthusiasts that’s both perfectly understandable and somewhat misguided. There’s an unconscious underlying assumption that 110 degrees Celsius represents a dangerous temperature (it doesn’t) or an extremely loud cooler. The 5700 XT and 5700 are much quieter than Vega 64, but if that’s still too loud, third-party cards are starting to hit the market. Companies like Asus were able to build coolers that handled the R9 290X beautifully, so the 5700 XT should be tamable as well.

Higher temperatures are partially an artifact of better measurement. They're also a reality of advanced silicon manufacturing nodes. Our ability to pack transistors closer together has outstripped our ability to reduce their power consumption by cutting operating voltages. As a result, increasing transistor density leads to more hot spot formation and higher peak temperatures. AVFS helps mitigate this tendency by ensuring that operating voltage is precisely mapped to frequency, but it can't fix the fact that AMD has packed more transistors into a smaller space, leading to higher thermal density.

Higher temperatures are not an intrinsic reason to be concerned about a product provided the manufacturer certifies that this is expected behavior. When I got into computing, a CPU temperature of 50 C (measured via in-socket thermistor) was considered extremely high. Today, Intel and AMD build silicon that can operate reliably at 95C or above for years at a time.


No, AMD Hasn’t Quit Making Reference 5700 and 5700 XT GPUs



There's an odd rumor going around that AMD has killed off its reference RX 5700 and RX 5700 XT GPU designs, or that it intends to do so once AIBs' custom cards are in-market. It started with French site Cowcotland, which ran the following headline:

[Image: Cowcotland headline]

The translation of that headline states that AMD's reference GPUs for the 5700 and 5700 XT have both reached EOL status only five weeks post-launch. It's not true. According to AMD, the goal here is not to compete with AIB partners. "We expect there will continue to be strong supply of Radeon RX 5700 series graphics cards in the market, with multiple designs starting to arrive from our AIB partners," AMD said. "As is standard practice, once the inventory of the AMD reference cards has been sold, AMD will continue to support new partner designs with Radeon RX 5700 series reference design kit."

AMD provides reference designs for AIBs that want to speed cards to market without designing their own coolers or graphics boards. Early boards are typically based on these reference products. The delay between reference card availability and custom AIB shipments can be relatively short or can stretch for some weeks. Some fans are unhappy that it's been five weeks at this point without AIB designs, though we've seen this happen with Nvidia launches in the past as well. AMD isn't killing off its reference cards, and they'll still be manufactured going forward.

The enthusiast community isn't particularly happy with the delay in custom cards, the fact that the reference cards are blowers, or the fact that the 5700 and 5700 XT remain noisier than equivalent Nvidia GPUs. The hope, therefore, is that dual- or triple-fan coolers will provide better acoustics than AMD's default reference designs. This is, generally speaking, a pretty good bet.

Having tested the 5700, 5700 XT, Vega 64, Radeon VII, and an associated mixture of 2060, 2070, 2080, and 2080 Ti parts (both made by Nvidia and not), I'd say the battle over blower versus open-air coolers can be a little overblown. Thermally, there's an obvious difference between the two solutions (blowers exhaust hot air out of the case, while open-air coolers just move it around inside the chassis). What that difference means for your system depends a lot on your system's preconditions. Open-air coolers can offer higher performance in roomy cases with good airflow, while blowers provide more consistent results. The relative volume of the two solutions depends on their cooler design. A blower can be louder than an open-air cooler or vice versa. The 5700 XT (a blower) is far quieter than Vega 64 (another blower). Vega 64 and the Radeon VII (an open-air design) have very similar noise profiles.

One interesting thing about reviews of Navi, however, is the degree to which the noise measurements from different review sites diverge. Anandtech, for example, reports that the 5700 XT is a 54dB(A) solution compared with 61dB(A) for the Radeon Vega 64.

Image by Anandtech

This 54/61dB(A) split seems to conform more closely to my own subjective experience of using the Radeon Vega 64, Radeon VII, 5700 XT, and associated Nvidia GPUs. To my ear, the 5700 XT is vastly better than either the Vega 64 or Radeon VII, both of which recall the Bad Old Days of loud GPUs like the R9 290X.

Other reviews, however, make very different claims:

Image by Guru3D

Guru3D claims that the Vega 64 and Radeon 5700 XT are identical in terms of dB(A) and that the Radeon VII is significantly louder. Since distance from the target obviously impacts noise measurements, I'm not concerned that Anandtech and Guru3D measure different absolute levels of sound. What's far more interesting is that one article shows Vega 64 and 5700 XT as comparable, while the other very much does not.

Image by TechPowerUp

TechPowerUp has a third distribution, with the 5700 XT and 5700 scoring identically and the Radeon VII below the Vega 64. Three well-regarded websites for tech reviews, three distinct results. Based on my own subjective experience, the one that “looks” the most correct is Anandtech’s — but noise measurements are going to be impacted by a number of factors, including relative levels of background noise, case-open testing versus case-closed, the distance from the target, and the equipment used to perform the test. It’s also possible that individual GPU variation is at work here as well.
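Distance alone accounts for large absolute differences between labs. For an idealized point source in a free field, sound pressure level falls by 20·log10 of the distance ratio, a simplification that ignores room reflections and background noise but shows the scale of the effect:

```python
import math

def spl_at_distance(spl_ref, d_ref, d_new):
    """Idealized free-field falloff: SPL drops 20*log10(d_new/d_ref) dB
    when the meter moves from d_ref to d_new (point source, no reflections)."""
    return spl_ref - 20 * math.log10(d_new / d_ref)

# A card measuring 54 dB(A) at 1 m reads roughly 6 dB lower at 2 m.
print(round(spl_at_distance(54, 1.0, 2.0), 1))
```

Doubling the measurement distance shaves about 6 dB off the reading, so two sites with meters at different distances can report very different absolute numbers for the same card, which is why the relative ordering within each site's chart matters more than the raw figures.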

In my own opinion, the 5700 and 5700 XT are firmly on the "quiet enough" side of the "Is this GPU quiet enough to use?" divide. Neither is as quiet as the RTX 2060 or 2070 we tested for the same review, but both are considerably quieter than the Radeon VII or Vega 64. I have been known to wear earplugs when testing both of those cards in a case-open configuration to avoid hearing damage, though the fact that I already have fan-related hearing damage in my left ear has also made me paranoid about harming it further. I've used a Vega 64 in my own system and disliked how noisy it was for gaming without headphones. The Radeon 5700 XT doesn't cause the same issue.

Radeon AIB cards have often been quieter than the reference designs, so it's likely this will continue to be the case. Whether these cards will offer reasonable value for the money is something we'll check when they hit the market in larger quantities. Reference card designs will continue to exist alongside these newer cards as well.
