AMD Overtakes Nvidia in Graphics Shipments for First Time in 5 Years



AMD saw its share of the graphics market surge in Q2 2019, with total shipments exceeding Nvidia’s for the first time in five years. At the same time, Nvidia retains a hard lock on the desktop add-in board market, with approximately two-thirds of total market share. And while these gains are significant, it’s also worth considering why they didn’t drive any particular “pop” in AMD’s overall financial figures for Q2.

First, let’s talk about the total graphics market. There are three players here: Intel, AMD, and Nvidia. Because this report considers the totality of the graphics space, and roughly two-thirds of systems ship without a separate GPU, both AMD and Nvidia are minority players in this market. AMD, however, has an advantage — like Intel, it builds CPUs with an onboard graphics solution. Nvidia does not. Thus, we have to acknowledge that the total market space includes companies with very different product suites:

Intel: Integrated-only (until next year), no discrete GPUs, but accounts for a majority of total shipments.
AMD: Integrated GPUs and discrete cards, but with very little presence in upper-end mobile gaming.
Nvidia: No integrated solutions. Discrete GPUs only.

[Chart: Total graphics market share, via Jon Peddie Research]

According to JPR, AMD’s shipments increased by 9.8 percent, Intel’s shipments fell by 1.4 percent, and Nvidia’s shipments were essentially flat, up just 0.04 percent. This jibes with reports from early in the year, which suggested that AMD would take market share from Intel due to CPU shortages. JPR also publishes a separate report on the desktop add-in board (AIB) market. That report only considers the discrete GPU space that AMD and Nvidia split between them (Intel will compete here when it launches Xe next year) — and again, AMD showed significant growth, with a ten percent improvement in market share.

Image by Jon Peddie Research

If you pay attention to financial reports, however, you may recall that AMD’s Q2 2019 sales results were reasonable, but not spectacular. Both companies reported year-on-year sales declines. Nvidia’s fiscal Q2 2020 results, which the company reported a few weeks back, showed gaming revenue falling 27 percent year-on-year. AMD doesn’t break out GPU and CPU sales — it combines both in its Computing and Graphics segment — but that segment’s revenue was lower on a yearly basis as well:

[Chart: AMD Q2 2019 financial results]

During the first half of the year, AMD was thought to be gaining market share at Intel’s expense, but those gains were largely believed to be at the low end of the market. AMD-powered Chromebooks launched with older Carrizo-derived APUs, for example. This explains the growth in unit shipments in the total GPU space, as well as why the company didn’t show a tremendous profit from its gains. Growth in the AIB market may be explained by the sale of GPUs like the RX 570. This card has consistently been an incredibly good value — Nvidia didn’t bother distributing review GPUs for the GTX 1650 because the RX 570 is decisively faster, according to multiple reviews. But GPU sales have been down overall. According to JPR, AIB sales fell 16.6 percent quarter-over-quarter and 39.7 percent year-on-year.

This explains why AMD’s strong market share gains didn’t translate to improved C&G sales revenue. The company earns less revenue per card at the low end than it does on high-end parts. And its market share improvements have been overshadowed by a huge year-on-year decline in AIB sales, likely due to the combination of a lingering crypto hangover and a weak overall enthusiast market in Q2.

Q3 will be a much more significant quarter for both companies. Not only does it typically improve on seasonality alone, but both Nvidia and AMD have introduced price cuts and new products. AMD’s Navi architecture powers the excellent 5700 and 5700 XT, which compete head-to-head with Nvidia’s refreshed RTX 2060 and RTX 2070 (now dubbed the RTX 2060 Super and RTX 2070 Super, respectively). Nvidia, in turn, offers ray tracing and variable-rate shading — two features that are used in very few games today but may become more popular in the future. AMD lacks these features.

The two companies have staked out opposing strategies for boosting their respective market share. It’ll be interesting to see how consumers do or don’t respond to their separate value propositions.


New 3DMark Benchmark Shows the Performance Impact of Variable Rate Shading



One of the new features baked into DirectX 12 is support for variable-rate shading, also known as coarse-grained shading. The idea behind variable-rate shading is simple: In the vast majority of 3D games, the player doesn’t pay equal attention to everything on-screen. As far as the GPU is concerned, however, every pixel on-screen is typically shaded at the same rate. VRS / CGS allows the shader work being done for a single pixel to be scaled across larger groups of pixels; Intel demoed this feature during its Architecture Day last year, showing off a 2×2 as well as a 4×4 grid block.
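To put rough numbers on the idea, here’s a minimal back-of-the-envelope sketch of how coarse shading cuts pixel-shader invocations across a frame. It is purely illustrative — not the DirectX 12 API itself, and the resolution and block sizes are assumptions — and real savings depend on how much of the screen a game actually marks for coarse shading.

```python
import math

def shader_invocations(width, height, block=1):
    """One pixel-shader invocation covers a block x block group of pixels."""
    return math.ceil(width / block) * math.ceil(height / block)

full_rate = shader_invocations(3840, 2160)            # 1x1: every pixel shaded individually
coarse    = shader_invocations(3840, 2160, block=2)   # 2x2: one result reused across 4 pixels

saving = 1 - coarse / full_rate
print(f"1x1 shading: {full_rate:,} invocations")
print(f"2x2 shading: {coarse:,} invocations ({saving:.0%} fewer)")
```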

In a blog post explaining the topic, Microsoft writes:

VRS allows developers to selectively reduce the shading rate in areas of the frame where it won’t affect visual quality, letting them gain extra performance in their games. This is really exciting, because extra perf means increased framerates and lower-spec’d hardware being able to run better games than ever before.

VRS also lets developers do the opposite: using an increased shading rate only in areas where it matters most, meaning even better visual quality in games.

VRS is a trick in a long line of tricks intended to help developers focus GPU horsepower where they need it most. It’s the sort of technique that’s going to become ever more important as Moore’s law slows down and it becomes harder and harder to wring more horsepower out of GPUs from process-node advances. 3DMark recently added a new benchmark to show the impact of VRS.

First, here’s a comparison of what the feature looks like enabled versus disabled.

VRS Disabled. Image provided by UL. Click to enlarge.

VRS Enabled. Image provided by UL. Click to enlarge.

There’s also a video of the effect in action, which gives you an idea of how it looks in motion.

As for the performance impact, Hot Hardware recently took the feature for a spin on the Gen11 graphics in Intel’s 10th Generation (Ice Lake) CPUs. Activating the feature improved performance by roughly 40 percent.

Data by Hot Hardware

These gains are not unique to Intel. HH also tested multiple Nvidia GPUs and saw strong gains for those cards as well. Unfortunately, VRS support is currently confined to Nvidia and Intel hardware — AMD does not support the capability and may not be able to enable it on current versions of Navi.

Elements in red receive full shading. Elements in green receive variable shading.

It always takes time to build support for features like this, so lacking the option at debut is not necessarily a critical problem. At the same time, features that save GPU horsepower by reducing the cost of rendering tend to be popular among developers. They can help games run on lower-power hardware and in form factors they might not otherwise support. Rasterization is essentially a collection of tricks for approximating what the real world looks like without actually simulating one, and choosing where to spend resources to maximize performance is exactly the sort of efficiency-boosting trick developers love. Right now, support is limited to a few architectures — Turing and Intel Gen 11 integrated — but that will change in time.

VRS isn’t currently used by any games, but Firaxis has demoed the effect in Civilization VI, implying that support might come to that title at some point. The new VRS benchmark is a free update to 3DMark Advanced or Professional Edition if you own those versions, but is not currently included in the free Basic edition.

The top image for this article is the VRS On screenshot provided by UL. Did you notice? Fun to check either way. 


RTX 2080 vs. Radeon VII vs. 5700 XT: Rendering and Compute Performance


Most of our GPU coverage focuses on the consumer side of the business and on game benchmarking, but I promised to examine the compute side of performance back when the Radeon VII launched. With the 5700 XT having debuted recently, we had an opportunity to return to this question with a new GPU architecture from AMD and compare RDNA against GCN.

In fact, the overall compute situation is at an interesting crossroads. AMD has declared that it wishes to be a more serious player in enterprise compute environments but has also said that GCN will continue to exist alongside RDNA in this space. The Radeon VII is a consumer variant of AMD’s MI50 accelerator, with FP64 throughput at roughly half the rate of the professional card. If you know you need double-precision FP64 compute, for example, the Radeon VII fills that niche in a way that no other GPU in this comparison does.

[Chart: AMD versus Nvidia GPU specification comparison]

The Radeon VII has the highest RAM bandwidth and it’s the only GPU in this comparison to offer much in the way of double-precision performance. But while these GPUs have relatively similar on-paper specs, there’s significant variance between them in terms of performance — and the numbers don’t always break the way you think they would.

One of AMD’s major talking points with the 5700 XT is that Navi represents a fundamentally new GPU architecture. The 5700 XT proved itself moderately faster than the Vega 64 in our consumer-side testing, but we wanted to check the compute situation as well. Keep in mind, however, that the 5700 XT’s newness also works against us a bit here. Some applications may need to be updated to take full advantage of its capabilities.

Regarding Blender 2.80

Our test results contain data from both Blender 2.80 and the standalone Blender benchmark, 1.0beta2 (released August 2018). Blender 2.80 is a major release for the application, and it contains a number of significant changes. The standalone benchmark is not compatible with Nvidia’s RTX family, which necessitated testing with the latest version of the software. Initially, we tested the Blender 2.80 beta, but then the final version dropped — so we dumped the beta results and retested.

Image by Blender

There are significant performance differences between the Blender 1.0beta2 benchmark and 2.80, and one scene, Classroom, does not render properly in the new version. This scene has been dropped from our 2.80 comparisons. Blender allows the user to specify a tile size in pixels to control how much of the scene is worked on at once. Code in the Blender 1.0beta2 benchmark’s Python files indicates that the test uses a tile size of 512×512 (X/Y coordinates) for GPUs and 16×16 for CPUs. Most of the scene files contained within the benchmark, however, use a tile size of 32×32 by default if loaded within Blender 2.80.

We tested Blender 2.80 in two different modes. First, we tested all compatible scenes using the default tile size those scenes loaded with. This was 16×16 for Barbershop_Interior, and 32×32 for all other scenes. Next, we tested the same renders with a default tile size of 512×512. Up until now, the rule with tile sizes has been that larger sizes were good for GPUs, while smaller sizes were good for CPUs. This appears to have changed somewhat with Blender 2.80. AMD and Nvidia GPUs show very different responses to larger tile sizes, with AMD GPUs accelerating with higher tile sizes and Nvidia GPUs losing performance.
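For readers who want to repeat this experiment, the snippet below shows roughly how a Blender 2.80 GPU run with a 512×512 tile size can be scripted. It’s a minimal sketch rather than our exact test harness: it assumes a scene is already open and that a Cycles GPU compute device has been enabled in Preferences.

```python
# Minimal Blender 2.80 script: render the open scene on the GPU with 512x512 tiles.
# Run from the Scripting workspace, or headless via:
#   blender --background scene.blend --python set_tiles_and_render.py
import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'   # the benchmark scenes use the Cycles renderer
scene.cycles.device = 'GPU'      # assumes a CUDA/OpenCL device is already enabled in Preferences

# Tile size controls how large a chunk of the frame is rendered at once.
scene.render.tile_x = 512
scene.render.tile_y = 512

scene.render.filepath = '//render_512x512.png'   # '//' means "next to the .blend file"
bpy.ops.render.render(write_still=True)
```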

Because the scene files we are testing were created in an older version of Blender, it’s possible that this might be impacting our overall results. We have worked extensively with AMD for several weeks to explore aspects of Blender performance on GCN GPUs. GCN, Pascal, Turing, and RDNA all show a different pattern of results when moving from 32×32 to 512×512, with Turing losing less performance than Pascal and RDNA gaining more performance in most circumstances than GCN.

All of our GPUs benefited substantially from not using a 16×16 tile size for Barbershop_Interior. While this scene defaults to 16×16, it does not render quickly at that tile size on any GPU.

Troubleshooting the different results we saw in the Blender 1.0beta2 benchmark versus the Blender 2.80 beta and, finally, Blender 2.80 final held up this review for several weeks, and we swapped through several AMD drivers while working on it. All of our Blender 2.80 results were, therefore, run using Adrenalin 2019 Edition 19.8.1.

Test Setup and Notes

All GPUs were tested on an Intel Core i7-8086K system using an Asus Prime Z370-A motherboard. The Vega 64, Radeon RX 5700 XT, and Radeon VII were all tested using Adrenalin 2019 Edition 19.7.2 (7/16/2019) for everything but Blender 2.80. All Blender 2.80 tests were run using 19.8.1, not 19.7.2. The Nvidia GeForce GTX 1080 and Gigabyte Aorus RTX 2080 were both tested using Nvidia’s 431.60 Game Ready Driver (7/23/2019).

CompuBench 2.0 runs GPUs through a series of tests intended to measure various aspects of their compute performance. Kishonti, the developer of CompuBench, doesn’t appear to offer any significant breakdown of how its tests are designed, however. Level set simulation likely refers to using level sets for the analysis of surfaces and shapes. Catmull-Clark subdivision is a technique used to create smooth surfaces. N-body simulations model dynamic particle systems under the influence of forces like gravity. TV-L1 optical flow is an implementation of an optical flow estimation method used in computer vision.
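As a concrete illustration of what one of these workloads computes, here’s a minimal CPU-side sketch of a single step of a gravitational N-body simulation. CompuBench’s own test runs on the GPU through OpenCL and is certainly more elaborate; this just shows the underlying math.

```python
import numpy as np

def nbody_step(pos, vel, mass, dt=1e-3, softening=1e-2):
    """Advance an N-body system one timestep under mutual gravity (G = 1)."""
    # Pairwise displacement vectors r_ij = pos_j - pos_i, shape (N, N, 3)
    diff = pos[np.newaxis, :, :] - pos[:, np.newaxis, :]
    # Softened |r|^3 keeps the i == j terms finite (they contribute zero force)
    dist3 = (np.sum(diff ** 2, axis=-1) + softening ** 2) ** 1.5
    accel = np.sum(diff * (mass[np.newaxis, :, None] / dist3[:, :, None]), axis=1)
    vel = vel + accel * dt
    pos = pos + vel * dt
    return pos, vel

rng = np.random.default_rng(0)
n = 1024
pos, vel, mass = rng.standard_normal((n, 3)), np.zeros((n, 3)), np.ones(n)
pos, vel = nbody_step(pos, vel, mass)
```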

SPEC Workstation 3.1 contains many of the same workloads as SPECViewPerf, but also has additional GPU compute workloads, which we’ll break out separately. A complete breakdown of the workstation test and its application suite can be found here. SPEC Workstation 3.1 was run in its 4K native test mode. While this test run was not submitted to SPEC for formal publication, our testing of SPEC Workstation 3.1 obeyed the organization’s stated rules for testing, which can be found here.

Nvidia GPUs were always tested with CUDA when CUDA was available.

We’ve cooked up two sets of results for you: a synthetic series of benchmarks, created with SiSoft Sandra, that investigates various aspects of how these chips compare, including processing power, memory latency, and internal characteristics, and a wider suite of tests that touch on compute and rendering performance in various applications. Since the SiSoft Sandra 2020 tests are all unique to that application, we’ve opted to break them out into their own slideshow.

The Gigabyte Aorus RTX 2080 results should be read as approximately equivalent to an RTX 2070S. The two GPUs perform nearly identically in consumer workloads and should match each other in workstation tasks as well.

SiSoft Sandra 2020

SiSoft Sandra is a general-purpose system information utility and full-featured performance evaluation suite. While it’s a synthetic test, it’s probably the most full-featured synthetic evaluation utility available, and Adrian Silasi, its developer, has spent decades refining and improving it, adding new features and tests as CPUs and GPUs evolve.

Our SiSoft Sandra-specific results are below. Some of our OpenCL results are a little odd where the 5700 XT is concerned, but according to Adrian, he’s not yet had the chance to optimize code for execution on the 5700 XT. Consider these results to be preliminary — interesting, but perhaps not yet indicative — as far as that GPU is concerned.

Our SiSoft Sandra 2020 benchmarks point largely in the same direction. If you need double-precision floating-point, the Radeon VII is a compute monster. While it’s not clear how many buyers fall into that category, there are certain places, like image processing and high-precision workloads, where the Radeon VII shines.

The RDNA-based Radeon 5700 XT does less to distinguish itself in these tests, but we’re also in contact with Silasi concerning the issues we ran into during testing. Improved support may change some of these results in the months ahead.

Test Results

Now that we’ve addressed Sandra performance, let’s turn to the rest of our benchmark suite. Our other results are included in the slideshow below:

Conclusions

What do these results tell us? A lot of rather interesting things. First of all, RDNA is downright impressive. Keep in mind that we’ve tested this GPU in professional and compute-oriented applications, none of which have been updated or patched to run on it. There are clear signs that this has impacted our benchmark results, including some tests that either wouldn’t run or ran slowly. Even so, the 5700 XT impresses.

Radeon VII impresses too, but in different ways than the 5700 XT. SiSoft Sandra 2020 shows the advantage this card can bring to double-precision workloads, where it offers far more performance than anything else on the market. AI and machine learning have become much more important of late, but if you’re working in an area where GPU double-precision is key, Radeon VII packs an awful lot of firepower. SiSoft Sandra does include tests that rely on D3D11 rather than OpenCL. But given that OpenCL is the chief competitor to CUDA, I opted to stick with it in all cases save for the memory latency tests, which globally showed lower latencies for all GPUs when D3D was used compared with OpenCL.

AMD has previously said that it intends to keep GCN in-market for compute, with Navi oriented towards the consumer market, but there’s no indication that the firm intends to continue evolving GCN on a separate trajectory from RDNA. The more likely meaning of this is that GCN won’t be replaced at the top of the compute market until Big Navi is ready at some point in 2020. Based on what we’ve seen, there’s a lot to be excited about on that front. There are already applications where RDNA is significantly faster than Radeon VII, despite the vast difference between the cards in terms of double-precision capability, RAM bandwidth, and memory capacity.

Blender 2.80 presents an interesting series of comparisons between RDNA, GCN, and CUDA. Using higher tile sizes has an enormous impact on GPU performance, but whether that difference is good or bad depends on which brand of GPU you use and which architectural family it belongs to. Pascal and Turing GPUs performed better with smaller tile sizes, while GCN GPUs performed better with larger ones. The 512×512 tile size was better in total for all GPUs, but only because it improved the total rendering time on Barbershop_Interior by more than it harmed the render time of every other scene for Turing and Pascal GPUs. The RTX 2080 was the fastest GPU in our Blender benchmarks, but the 5700 XT put up excellent performance results overall.

I do not want to make global pronouncements about Blender 2.80 settings; I am not a 3D rendering expert. These test results suggest that Blender performs better with larger tile settings on AMD GPUs but that smaller tile settings may produce better results for Nvidia GPUs. In the past, both AMD and Nvidia GPUs have benefited from larger tile sizes. This pattern could also be linked to the specific scenes in question, however. If you run Blender, I suggest experimenting with different scenes and tile sizes.

Ultimately, what these results suggest is that there’s more variation in GPU performance in some of these professional markets than we typically see in gaming. There are specific tests where the 5700 XT is markedly faster than the RTX 2080 or Radeon VII and other tests where it falls sharply behind them. OpenCL driver immaturity may account for some of this, but we see flashes of brilliance in these performance figures. The Radeon VII’s double-precision performance puts it in a class of its own in certain respects, but the Radeon RX 5700 XT is a far less expensive and quieter card. Depending on your target application, AMD’s new $400 GPU might be the best choice on the market. In other scenarios, both the Radeon VII and the RTX 2080 make their own claims to being the fastest card available.

Feature image is the final render of the Benchmark_Pavilion scene included in the Blender 1.0beta2 standalone benchmark.


Microsoft Makes It Easier to Bring DirectX 12 Games to Windows 7



When Microsoft launched Windows 10, it made its stance on DirectX 12 clear: Windows 10 would be the only OS that supported the company’s latest API, period. For years, the company stuck to this stance. Then, earlier this year, Microsoft announced that one game — World of Warcraft — would be allowed to take advantage of the DX12 API while running Windows 7.

The reason for this allowance? Probably China. World of Warcraft has always had a huge Chinese following, and Blizzard’s decision to add DX12 support to WoW was a significant step for both the developer and the API. Now, Microsoft has announced that it’s expanding this program. In a short blog post pointing to an array of API documents, Microsoft notes:

We have received warm welcome from the gaming community, and we continued to work with several game studios to further evaluate this work. To better support game developers at larger scales, we are publishing the following resources to allow game developers to run their DirectX 12 games on Windows 7.

The development guidance document for how to move DX12 to Windows 7 actually contains some useful information on how difficult it is to get games running under the older OS and what the differences are between the two. Microsoft states:

We only ported the D3D12 runtime to Windows 7. Therefore, the difference of Graphics Kernel found on Windows 7 still requires some game code changes, mainly around the presentation code path, use of monitored fences, and memory residency management (all of which will be detailed below). Early adopters reported from a few days to two weeks of work to have their D3D12 games up and running on Windows 7, though the actual engineering work required for your game may vary.

There are technical differences between DX12 on Windows 7 and DX12 on Windows 10. DirectML (Direct Machine Learning) is not supported under Windows 7, but all other features implemented in the October 2018 Windows 10 update are supported. There are differences in terms of API usage (D3D12 on Windows 7 uses different Present APIs), and some fence usage patterns are also unsupported.

There are, however, some limits to support. Only 64-bit Windows 7 with SP1 installed is supported. There’s no PIX or D3D12 debug layer on Windows 7, no shared surfaces or cross-API interop, no SLI/LDA support, no D3D12 video, and no WARP support. According to Microsoft, “HDR support is orthogonal to D3D12 and requires DXGI/Kernel/DWM functionalities on Windows 10 but not on Windows 7.” This seems to imply that HDR content can work in Windows 7, but it may be on the developer to implement it properly.

Microsoft has published additional resources on the topic, including a NuGet package and a D3D12 code sample that runs on Windows 7 and 10 with the same binary.

Why Make DX12 More Accessible?

This is honestly a little surprising to see. Windows 7 is supposed to be headed for firm retirement in a matter of months. The implication here is that Microsoft is taking this step to cater to gamers that are still using Windows 7, but the Steam Hardware Survey suggests that’s a distinct minority of gamers. Windows 10 has a 71.57 percent market share according to the SHS, while Windows 7 64-bit is pegged at 20.4 percent. What’s interesting here is that the SHS actually tilts much more towards Windows 10 than a generic OS survey.

[Chart: Chinese desktop OS market share]

StatCounter data puts Windows 10 at 58.63 percent of the market as of July, compared with 31.22 percent for Windows 7. This suggests that gamers tend to update their hardware more quickly than the mass market, which makes sense. But from what we’ve read, the Windows 7 gamers may be concentrated in China, where it remains the most popular OS. 49.46 percent of Chinese gamers are using Windows 7, compared with just 41.13 percent gaming under Windows 10. Even if we assume Chinese gamers are more likely to be using Windows 10 — and it’s not clear they are — there’s still a much larger share of Windows 7 users in that nation.

It’s not clear at all how Microsoft is going to deal with that problem as it relates to overall support, but it could be that this is Microsoft’s way of providing a certain degree of backward compatibility without committing to anything equivalent on the security front. Microsoft wants its customer base — all of it — on Windows 10. It’s surprising to see the company extending DX12 backward, but we’d be stunned if it granted Windows 7 a stay of execution and kept publishing patches for it.

MS could also be hoping to encourage devs to adopt DX12 more widely. Three years after debut, neither DX12 nor Vulkan has done much to revolutionize APIs or gaming. Developers do use the APIs, but we’ve seen comparatively little use of them to pull off anything unique. The need to support older hardware and a wide range of users, plus the fact that these APIs require developers to be more familiar with the underlying hardware, seems to be a drag on their overall usage.


Chinese Vendor Designs PCIe 4.0 GPU, Targets GTX 1080 Performance



The high-performance GPU industry has been a two-horse race for very nearly two decades. After the collapse of 3dfx, no new company emerged to seriously challenge the ATI/Nvidia split. While Intel holds a substantial share of the total GPU market, its integrated business has focused only on 2D, video, and basic 3D gaming. Intel’s upcoming Xe architecture, expected in 2020, will take a serious shot at breaking into the consumer space. Now, there’s word of a potential fourth player in the field, albeit possibly in a more specialized area.

According to THG, Jingjia Micro is a military-civilian integrated company that has primarily focused on developing GPUs for the military market thus far. The company began by building China’s first homegrown GPU, the 65nm JM5400. The success of the JM5400 allowed the company to expand and move to newer manufacturing nodes. Its next products, the JM7000 and 7200, were built on 28nm. Now, Jingjia Micro wants to expand its reach further and target the performance of the GTX 1050 and GTX 1080 with a pair of new designs — the JM9231 and JM9271.


Chart by THG

A post at cnbeta has additional information. Currently, the JM7200 is said to offer performance equivalent to the GeForce GT 640, albeit in a much lower power envelope — 10W, supposedly, compared with the 50W Nvidia specced for that card. We’d like to see that claim independently verified. The OEM variant of the GT 640 was a Fermi-based part built on 40nm, but that chip had a 65W TDP. The 50W variant was a Kepler-derived part built on 28nm — the same process node Jingjia Micro uses. The JM part also supposedly has 4GB of RAM, while the GT 640 50W version had just 1GB of GDDR5.

The JM9231 and JM9271 are supposedly the first fully programmable GPUs that Jingjia Micro has developed; there are references to the previous JM5400 and JM7200 families being based on fixed-function rendering pipelines. These limitations wouldn’t fly under modern APIs for Windows, but the company started life as a military GPU vendor, and such applications obviously have very different requirements for APIs and product certification.

The new JM parts obviously aren’t going to gun for the highest-end cards from Nvidia or AMD, but even approaching high-end performance from 2016 – 2017 would allow them to contend for the midrange and budget markets. Bringing up the software stack and winning developer support would obviously be critical to any market play, and there doesn’t seem to be any information about whether the JM9231 or JM9271 include any performance improvements or ideas that we haven’t seen before from the major vendors. Such events are rare, but not unheard of. PowerVR once attempted to establish itself as a third player in PC graphics with the Kyro and Kyro II, which won some market share for itself as a unique solution with higher memory bandwidth efficiency than either ATI or Nvidia.

The use of HBM memory in a product of this sort is rather interesting, as is the comparatively low memory bandwidth (by HBM standards). Given that both products lack modern API support, it’s possible they’re intended strictly for military use — though in that case, referencing the GTX 1080 would be a bit odd. Either way, China clearly has its eye on competing more aggressively in terms of overall silicon performance. A few more years and we might see ‘homegrown’ alternatives from vendors we haven’t seen before challenging established players like AMD, Nvidia, and (if its Xe launch goes well) Intel.


Microsoft: Xbox Next Will Bring Faster Load Times, 60fps, Backward Compatibility



The next console generation is less than 18 months away, and Microsoft is starting to share a little more information about what it’s prioritizing for the next generation of Xbox consoles. Playability, load times, and backward compatibility for controllers and software are all top priorities for Redmond with the launch of Xbox Next.

“I think the area that we really want to focus on next-generation is frame rate and playability of the games,” Xbox head Phil Spencer told GameSpot:

Ensuring that the games load incredibly fast, ensuring that the game is running at the highest frame rate possible. We’re also the Windows company, so we see the work that goes on [for] PC and the work that developers are doing. People love 60 frames-per-second games, so getting games to run at 4K 60 [fps] I think will be a real design goal for us.

The thing that’s interesting is, this generation, we’ve really focused on 4K visuals and how we bring both movies through 4K Blu-ray and video streaming, and with Xbox One X allowing games to run at 4K visuals will make really strong visual enhancements next generation. But playability is probably the bigger focus for us this generation. How fast do [games] load? Do I feel like I can get into the game as fast as possible and while it’s playing? How does it feel? Does this game both look and feel like no other game that I’ve seen? That’s our target.”

This is more or less what ET predicted earlier this year. 60fps is a much more realistic target for the Xbox Next than the 240fps rumor that was going around. Despite various vague statements that the Xbox Next will support 8K, Spencer sensibly makes no mention of it as a gaming resolution target. There’s no chance a 2020 console will have a GPU powerful enough to support this resolution and we’re glad to see the company pivoting towards an emphasis on other aspects of gaming.

According to Microsoft, backward compatibility is a key pillar for Xbox moving forward. Xbox One, Xbox 360, and OG Xbox games will all continue to be supported on Xbox Next, Spencer told Gamespot. The company has promised that this backwards compatibility pledge extends to controllers as well, saying, “So really, the things that you’ve bought from us, whether the games or the controllers that you’re using, we want to make sure those are future compatible with the highest fidelity version of our console, which at that time will obviously be the one we’ve just launched.”

Will Microsoft Actually Push a 60fps Target?

Historically, there have been a handful of games that specifically targeted 60fps for console play, but it’s been an uncommon frame rate target. The Xbox One X and PS4 Pro expanded the list of titles that offered this frame rate by encouraging developers to release updates for new and existing games that would add new resolution options or the ability to play at higher frame rates than the base title supported. Actually moving the game industry (back) towards a 60 fps target, however, would be a feat.

There’s some reason to think both console manufacturers could pull it off. The Xbox Next and PlayStation 5 will both target performance levels above the existing Xbox One X and PS4 Pro. The use of Ryzen and an RDNA-derived GPU for both platforms guarantees that the consoles will pack more performance, but the level of perceived visual quality improvement each console generation offers over the last has been shrinking every cycle. Instead of simply chasing improved levels of detail, Spencer wants developers to target smoothness and load times — two other objective areas where it’s possible to deliver major generational gains, particularly with SSDs being adopted for the first time.

[Chart: TV market share by resolution, via Statista]

One major question is how the 1080p/4K split will be addressed. Spencer refers to a 4K/60fps target, but 1080p still accounts for a large percentage of TVs sold and the install base for the older standard is enormous. The simplest way for Microsoft to handle a 1080p output limit is to render internally at 4K and then output at 1080p. This effectively applies supersampled AA to the entire image and would deliver a substantial improvement in image quality over standard 1080p. With the PS4 Pro and Xbox One X, both Microsoft and Sony gave developers a variety of ways they could use the additional power of the newer consoles to punch up the base experience, and we expect a similar approach here. One of the advantages of having a powerful GPU paired with a lower-resolution display is that you can crank up secondary features like AA without worrying about the performance impact, and we’re hoping Microsoft brings some of that flexibility to its Xbox Next design.
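The downsampling step itself is conceptually simple: every 2×2 block of the 4K frame is averaged into one 1080p pixel, which is what produces the supersampled look. The sketch below shows that box-filter arithmetic in NumPy; a real console would do this in shader or scaler hardware, so treat it purely as an illustration.

```python
import numpy as np

def downsample_4k_to_1080p(frame):
    """Average each 2x2 block of a 2160x3840 image into one 1080p pixel."""
    h, w, c = frame.shape                      # expects (2160, 3840, 3)
    blocks = frame.reshape(h // 2, 2, w // 2, 2, c)
    return blocks.mean(axis=(1, 3))            # -> (1080, 1920, 3)

frame_4k = np.random.rand(2160, 3840, 3).astype(np.float32)  # stand-in for a rendered frame
print(downsample_4k_to_1080p(frame_4k).shape)                # (1080, 1920, 3)
```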

The PC gamer in me can’t help noting that the already barely-there line between consoles and PCs will be even thinner next cycle. Consoles have provided backward compatibility before, but it’s often come with qualifiers related to your hardware version and been limited to one previous platform. Microsoft isn’t just going to support Xbox One games on Xbox Next; it’ll continue supporting Xbox 360 and OG Xbox titles, as well as Xbox One peripherals. That’s exactly the kind of backward compatibility we would expect when upgrading from one PC build to the next, and it’s nice to see consoles catching up after a few decades.

The flip side, of course, is that the console-versus-PC debate gets goofier every generation. At this point, you might as well just ask “controller or keyboard?” (keyboard, natch). Functionally, at the hardware level, we’re all gaming on PCs.


Why 110-Degree Temps Are Normal for AMD’s Radeon 5700, 5700 XT



AMD has published a blog post discussing how temperatures and thermals are calculated on its Navi GPUs. There has been some concern in the enthusiast community about the temperatures posted by reference cards, given that these GPUs can report thermal junction temps of up to 110 degrees Celsius. This is substantially hotter than the old temperature of 95 C, which used to be treated as a thermal trip point.

Beginning with Radeon VII, AMD made significant changes to how it measures temperature across the GPU die. In the past, AMD writes, “the GPU core temperature was read by a single sensor that was placed in the vicinity of the legacy thermal diode.” That single reading was used to make decisions governing the GPU’s voltage and operating frequency. Radeon VII and now Navi do things differently. Instead of deploying a single sensor, they use a network of sensors spread across the GPU. AMD has deployed the same AVFS (Adaptive Voltage and Frequency Scaling) strategy it uses for Ryzen to maximize the performance of its GPUs.

AVFS deploys a network of on-die sensors across the entire chip rather than relying on a single point of measurement. Rather than calibrating voltages and frequencies at the factory and preprogramming a series of defined voltage and frequency steps that all CPUs must achieve, AVFS dynamically measures and delivers the voltage required for each individual CPU to hit its desired clock frequencies. This allows for finer-grained power management across the CPU, improving both performance and power efficiency across a range of targets.
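Conceptually, AVFS replaces a short table of fixed voltage/frequency states, identical for every chip, with a continuous curve measured on each individual die. The toy sketch below contrasts the two approaches; every number in it is invented for illustration and has nothing to do with AMD’s actual tables.

```python
import numpy as np

# Old approach: a handful of discrete DPM states (frequency MHz, voltage mV),
# programmed at the factory and shared by every chip of the same model.
DPM_STATES = [(852, 800), (1138, 900), (1366, 1000), (1536, 1100)]

def voltage_discrete(freq_mhz):
    """Pick the lowest predefined state that can sustain the requested clock."""
    for f, v in DPM_STATES:
        if f >= freq_mhz:
            return v
    return DPM_STATES[-1][1]

# AVFS-style: a per-chip voltage/frequency curve, interpolated continuously.
chip_freqs = np.array([800, 1100, 1400, 1700, 1905])   # MHz (illustrative)
chip_volts = np.array([760, 840, 930, 1040, 1170])     # mV  (illustrative, varies chip to chip)

def voltage_avfs(freq_mhz):
    return float(np.interp(freq_mhz, chip_freqs, chip_volts))

print(voltage_discrete(1400), "mV at 1400 MHz with discrete states")
print(voltage_avfs(1400), "mV at 1400 MHz with a per-chip curve")
```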

The 110-degree junction temperature is not evidence of a problem or a sudden issue with AMD graphics cards. AMD now measures GPU temperatures in more locations and reports additional data points because it has adopted more sophisticated measurement methods. Arguing that the company should be penalized for reporting data more accurately is akin to arguing that manufacturers ought to hide data because they’re afraid some customers won’t understand it or put it in the proper context.

AMD provides a pair of graphs to illustrate the difference between the measurement system used on Vega 64 and earlier cards and the way it calibrates voltage on the 5700 XT today. The old discrete-state method is shown below:

[Chart: Vega 64 discrete DPM states]

Now, compare that against the frequency/voltage curve for the 5700 XT.

[Chart: Radeon RX 5700 XT fine-grained DPM voltage/frequency curve]

The 5700 XT is designed to continue boosting performance until it hits its thermal junction threshold. From the company’s blog post:

Paired with this array of sensors is the ability to identify the ‘hotspot’ across the GPU die. Instead of setting a conservative, ‘worst case’ throttling temperature for the entire die, the Radeon RX 5700 series GPUs will continue to opportunistically and aggressively ramp clocks until any one of the many available sensors hits the ‘hotspot’ or ‘Junction’ temperature of 110 degrees Celsius. Operating at up to 110C Junction Temperature during typical gaming usage is expected and within spec. This enables the Radeon RX 5700 series GPUs to offer much higher performance and clocks out of the box, while maintaining acoustic and reliability targets.
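In pseudocode terms, the behavior AMD describes boils down to a simple control loop: keep nudging the clock upward while the hottest of the many on-die sensors stays under the junction limit. The sketch below is a toy model with invented clock values, not AMD’s firmware logic.

```python
import random

JUNCTION_LIMIT_C = 110   # the hotspot limit AMD cites for the RX 5700 series

def boost_clock(read_sensor_temps, clock_mhz=1605, max_clock_mhz=1905, step_mhz=25):
    """Opportunistically raise the GPU clock until any sensor nears the junction limit."""
    while clock_mhz < max_clock_mhz and max(read_sensor_temps()) < JUNCTION_LIMIT_C:
        clock_mhz += step_mhz
    return clock_mhz

# Stand-in for the on-die sensor network; real hardware samples many locations per frame.
fake_sensors = lambda: [random.uniform(70, 108) for _ in range(64)]
print(boost_clock(fake_sensors), "MHz sustained")
```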

There’s a certain knee-jerk “I don’t want 110-degree anything in my case!” reaction from enthusiasts that’s both perfectly understandable and somewhat misguided. There’s an unconscious underlying assumption that 110 degrees Celsius represents a dangerous temperature (it doesn’t) or an extremely loud cooler. The 5700 XT and 5700 are much quieter than Vega 64, but if that’s still too loud, third-party cards are starting to hit the market. Companies like Asus were able to build coolers that handled the R9 290X beautifully, so the 5700 XT should be tamable as well.

Higher temperatures are partially an artifact of better measurement. They’re also a reality of advanced silicon manufacturing nodes. Our ability to pack transistors closer together has outstripped our ability to reduce their power consumption by cutting operating voltages. As a result, increasing transistor density encourages hot spot formation and drives higher peak temperatures. AVFS helps mitigate this tendency by ensuring that operating voltage is precisely mapped to frequency, but it can’t fix the fact that AMD has packed more transistors into a smaller space, leading to higher thermal density.

Higher temperatures are not an intrinsic reason to be concerned about a product provided the manufacturer certifies that this is expected behavior. When I got into computing, a CPU temperature of 50 C (measured via in-socket thermistor) was considered extremely high. Today, Intel and AMD build silicon that can operate reliably at 95C or above for years at a time.


AMD, Nvidia High-End GPUs Are Much Better Deals Now Than 6 Months Ago



Back in February, I wrote a story about how AMD and Nvidia had collectively launched the least-appealing high-end GPU refresh cycle in the history of the gaming industry. After the launch of AMD’s Navi 5700 and 5700 XT, and Nvidia’s rejoinder with the RTX 2060 Super and 2070 Super, it makes sense to revisit that conclusion. How much have things improved, just over half a year later?

They’ve actually improved a lot if you’re buying at the upper end of the market. Before we examine the specifics of the changes, let me clarify some terms. Historically, GPU price brackets look something like this:

Budget: $150 or less.
Midrange: $150 – $300.
High-End: $300 – $500.
Ultra-High-End: $500+.

When Nvidia introduced the RTX family, it significantly raised prices. Instead of the GTX 1070 at around $370 and the GTX 1080 at $500 – $550, the RTX 2070 was a $500 GPU, the RTX 2080 cost $700, and the 2080 Ti effectively ran $1,100 – $1,200 ($1,000 technically, but nobody ever seemed to sell it for that price, as far as I can tell).

There are two basic ways for a publication like ours to handle this: hold to our own price banding and fit the new cards into it, or raise our price bands to accommodate the manufacturer. Take the latter approach, and AMD’s Navi GPUs become “midrange” cards despite carrying price tags of $350 and $400. This is also how you wind up with articles referring to the $750 iPhone XR as “entry-level” or “budget,” as if Apple hadn’t just killed the only pseudo-budget device it offered, the $350 iPhone SE.

Adjusting price bands to reflect what companies are selling isn’t wrong, so long as it tracks with what customers are buying. Nvidia’s upcoming Q2 numbers should provide more confirmation here, but available data suggested Turing sales badly lagged Pascal at launch and may not have recovered since. If Nvidia truly thought it had established ray tracing as a feature gamers were willing to pay for, it wouldn’t have cut pricing on its RTX 2060, 2070, and 2080 GPUs at all.

So far as ExtremeTech is concerned, at least for now, the Navi 5700 and 5700 XT are high-end cards, as are the RTX 2060, 2060 Super, 2070, and 2070 Super. The RTX 2080, 2080 Super, and 2080 Ti belong to their own, separate category of ultra-high-end devices.

Evaluating the Improvement

We recently measured long-term performance evolution in a variety of GPUs, but we can use that data set for a different purpose. Keep in mind that in the graph series below, the GeForce RTX 2080 (non-Super) offers roughly identical performance to the RTX 2070 Super (the 2070S typically lands within 95 to 105 percent of the RTX 2080’s performance).

Comparing RTX 2070S/2080 against GTX 1080, we see minimum frame rates are 1.18x higher at 1080p, 1.28x higher at 1440p, and 1.4x higher at 4K. Average frame rates across our entire suite of games are 1.3x higher at 1080p, 1.4x higher at 1440p, and 1.44x higher at 4K.
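To make those multipliers concrete, here’s a quick back-of-the-envelope conversion. The 60 fps and 45 fps baselines are assumed round numbers chosen for illustration, not measured GTX 1080 results; only the uplift figures come from our data.

```python
# Translate the relative uplift figures above into illustrative frame rates.
assumed_gtx1080_avg_4k = 60.0   # assumed baseline, not a measurement
assumed_gtx1080_min_4k = 45.0   # assumed baseline, not a measurement

avg_uplift_4k = 1.44            # RTX 2070S/2080 average-fps uplift at 4K (from the text)
min_uplift_4k = 1.40            # RTX 2070S/2080 minimum-fps uplift at 4K (from the text)

print(f"Average: {assumed_gtx1080_avg_4k * avg_uplift_4k:.0f} fps")   # ~86 fps
print(f"Minimum: {assumed_gtx1080_min_4k * min_uplift_4k:.0f} fps")   # ~63 fps
```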

I don’t have the same level of data on the GTX 1070 to compare with the RTX 2060 Super, but we know that the 2060S improves on the original RTX 2060 by about 1.15x, that it performs nearly identically to the original RTX 2070, and that the GPU’s new $400 price point puts it closer to the original GTX 1070’s price than to the GTX 1080’s.

As for AMD, the 5700 and 5700 XT are effectively replacements for the Vega 56 and Vega 64. The slideshow below contains the results from our RX 5700 and 5700 XT review. The Radeon RX 5700 matches Vega 64 in virtually every test but costs $350 as opposed to $500. It draws 74 percent as much power while outperforming the RTX 2060.

As upgrades for existing Vega 56 and Vega 64 owners, the best case is going to be moving from Vega 56 to the RX 5700 XT. I’m estimating the gains here, but I’m fairly sure they aren’t as large as the improvements between Pascal and Turing at Turing’s adjusted prices. Vega 56 was typically 1.08x – 1.12x slower than Vega 64, but the 5700 XT’s lead over Vega 64 varies significantly depending on the game. In a few cases, the two GPUs are tied.

AMD gamers with older cards or Nvidia gamers looking to switch sides are the more likely customers for RX 5700 and RX 5700 XT, and the performance these cards offer makes them a potentially attractive upgrade in these markets.

A Significant Improvement

AMD’s new launches have restored a better, more consumer-friendly balance to the upper end of the GPU market. The ultra-high-end market remains less friendly. The RTX 2080 Super offers the smallest performance improvement of all the “Super” cards and does not do a very good job of justifying its $200 price premium over the RTX 2070 Super. Both the Radeon VII and RTX 2080 Super are only justifiable if you’re actually gaming in 4K, and honestly, they aren’t all that compelling even in that situation.

AMD has said nothing about its plans for the midrange market yet, but the company must be working on cards to refresh this space as well. Hopefully, it won’t be long before we have far more power-efficient, higher-performing chips ready to take the place of the RX 570, 580, and 590.

As for whether Navi or Turing is a better upgrade path, that’s going to depend a bit on what you want: A bit more speed (relative to the competition), or features like ray tracing? Some users may not feel that even these gains are sufficient, which I understand. But we can at least say that there are gains in performance/dollar relative to the previous generation. Six months ago, it wasn’t possible to make that claim.
