Chinese Foundry SMIC Begins 14nm Production


This site may earn affiliate commissions from the links on this page. Terms of use.

One of the longstanding trends in semiconductor manufacturing has been a steady decrease in the number of major foundry players. Twenty years ago, when 180nm manufacturing was cutting-edge technology, no fewer than 28 firms deployed the node. Today, there are three companies building 7nm-class technology — Samsung, TSMC, and Intel (whose 10nm node is broadly comparable to foundry 7nm). A fourth, GlobalFoundries, has since quit the cutting-edge business to focus on specialty foundry technologies like its 22FDX and 12FDX processes.

What sometimes gets lost in this discussion, however, is the existence of a secondary group of foundry companies that do deploy new nodes — just not at the cutting edge of technological research. China’s Semiconductor Manufacturing International Corporation (SMIC) has announced that it will begin recognizing 14nm revenue from volume production by the end of 2019, a little more than five years after Intel began shipping on this node. TSMC, Samsung, and GlobalFoundries all have extensive 14nm-class capability in production, as does UMC, which introduced the node in 2017.

Secondary sources for a node, like UMC and SMIC, often aren’t captured in comparative manufacturing charts like the one below because the companies in question offer these nodes after they’ve been deployed as cutting-edge products by major foundries. In many cases, they’re tapped by smaller customers with products that don’t make news headlines.

[Chart: comparative foundry manufacturing nodes by company]

SMIC, however, is something of a special case. SMIC is mainland China’s largest semiconductor manufacturer and builds chips ranging from 350nm to 14nm. The company has two factories with the ability to process 300mm wafers, but while moving to 14nm is a major part of China’s long-term semiconductor initiative, SMIC isn’t expected to have much 14nm capacity any time soon. The company’s high utilization rate (~94 percent) precludes it from having much additional capacity to dedicate to 14nm production. SMIC is vital to China’s long-term manufacturing goals; the country’s “Made in China 2025” plan calls for 70 percent of its domestic semiconductor demand to come from local companies by 2025. Boosting production at SMIC and bringing new product lines online is vital to that goal. That distinguishes the company from a foundry like UMC, which has generally chosen not to compete with TSMC for leading-edge process nodes. SMIC wants that business — it just can’t compete for it yet.

Dr. Zhao Haijun and Dr. Liang Mong Song, SMIC’s Co-Chief Executive Officers, released a statement on the company’s 14nm ramp, saying:

FinFET research and development continues to accelerate. Our 14nm is in risk production and is expected to contribute meaningful revenue by year-end. In addition, our second-generation FinFET N+1 has already begun customer engagement. We maintain long-term and steady cooperation with customers and clutch onto the opportunities emerging from 5G, IoT, automotive and other industry trends.

Currently, only 16 percent of the semiconductors used in China are built there, but the country is adding semiconductor production capacity faster than anywhere else on Earth. The company is investing in a $10B fab that will be used for dedicated 14nm production. SMIC is already installing equipment in the completed building, so production should ramp up in that facility in 2020. Once online, the company will have significantly more 14nm capacity at its disposal (major known customers of SMIC include HiSilicon and Qualcomm). Texas Instruments has built with the company in the past (it isn’t clear if it still does), as has Broadcom. TSMC and SMIC have gone through several rounds of litigation over IP misappropriation; both cases were settled out of court with substantial payments to TSMC.

Despite this spending, analysts do not expect SMIC to immediately catch up with major foundry players from other countries; analysts told CNBC it would take a decade for the firm to close the gap. Exact dimensions of SMIC’s 14nm node are unknown. Foundry nodes are defined by the individual company, not by any overarching standards organization or in reference to any specific metric. Those looking for additional information on that topic will find it in the process-node explainer below.


The renaissance of silicon will create industry giants


Every time we binge on Netflix or install a new internet-connected doorbell in our home, we’re adding to a tidal wave of data. In just 10 years, bandwidth consumption has increased 100 fold, and it will only grow as we layer on the demands of artificial intelligence, virtual reality, robotics and self-driving cars. According to Intel, a single robo car will generate 4 terabytes of data in 90 minutes of driving. That’s roughly 3,000 times the amount of data a typical person generates in a day chatting, watching videos and engaging in other internet pastimes.
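A quick back-of-envelope check makes the scale concrete. The 1.5GB/day per-person figure used below is Intel's oft-cited companion estimate from the same era, an assumption added here for illustration:

```python
# Sanity-check Intel's robo-car figure: 4 TB generated in 90 minutes of driving.
CAR_BYTES = 4e12          # 4 TB
MINUTES = 90

bytes_per_second = CAR_BYTES / (MINUTES * 60)
print(f"sustained rate: {bytes_per_second / 1e6:.0f} MB/s")

# Intel's companion estimate (assumption, not stated in the article above):
# an average person generates ~1.5 GB of data per day.
PERSON_BYTES_PER_DAY = 1.5e9
equivalent_people = CAR_BYTES / PERSON_BYTES_PER_DAY
print(f"one car ~= the daily data of {equivalent_people:,.0f} people")
```

That works out to roughly 740 MB/s of sustained data, and one car's daily output equal to a small town's worth of ordinary internet users.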

Tech companies have responded by building massive data centers full of servers. But growth in data consumption is outpacing even the most ambitious infrastructure build outs. The bottom line: We’re not going to meet the increasing demand for data processing by relying on the same technology that got us here.

The key to data processing is, of course, semiconductors, the transistor-filled chips that power today’s computing industry. For the last several decades, engineers have been able to squeeze more and more transistors onto smaller and smaller silicon wafers — an Intel chip today squeezes more than 1 billion transistors onto a millimeter-sized piece of silicon.

This trend is commonly known as Moore’s Law, for the Intel co-founder Gordon Moore and his famous 1965 observation that the number of transistors on a chip doubles every year (later revised to every two years), thereby doubling the speed and capability of computers.
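That doubling cadence compounds quickly. A minimal sketch, using the 1971 Intel 4004 (~2,300 transistors) as an illustrative baseline rather than a claim about any specific modern product:

```python
# Moore's Law as compound growth: an idealized two-year doubling cadence.
BASE_YEAR, BASE_COUNT = 1971, 2300   # Intel 4004 as the illustrative baseline

def transistors(year: int) -> float:
    """Idealized transistor count after (year - 1971) years of 2-year doubling."""
    return BASE_COUNT * 2 ** ((year - BASE_YEAR) / 2)

for year in (1971, 1991, 2011, 2021):
    print(year, f"{transistors(year):.3g}")
```

Ten doublings (20 years) turn thousands of transistors into millions; ten more turn millions into billions.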

This exponential growth of power on ever-smaller chips has reliably driven our technology for the past 50 years or so. But Moore’s Law is coming to an end, due to an even more immutable law: material physics. It simply isn’t possible to squeeze more transistors onto the tiny silicon wafers that make up today’s processors.

Compounding matters, the general-purpose chip architecture in wide use today, known as x86, which has brought us to this point, isn’t optimized for computing applications that are now becoming popular.

That means we need a new computing architecture. Or, more likely, multiple new computer architectures. In fact, I predict that over the next few years we will see a flowering of new silicon architectures and designs that are built and optimized for specialized functions, including data intensity, the performance needs of artificial intelligence and machine learning and the low-power needs of so-called edge computing devices.

The new architects

We’re already seeing the roots of these newly specialized architectures on several fronts. These include Graphics Processing Units from Nvidia, Field Programmable Gate Arrays from Xilinx and Altera (acquired by Intel), smart network interface cards from Mellanox (acquired by Nvidia) and a new category of programmable processor called a Data Processing Unit (DPU) from Fungible, a startup Mayfield invested in. DPUs are purpose-built to run all data-intensive workloads (networking, security, storage), and Fungible combines the DPU with a full-stack platform for cloud data centers that works alongside the old workhorse CPU.

These and other purpose-designed silicon will become the engines for one or more workload-specific applications — everything from security to smart doorbells to driverless cars to data centers. And there will be new players in the market to drive these innovations and adoptions. In fact, over the next five years, I believe we’ll see entirely new semiconductor leaders emerge as these services grow and their performance becomes more critical.

Let’s start with the computing powerhouses of our increasingly connected age: data centers.

More and more, storage and computing are being done at the edge; that is, closer to where our devices need them. Examples include the facial recognition software in our doorbells or in-cloud gaming that’s rendered on our VR goggles. Edge computing allows these and other processes to happen within 10 milliseconds or less, which makes them workable for end users.


With the arithmetic-oriented computations of the x86 CPU architecture, deploying data services at scale can be a challenge. Driverless cars need massive, data-center-level agility and speed. You don’t want a car buffering when a pedestrian is in the crosswalk. As our workload infrastructure — and the needs of things like driverless cars — becomes ever more data-centric (storing, retrieving and moving large data sets across machines), it requires a new kind of microprocessor.

Another area that requires new processing architectures is artificial intelligence, both in training AI and running inference (the process AI uses to infer things about data, like a smart doorbell recognizing the difference between an in-law and an intruder). Graphics Processing Units (GPUs), which were originally developed to handle gaming, have proven faster and more efficient at AI training and inference than traditional CPUs.

But in order to process AI workloads (both training and inference), for image classification, object detection, facial recognition and driverless cars, we will need specialized AI processors. The math needed to run these algorithms requires vector processing and floating-point computations at dramatically higher performance than general purpose CPUs provide.

Several startups are working on AI-specific chips, including SambaNova, Graphcore and Habana Labs. These companies have built new AI-specific chips for machine intelligence. They lower the cost of accelerating AI applications and dramatically increase performance. Conveniently, they also provide a software platform for use with their hardware. Of course, the big AI players like Google (with its custom Tensor Processing Unit chips) and Amazon (which has created an AI chip for its Echo smart speaker) are also creating their own architectures.

Finally, we have our proliferation of connected gadgets, also known as the Internet of Things (IoT). Many of our personal and home tools (such as thermostats, smoke detectors, toothbrushes and toasters) operate on ultra-low power.

ARM processors, a family of low-power CPU designs, will be tasked with these roles. That’s because such gadgets do not require computing complexity or a lot of power. The ARM architecture is perfectly suited to them: it handles a smaller set of computing instructions, can operate at high speeds (churning through many millions of instructions per second) and does so at a fraction of the power required for performing complex instructions. I even predict that ARM-based server microprocessors will finally become a reality in cloud data centers.

So with all the new work being done in silicon, we seem to be finally getting back to our original roots. I commend the entrepreneurs who are putting the silicon back into Silicon Valley. And I predict they will create new semiconductor giants.



Why chipmaker Broadcom is spending big bucks for aging enterprise software companies


Last year Broadcom, a chipmaker, raised eyebrows when it acquired CA Technologies, an enterprise software company with a broad portfolio of products, including a sizable mainframe software tools business. It paid close to $19 billion for the privilege.

Then last week, the company opened up its wallet again and forked over $10.7 billion for Symantec’s enterprise security business. That’s almost $30 billion for two aging enterprise software companies. There has to be some sound strategy behind these purchases, right? Maybe.

Here’s the thing about older software companies. They may not out-innovate the competition anymore, but what they have going for them is a backlog of licensing revenue that appears to have value.



How Are Process Nodes Defined?



We talk a lot about process nodes at ExtremeTech, but we don’t often refer back to what a process node technically is. With Intel’s 10nm node moving towards production, I’ve noticed an uptick in conversations around this issue and confusion about whether TSMC and Samsung possess a manufacturing advantage over Intel (and, if they do, how large an advantage they possess).

Process nodes are typically named with a number followed by the abbreviation for nanometer: 32nm, 22nm, 14nm, etc. There is no fixed, objective relationship between any feature of the CPU and the name of the node. This was not always the case. From roughly the 1960s through the end of the 1990s, nodes were named based on their gate lengths. This chart from IEEE shows the relationship:

[Chart: IEEE data on process node names versus gate length over time]

For a long time, gate length (the length of the transistor gate) and half-pitch (half the distance between two identical features on a chip) matched the process node name, but the last time this was true was 1997. The half-pitch continued to match the node name for several generations but is no longer related to it in any practical sense. In fact, it’s been a very long time since our geometric scaling of processor nodes actually matched with what the curve would look like if we’d been able to continue actually shrinking feature sizes.

[Chart: 2010 ITRS scaling summary]

Well below 1nm before 2015? Pleasant fantasy.

If we’d hit the geometric scaling requirements to keep node names and actual feature sizes synchronized, we’d have plunged below 1nm manufacturing six years ago. The numbers that we use to signify each new node are just numbers that companies pick. Back in 2010, the ITRS (more on them in a moment) referred to the technology chum bucket dumped in at every node as enabling “equivalent scaling.” As we approach the end of the nanometer scale, companies may begin referring to angstroms instead of nanometers, or we may simply start using decimal points. When I started work in this industry it was much more common to see journalists refer to process nodes in microns instead of nanometers — 0.18-micron or 0.13-micron, for example, instead of 180nm or 130nm.
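The traditional node ladder approximates a 0.7x linear shrink per generation (0.7 squared is roughly 0.5, i.e. half the area per transistor). A short sketch of that idealized cadence shows where the familiar node names come from — and why, continued indefinitely, the numbers dive below 1nm:

```python
# Iterate the idealized 0.7x-per-generation shrink from 180nm.
# Real node *names* kept following this cadence long after real
# feature sizes stopped shrinking on schedule.
node = 180.0
ladder = []
while node >= 1.0:
    ladder.append(round(node))
    node *= 0.7

print(ladder)
```

The first entries of the sequence (180, 126, 88, 62, 43, 30, 21, 15, ...) are simply the familiar marketing names rounded to friendly numbers; fifteen generations in, the ladder drops under 1nm.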

How the Market Fragmented

Semiconductor manufacturing involves tremendous capital expenditure and a great deal of long-term research. The average length of time between when a new technological approach is introduced in a paper and when it hits widescale commercial manufacturing is on the order of 10-15 years. Decades ago, the semiconductor industry recognized that it would be to everyone’s advantage if a general roadmap existed for node introductions and the feature sizes those nodes would target. This would allow for the broad, simultaneous development of all the pieces of the puzzle required to bring a new node to market. For many years, the ITRS — the International Technology Roadmap for Semiconductors — published a general roadmap for the industry. These roadmaps stretched over 15 years and set general targets for the semiconductor market.

[Chart: ITRS semiconductor roadmap]

Image by Wikipedia

The ITRS was published from 1998 to 2015. From 2013-2014, the ITRS reorganized into the ITRS 2.0, but soon recognized that the scope of its mandate — namely, to provide “the main reference into the future for university, consortia, and industry researchers to stimulate innovation in various areas of technology” — required the organization to drastically expand its reach and coverage. The ITRS was retired and a new organization was formed, called the IRDS — International Roadmap for Devices and Systems — with a much larger mandate, covering a wider set of technologies.

This shift in scope and focus mirrors what’s been happening across the foundry industry. The reason we stopped tying gate length or half-pitch to node size is that they either stopped scaling or began scaling much more slowly. As an alternative, companies have integrated various new technologies and manufacturing approaches to allow for continued node scaling. At 40/45nm, companies like GF and TSMC introduced immersion lithography. Double-patterning was introduced at 32nm. Gate-last manufacturing was a feature of 28nm. FinFETs were introduced by Intel at 22nm and by the rest of the industry at the 14/16nm node.

Companies sometimes introduce features and capabilities at different times. AMD and TSMC introduced immersion lithography at 40/45nm, but Intel waited until 32nm to use that technique, opting to roll out double-patterning first. GlobalFoundries and TSMC began using double-patterning more extensively at 32/28nm. TSMC used gate-last construction at 28nm, while Samsung and GF used gate-first technology. But as progress has gotten slower, we’ve seen companies lean more heavily on marketing, with a greater array of defined “nodes.” Instead of waterfalling down a fairly large numerical space (90, 65, 45), companies like Samsung are launching nodes that sit right on top of each other, numerically speaking.

I think you can argue that this product strategy isn’t very clear, because there’s no way to tell which process nodes are evolved variants of earlier nodes unless you have the chart handy. But a lot of the explosion in node names is basically marketing.

Why Do People Claim Intel 7nm and TSMC/Samsung 10nm Are Equivalent?

While node names are not tied to any specific feature size, and some features have stopped scaling, semiconductor manufacturers are still finding ways to improve on key metrics. The chart below is drawn from WikiChip, but it combines the known feature sizes for Intel’s 10nm node with the known feature sizes for TSMC’s and Samsung’s 7nm node. As you can see, they’re very similar:

[Table: Intel 10nm versus TSMC and Samsung 7nm feature sizes]

Image by ET, compiled from data at WikiChip

The delta 14nm / delta 10nm column shows how much each company scaled a particular feature down from its previous node. Intel and Samsung have a tighter minimum metal pitch than TSMC does, but TSMC’s high-density SRAM cells are smaller than Intel’s, likely reflecting the needs of different customers at the Taiwanese foundry. Samsung’s cells, meanwhile, are even smaller than TSMC’s. Overall, however, Intel’s 10nm process hits many of the same key metrics as what both TSMC and Samsung are calling 7nm.
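One crude way to compare nodes from figures like these is the product of contacted poly pitch (CPP) and minimum metal pitch (MMP), a rough proxy for standard-cell density. The pitches below are the widely reported WikiChip values, in nanometers:

```python
# CPP x MMP as a crude density proxy: a smaller product means tighter
# standard cells. Pitches are the commonly published WikiChip figures (nm).
nodes = {
    "Intel 10nm":  {"cpp": 54, "mmp": 36},
    "TSMC 7nm":    {"cpp": 57, "mmp": 40},
    "Samsung 7nm": {"cpp": 54, "mmp": 36},
}

baseline = nodes["TSMC 7nm"]["cpp"] * nodes["TSMC 7nm"]["mmp"]
for name, n in nodes.items():
    area = n["cpp"] * n["mmp"]
    print(f"{name}: CPP x MMP = {area} nm^2, "
          f"relative density ~{baseline / area:.2f}x TSMC 7nm")
```

On this proxy, Intel 10nm and Samsung 7nm land roughly 17 percent denser than TSMC 7nm, consistent with treating all three as the same class of node. Real chip density also depends on cell height, fin count, and design rules, so treat this strictly as a first-order comparison.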

Individual chips may still have features that depart from these sizes due to particular design goals. The information manufacturers provide on these numbers is for a typical expected implementation on a given node, not necessarily an exact match for any specific chip.

There have been questions about how closely Intel’s 10nm+ process (used for Ice Lake) reflects these figures (which I believe were published for Cannon Lake). It’s true that the expected specifications for Intel’s 10nm node may have changed slightly, but 14nm+ was an adjustment from 14nm as well. Intel has stated that it is still targeting a 2.7x scaling factor for 10nm relative to 14nm, so we’ll hold off on any speculation about how 10nm+ may be slightly different.

Pulling It All Together

The best way to understand the meaning of a new process node is to think of it as an umbrella term. When a foundry talks about rolling out a new process node, what they are saying boils down to this:

“We have created a new manufacturing process with smaller features and tighter tolerances. In order to achieve this goal, we have integrated new manufacturing technologies. We refer to this set of new manufacturing technologies as a process node because we want an umbrella term that allows us to capture the idea of progress and improved capability.”

Any additional questions on the topic? Drop them below and I’ll answer them.


Samsung posts 55.6% drop in second-quarter profit as it copes with weak demand and a trade dispute


As it forecast earlier this month, Samsung reported a steep drop in its second-quarter earnings due to lower market demand for chips and smartphones. The company said its second-quarter operating profit fell 55.6% year-over-year to 6.6 trillion won (about $5.6 billion), on consolidated revenue of 56.13 trillion won, slightly above the guidance it issued three weeks ago.
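The arithmetic is easy to sanity-check: working backward from the reported figure and the percentage drop gives the implied year-ago profit:

```python
# If Q2 2019 operating profit was 6.6T won, down 55.6% year-over-year,
# the implied Q2 2018 figure follows directly.
q2_2019 = 6.6          # trillion won
drop = 0.556           # 55.6% decline

q2_2018_implied = q2_2019 / (1 - drop)
print(f"implied Q2 2018 operating profit: ~{q2_2018_implied:.1f}T won")
```

That implies a year-ago operating profit of roughly 14.9 trillion won, consistent with the record profits Samsung was reporting in mid-2018.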

Last quarter, Samsung also reported that its operating profit had dropped by more than half. The same issues that hit its earnings during the first quarter of this year have continued, including lower memory prices as major datacenter customers adjust their inventory, meaning they are currently buying fewer chips (the weak market also impacted competing semiconductor maker SK Hynix’s quarterly earnings).

Samsung reported that its chip business saw second-quarter operating profit drop 71% year-over-year to 3.4 trillion won, on consolidated revenue of 16.09 trillion won. In the second half of the year, the company expects to continue dealing with market uncertainty, but says demand for chips will increase “on strong seasonality and adoption of higher-density products.”

Meanwhile, Samsung’s mobile business reported a 42% drop in operating profit from a year ago to 1.56 trillion won, on 25.86 trillion won in consolidated revenue. The company said its smartphone shipments increased quarter-over-quarter thanks to strong sales of its budget Galaxy A series. But sales of flagship models fell, due to “weak sales momentum for the Galaxy S10 and stagnant demand for premium products.”

Samsung expects the mobile market to remain lackluster, but it will continue adding to both its flagship and mass-market lineups. It is expected to unveil the Note 10 next month and announce a new release date for the delayed Galaxy Fold, along with new A series models in the second half of the year.

“The company will promptly respond to the changing business environment, and step up efforts to secure profitability by enhancing efficiency across development, manufacturing and marketing operations,” Samsung said in its earnings release.

It’s not just market demand that’s impacting Samsung’s earnings. Along with other tech companies, Samsung is steeling itself for the long-term impact of a trade dispute between Japan and South Korea. Last month, Japan announced that it is placing export restrictions on some materials used in chips and smartphones. Samsung said it still has stores of those materials, but it is also looking for alternatives, since it is unclear how long the dispute between the two countries may last.



AMD Ryzen 7nm CPUs May Not Hit Maximum Boost Frequency on All Cores



AMD’s third-generation Ryzen processors have been a massive hit for the company by all reports, with excellent performance relative to Intel’s Core CPUs. There have, however, been a few questions around yield, overclocking, and boost frequencies. CPU overclocks on Ryzen are notably low, and some enthusiasts have noticed a limited number of cores on their CPUs hit the targeted boost frequencies.

Tom’s Hardware has done a significant deep-dive into this issue and came away with a number of key findings. In the past, AMD CPUs were capable of hitting their top-rated boost frequencies on any CPU core. Intel chips are designed similarly. With Ryzen 3000, apparently only a single core needs to be capable of hitting its maximum or near-maximum boost frequency. The scheduler updates baked into Windows 10 were said to speed power state transitions (which they do), but they also assign workloads specifically to the fastest cores capable of hitting a given clock.

These findings may explain why all-core overclocking headroom on these new Ryzen processors is so low. On the Ryzen 5 3600X, only one CPU core proved capable of hitting 4.35GHz, for example, with other cores on the same chip boosting 75-100MHz lower. AMD has not released exact specs for which frequencies its cores need to be able to hit to satisfy its own internal metrics for launch, which means we don’t really “know” which frequencies these CPU cores will operate at. This is definitely a change from previous parts, where all cores could be more-or-less assumed to be capable of hitting the same boost frequencies, and it may have implications for overclockers — but it doesn’t really change my opinion on AMD’s 7nm Ryzen CPUs. If anything, I suspect it’s a harbinger of where the industry is headed.
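The scheduling behavior described above amounts to favored-core scheduling, and can be illustrated with a toy model. The per-core boost values here are hypothetical illustration numbers, not AMD's actual binning data:

```python
# Toy favored-core scheduler: steer the hottest threads to the cores
# that bin highest. Frequencies are hypothetical illustration values.
per_core_max_mhz = {0: 4350, 1: 4275, 2: 4250, 3: 4275,
                    4: 4250, 5: 4300, 6: 4250, 7: 4275}

def pick_cores(n_threads: int) -> list:
    """Return the n fastest cores, best-binned first."""
    ranked = sorted(per_core_max_mhz, key=per_core_max_mhz.get, reverse=True)
    return ranked[:n_threads]

print(pick_cores(1))   # the single core binned for full boost
print(pick_cores(3))   # lightly threaded work lands on the best bins
```

In this sketch only core 0 can reach the full 4350MHz, so a boost-frequency-aware scheduler parks single-threaded work there first — exactly the behavior the Windows 10 1903 scheduler updates enable.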

Building Faster Silicon Today Means Working Smarter, Not Harder

One of the topics I’ve covered repeatedly at ExtremeTech is the difficulty of scaling either IPC (instructions-per-clock, a measure of CPU efficiency) or clock speed as process technology continues to shrink. From December 2018 to June 2019, I wrote a number of articles pushing back against various AMD fans who insisted the company would use 7nm to make huge clock leaps above Intel. When we met AMD at E3 2019, company engineers told us point-blank that they initially expected no clock improvements at 7nm whatsoever, and were very pleased to be able to improve clocks modestly in the final design.

One of the major difficulties semiconductor foundries are dealing with at 7nm and below is increased variability. Increased variation in parts means a greater chance of a wide “spread” in which cores are capable of running at specific frequency and voltage settings. AMD adopted Adaptive Voltage and Frequency Scaling (AVFS) back with Carrizo in part because AVFS can control for process variation by more precisely matching CPU internal voltages with the specific requirements of the processor. Working with Microsoft to ensure Windows runs workloads on the highest-clocked CPU core isn’t just a good idea; it’s going to be a necessary method of extracting maximum performance in the future.

Intel’s decision to introduce Turbo Boost (which debuted with Nehalem and was refined in 2011’s Sandy Bridge) was one of the smartest moves the company ever made. Intel’s engineers accurately forecast that it was going to become increasingly difficult to guarantee maximum clocks under all circumstances. There’s no arguing that what AMD is doing here represents a fundamental shift from the company’s approach in years past, but it’s one I strongly believe we’re going to see more companies embrace in the future. Higher silicon variability is going to demand a response from software. The entire reason the industry has shifted towards chiplets is that building entire dies on 7nm is seen as a fool’s errand, given the way cost scales with large die sizes, as the slide below shows.
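The chiplet economics can be sketched with a standard Poisson defect-yield model, in which yield falls exponentially with die area. The defect density below is an illustrative assumption, not any foundry's real figure:

```python
# Poisson defect-yield sketch: why two small chiplets beat one big die.
import math

D0 = 0.2  # defects per cm^2 -- illustrative assumption, not a foundry figure

def yield_fraction(area_cm2: float) -> float:
    """Poisson model: fraction of dies with zero defects."""
    return math.exp(-D0 * area_cm2)

def relative_cost_per_good_die(area_cm2: float) -> float:
    """Silicon cost scales with area; dividing by yield gives cost per good die."""
    return area_cm2 / yield_fraction(area_cm2)

big = relative_cost_per_good_die(4.0)        # one monolithic 400 mm^2 die
small = 2 * relative_cost_per_good_die(2.0)  # two 200 mm^2 chiplets
print(f"monolithic vs 2 chiplets: {big:.2f} vs {small:.2f} "
      f"({big / small:.2f}x more expensive)")
```

Under this toy model, the single large die costs about 1.5x as much per good die as two chiplets of half the area; the higher defect densities typical of a brand-new node make the gap larger still.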

Why move to AVFS? To control for variability. Why move to chiplets? To cut manufacturing costs and improve yields overall. Why change Windows scheduling to be aware of per-core boost frequencies? To ensure end-users receive the full measure of performance they pay for. While it’s true Intel CPUs may be able to hit boost frequencies on any core, that doesn’t mean this state of affairs was objectively better for the end-user. Windows’ typical core-shuffling is not some unalloyed good, a fact Paul Alcorn notes in his article. “Typically we would see more interspersed frequency jumps among cores,” Alcorn writes, “largely due to the Windows scheduler’s irritating and seemingly irrational tendency to allocate threads into different cores on a whim.” Meanwhile, we know the boost frequency Intel CPUs will practically hold still depends directly on how many CPU cores are being loaded. The fact that all CPU cores can reach higher clocks does not necessarily benefit the end-user in any way unless said user is overclocking — and statistically, most computer users don’t.

But because it’s getting harder to eke out frequency boosts and performance improvements, manufacturers are investing in technologies that tap the reservoir of performance in any given CPU solely for their own use. This is why high-end overclocking is slowly dying and has been for at least the past seven years. AMD and Intel are getting better and better at making limited frequency headroom in their products available to end-users without overclocking because overclocking these CPUs in the conventional fashion blows their power curve out so severely. It wouldn’t surprise me to discover AMD went with this method of clocking because it improved performance more at lower power compared with launching lower-clocked chips with a more conventional all-core boost arrangement.

The old rules of process node transitions and silicon designs have changed. That’s the bottom line. I’m confident we’ll see Intel deploying advanced tactics of its own to deal with these concerns in the future because there is zero evidence to suggest these issues are unique to AMD or TSMC. AMD’s adoption of AVFS, the rising use of chiplets across the industry, the lower expected clocks at 7nm that were turned into a small gain thanks to clever engineering — all of these issues point in the same direction. Companies will undoubtedly develop their own particular solutions, but everyone is grappling with the same fundamental set of problems.

Good Communication is Key

AMD, to its credit, did tell users they needed to be running the latest chipset driver and the Windows 10 1903 update to take advantage of the new scheduler. Implied in that guidance was that not doing so would prevent you from seeing the full impact of third-generation Ryzen’s improved performance. I do agree the company should have disclosed this new binning strategy to the technical press at E3, so we could detail it during the actual review.

But does this change my overall evaluation of third-generation Ryzen? No. Not in any fashion. The work THG has done to explore this issue is quite thorough, but based on the reading I’ve done on the evolution of process technology in modern manufacturing, I come down firmly on the side of this being a good thing. It’s the extension of the same trend that led ARM to invent big.LITTLE — namely, the idea that the OS needs to be more tightly coupled to the underlying hardware, with a greater awareness of which CPU cores should be used for which workloads in order to maximize performance and minimize power consumption.

According to AMD, roughly 25 percent of the performance improvements of the past decade have come from better compilers and improved power management. That percentage will likely be even larger 10 years from now. Power consumption at both idle and load is now the largest enemy of improved silicon performance, and variability in silicon process is a major cause of power consumption. Improving performance in the future is going to rely on different tools than the ones we’ve used for the past few decades, and one of the likely consequences of that push is the end of overclocking. Manufacturers can’t afford to leave 10, 20, 30 percent performance margins on the table any longer. Those margins represent a significant percentage of the total improvements they can offer.

Do these findings have implications for the currently limited availability of the Ryzen 9 3900X? We don’t know. Certainly, it’s possible the two are connected and that AMD is having trouble getting yield on the chip. Ultimately, I stand by what I said in our article on AMD CPU availability earlier today: we’ll give the company a little more time to get product into market and revisit the topic in a few weeks. But the CPU’s performance is excellent. Its power consumption, particularly if paired with an X470 motherboard, is excellent. We’re still working on future Ryzen articles and have been working with these CPUs for several weeks. The performance and overall power characteristics are fundamentally strong, and while the THG findings are quite interesting for what they say about AMD’s overall strategy going forward and what I believe is the general increase in variability in semiconductors as a whole, I view them as broadly confirming the direction the industry is moving in. Dealing with intrinsically higher silicon variability will be one of the major challenges of the 2020s.

I hesitate to bring Intel into this conversation at all, because we haven’t even seen the company’s latest iteration of its 10nm process yet, but it’s surely no accident the company’s upcoming mobile processors have sharply reduced maximum Turbo Boosts (4.1GHz for Ice Lake, compared to 4.8GHz for 14nm Whiskey Lake). Some of that may be explained by the wider graphics core that’s built into Gen 11, but Intel forecast from the beginning that 14nm++ would be a better node for high-frequency chips than its initial 10nm process. That doesn’t mean Intel has adopted AMD’s new clocking method, but it does show the company is grappling with some of these same issues around frequency, variation, and power consumption, and working to find its own ideal balance.

The challenges are getting tougher. There are no more easy wins. The interplay between software and hardware is going to change in the future because the alternative — simply giving up and going home — isn’t a tenable one. That may have trickle-down effects that impact other aspects of computing, including overclockers and enthusiasts. But it doesn’t change the fact, in this reviewer’s opinion, that the Ryzen 3000 family is an excellent set of CPUs.


Apple Could Switch to ARM, But Replacing Xeon Is No Simple Endeavor



The question of when Apple will switch to building its own custom ARM CPU cores for its software ecosystem rather than using Intel and x86 comes up on a regular basis. On ET, we first covered the topic in 2011, and I’ve hit it several times in the intervening years. My answer has typically been some flavor of “theoretically yes, but practically (and in terms of the near future), no.”

A recent AppleInsider article does a good job of rounding up the reasons why Apple really might be taking this step soon. We’ve previously heard rumors that the company could launch such a product in 2020, and while rumors are not the same thing as a definite launch date, the piece is solid. It makes a reasonable case for why Apple may indeed take this step and references various real-world events, including Intel’s difficulties moving past 14nm, Apple’s design efforts around GPUs and CPUs, the increasing complexity and capability of its SoCs, and the fact that Apple has built its own secondary chips, like the T2 controller.

All of these points are true, and it’s why I think the 2020 rumor deserves to be taken more seriously than the dates and ideas that we used to hear. But there is still a major piece of this puzzle that doesn’t get talked about often enough. Apple can introduce an ARM core running full macOS, but if it wants to replace x86 in its highest-end iMac Pro and Mac Pro products, it’s going to have to take on some significant design challenges that it hasn’t faced before.

Intel’s Skylake mesh interconnect. This is anything but easy to build and design.

Apple has built CPUs, yes. But it’s never tried to build, say, a 28-32-core ARM processor in a multi-socket system. To the best of my knowledge, Apple has never built a server-class chipset or designed a CPU socket for its own product families. During E3, I attended an AMD session on the evolution of its AM4 socket, and how carefully AMD had to work in order to design a 7nm product with chiplets to fit into a socket that initially deployed four identical CPU cores in a 28nm process node. Even if Apple intends to create a platform without upgradable CPUs, it will need to design its own motherboards. The socket design decisions that it makes will impact how quickly it can iterate the platform and how much work has to be done at a later time. Achievable? Absolutely. But not something one does overnight.

The routing on an AMD Ryzen 3000 PCB. That’s the connection between one chiplet and its I/O die. This isn’t easy to design, either.

Using chiplets makes some aspects of CPU design easier, especially on leading-edge nodes, but it doesn’t simplify everything. Chiplets require interconnects, like AMD’s Infinity Fabric. Apple would need to design its own solution (there are no formal chiplet interconnect standards yet). There’s a lot of custom IP work to be done here if Apple wants to bring a part to market to replace what Intel offers in the Mac Pro.

One simple solution is for Apple to launch new ARM chips in laptops but keep desktop systems on Intel for the time being. In theory, this works fine, provided the ecosystem is ready for it and Apple can deliver appropriate binaries for applications. Software application support and user expectations could be tricky to manage here, but it’s doable. The problems for Apple, in this case, are making sure that its consumers understand any compatibility issues that might exist and that the new ARM-based products are clearly differentiated from the old x86 ones.

Is There a Reason for Apple Not to Build Its Own Mac CPUs?

There is, in fact, a reason for Apple not to build its own CPU cores for Mac. There is a non-trivial amount of work that must be done to launch a laptop/desktop processor line. Doing all of the work of developing interconnects, chiplets, chipsets, and motherboards from the ground up is more difficult and expensive than working with someone else’s pre-defined product standard and manufacturing. There’s an awful lot of work that Intel does on Core that Apple doesn’t have to do.

The question of whether it makes sense for Apple to move away from Intel CPUs is therefore partially predicated on what kind of money Apple thinks it can make as a result of doing so. Obviously capturing the value of the microprocessor can sweeten the cost structure, but capturing the value also means capturing the cost. When Apple was a non-x86 shop, its market share was significantly smaller than it is today, and the company gained some market share immediately after switching to x86. It is impossible to tell if it gained that share because its software compatibility was now much improved or because many of its systems, especially laptops, were now far more competitive with their Windows counterparts.

Apple has to consider that it will lose at least some customers if it moves away from x86 compatibility again, either because of software compatibility or because its new chips may not offer a performance improvement in specific workloads relative to Intel. The most valuable CPUs — the ones powering the Mac Pro — are also the most expensive to design and build. If Apple doesn’t think it can command the price premiums that Xeon does, it might hold off on introducing CPUs in these segments until it believes it can. Unlike 2005, when IBM couldn’t produce a G5 that fit into a laptop, Apple isn’t being squeezed in any particular market segment today.

I think Apple’s CPUs have evolved enough to make a jump towards ARM and away from x86 plausible in a way it wasn’t back in 2014, but there are still some significant questions to be answered about where Apple would sell the part and whether it would attempt to replace x86 in all products or in specific mobile SKUs. And, honestly, I think there’s a version of this story where Apple ultimately continues to work with Intel or AMD long into the future, having decided to deploy its own ARM IP strategically across the Mac line, or in secondary positions similar to how the T2 chip is used.
