Life, the Universe, and Math: 42 Proven to be the Sum of 3 Cubes



The problem of 42 — at least as it relates to whether the number can be expressed as the sum of three cubes — has finally been solved. Whether every eligible number under 100 can be written this way has been a long-standing puzzle in mathematics. Now, two mathematicians, Andrew Sutherland of MIT and Andrew Booker of Bristol, have jointly shown that 42 is indeed the sum of three cubes.

For years, mathematicians have worked to find integer solutions to x³ + y³ + z³ = k for each value of k from 1 to 100. By 2016, solutions had been found for every eligible value in that range except two holdouts: 33 and 42. The formal conjecture, as expressed by Roger Heath-Brown in 1992, is that every k not congruent to 4 or 5 modulo 9 has infinitely many representations as the sum of three cubes; values that are congruent to 4 or 5 modulo 9 can have no representation at all. With 42 now settled, solutions are known for every eligible value of k up to 113.
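
The modulo 9 restriction is easy to verify for yourself: every integer cube is congruent to -1, 0, or 1 mod 9, so no sum of three cubes can land on 4 or 5. A quick Python check (our illustration, not from the researchers):

```python
# Every integer cube is congruent to -1, 0, or 1 mod 9, so a sum of
# three cubes can only reach residues -3..3 mod 9 -- never 4 or 5.
cube_residues = {(n ** 3) % 9 for n in range(9)}
print(sorted(cube_residues))  # [0, 1, 8]  (8 is -1 mod 9)

reachable = {(a + b + c) % 9
             for a in cube_residues
             for b in cube_residues
             for c in cube_residues}
print(sorted(reachable))  # [0, 1, 2, 3, 6, 7, 8] -- 4 and 5 are absent
```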

Earlier this year, Booker was inspired by a Numberphile video about the problem to begin working on a solution.

Booker came up with a new, more efficient algorithm to search for solutions for the two remaining values. The solution for 33 took about three weeks to find once the problem was run through a supercomputer at the UK’s Advanced Computing Research Centre. 42 proved a tougher nut to crack, so Booker teamed up with Andrew Sutherland, an expert in massively parallel computation as well as a mathematician. The two enlisted the help of Charity Engine, a distributed computing project that lets PCs earn money for charities by donating computing time.
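
To see why a cleverer algorithm was needed at all, consider the naive approach. The toy brute-force sketch below is emphatically NOT Booker's algorithm (which relies on far more sophisticated number theory); it just shows why exhaustive search is hopeless when the answer has 17-digit components:

```python
# Toy brute-force search for x^3 + y^3 + z^3 = k. Using a hash table
# of cubes makes the work quadratic in the bound rather than cubic,
# but the solutions for 33 and 42 involve 17-digit numbers -- far
# beyond any range this approach could ever cover.
def three_cubes(k, bound):
    cubes = {n ** 3: n for n in range(-bound, bound + 1)}
    for x in range(-bound, bound + 1):
        for y in range(x, bound + 1):
            remainder = k - x ** 3 - y ** 3
            if remainder in cubes:
                return x, y, cubes[remainder]
    return None

print(three_cubes(29, 100))  # one representation of 29, e.g. (1, 1, 3)
```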

Over a million hours of computation later, the team had its solution. In the equation x³ + y³ + z³ = k, let x = -80538738812075974, y = 80435758145817515, and z = 12602123297335631. Plug it all in, and you get (-80538738812075974)³ + 80435758145817515³ + 12602123297335631³ = 42. And with that, we’ve found solutions for all eligible values of k up to 100 (technically, up to 113).
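
Finding the numbers took a million core-hours; checking them takes milliseconds. Since Python's built-in integers have arbitrary precision, anyone can verify the result with a few lines:

```python
# Verify the Booker-Sutherland solution for k = 42. Python's native
# big integers handle the 51-digit intermediate cubes with no
# special libraries.
x = -80538738812075974
y = 80435758145817515
z = 12602123297335631
assert x ** 3 + y ** 3 + z ** 3 == 42
print(x ** 3 + y ** 3 + z ** 3)  # 42
```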

“I feel relieved,” Booker said. “In this game, it’s impossible to be sure that you’ll find something. It’s a bit like trying to predict earthquakes, in that we have only rough probabilities to go by. So, we might find what we’re looking for with a few months of searching, or it might be that the solution isn’t found for another century.”

It may not prove that 42 is the Answer to the Ultimate Question of Life, the Universe, and Everything, but Douglas Adams clearly made the case for that solution in the mathematical and philosophical textbook, The Hitchhiker’s Guide to the Galaxy. Efforts to understand the Ultimate Question remain mired in disgruntled physics equations regarding the intrinsic difficulty of building planet-sized supercomputers with molten iron for a central core.

Top image credit: Martinultima/Wikipedia 


IBM Open-Sources Power ISA, Shares CPU, OpenCAPI Reference Designs



IBM has taken new steps to open the Power architecture further and expand access to its capabilities. Back in 2013, IBM launched the OpenPower Foundation to allow would-be customers to license IBM designs and collaborate with each other. Now, the company has open-sourced the entire Power ISA and contributed a softcore design suitable for running on an FPGA. The OpenPower Foundation will become part of the Linux Foundation, which will oversee the project going forward.

IBM has been turning toward open source as a way to reinvigorate its hardware business and stoke interest in its own ecosystem. In addition to open-sourcing the Power ISA, the company has also contributed designs for its Open Coherent Accelerator Processor Interface (OpenCAPI) and Open Memory Interface (OMI). These are the interface protocols that attach the CPU to the rest of the system, and they’re critical to making the total project work. OpenCAPI and OMI are both architecture-agnostic and could theoretically be adapted for use in both x86 and Power systems, should vendors build compatible solutions.

Ken King, general manager of OpenPower at IBM, told The Next Platform that the plan to open-source the ISA had been a long time coming:

We started OpenPower six years ago because the industry was seeing the decline of Moore’s Law, and we were seeing the need for more powerful systems to support HPC, artificial intelligence, and data analytics. We needed to find other ways to drive system performance, and with limitations on the processor, the ability to integrate and innovate up and down the stack was becoming more critical. This led to things like NVLink with Nvidia, a close relationship with Mellanox on interconnects, and OpenCAPI for other devices, and we have seen some progress here.

But we are also seeing a shift in the industry, with companies moving to more open hardware. IBM opening up Power to the point where we would license the CPU RTL to others so they could design their own processors was limited in its effect because there were not that many people who wanted to spend many hundreds of millions of dollars – not for license fees, but for full development – to create their own high-end CPU. We did make some progress in opening up our reference designs, and there are over 20 vendors who are now making Power-based systems.

We are seeing interesting developments with the nascent RISC-V architecture, and hyperscalers are hiring their own chip designers and building their own CPUs and interconnects. They are getting into the hardware space, even if they are not going to be hardware vendors, to drive that performance.

OpenCAPI roadmap.

Under the Linux Foundation, IBM and other members will vote on the future of the standard, including feature-set expansions and new capabilities. IBM can continue to make changes to its own ISA for its own purposes, but all other modifications require a membership vote, and members are required to maintain compatibility with the base ISA; permission to make a non-compliant change requires a unanimous vote. King has openly stated that he hopes Intel will explore the benefits of OpenCAPI now that the standard is fully open, potentially leading to a convergence of support between OpenCAPI and CXL, Intel’s competing interconnect standard.

IBM is hoping that completely open-sourcing the ISA will spur adoption and development, and there’s some reason for optimism. Another open-source ISA, RISC-V, has been generating plenty of headlines and interest from silicon vendors, but current RISC-V CPUs are all low-end embedded parts. Power could, in theory, be an easier lift if its toolchains prove more mature and better-featured.

“The opening of the Power ISA, an architecture with a long and distinguished history, will help the open hardware movement continue to gain momentum,” Mateo Valero, Director of Barcelona Supercomputing Center, told InsideHPC. “BSC, which has collaborated with IBM for more than two decades, is excited that IBM’s announcements today provide additional options to initiatives pursuing innovative new processor and accelerator development with freedom of action.”

Though it hasn’t spent much time in the limelight of late, Power was once a major challenger to Intel’s x86 in the server world. In recent years, the OpenPower initiative has enjoyed support from a number of companies, with over 250 members as of 2016. Power still holds market share in the HPC space — the Summit and Sierra supercomputers are #1 and #2 in the world, the third-ranked machine is based on a custom Sunway architecture, and the highest-ranked x86 system is China’s Tianhe-2A, based on the Xeon E5-2692 v2. There’s nothing stopping a company from taking Power into new markets (at least in theory), though it would be quite difficult to convince smartphone vendors to rally around Power as opposed to ARM. Still, open-sourcing the architecture can’t hurt its uptake.

We’d expect to see the most energy and excitement around Power in servers and HPC because that’s where the core of IBM’s efforts have historically been focused. But we wouldn’t be surprised to see the ISA popping up in other places, either.

Feature Image: IBM Power8 microprocessor 


Scientists Use ‘UniverseMachine’ to Simulate 8 Million Universes



Scientists saw the first hints of the effect dark matter has on the universe decades ago, but there’s still a great deal we don’t know about it. It’s difficult to determine the nature of dark matter interactions because we only have the one universe to observe. That’s why researchers from the University of Arizona created 8 million universes with varying conditions inside a supercomputer.

Our current understanding of the role played by dark matter is limited, but most researchers agree on the basics. After the Big Bang, the nebulous material we know as dark matter began clumping together into clouds known as dark matter haloes. Since dark matter makes up most of the matter in the universe, these haloes pulled in hydrogen atoms with the force of gravity, causing them to coalesce into the first stars. Many scientists believe that dark matter continues to form the backbone of galaxies to this day. 

In an effort to learn more about the mechanisms at work, astronomer Peter Behroozi from the University of Arizona used the school’s supercomputer to play god and create millions of simulated universes. Each of the 8 million universes had a unique set of physical constants to help researchers understand how dark matter affects regular matter over time. By comparing these results to the real universe, we can learn which rules match up with reality. The team calls this the “UniverseMachine.”

The University of Arizona’s Ocelote supercomputer has 336 nodes, each with two 28-core CPUs and 192GB of working memory. There’s also a separate large memory node with 2 terabytes of RAM for handling especially large data sets. Behroozi kept 2,000 Ocelote CPU cores busy non-stop for over three weeks to simulate all those universes. 
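
For a rough sense of the compute budget, a back-of-the-envelope tally (the assumption that all 2,000 cores ran continuously for exactly three weeks is ours; the article says "over three weeks"):

```python
# Back-of-the-envelope: 2,000 cores running non-stop for ~3 weeks.
cores = 2_000
hours = 3 * 7 * 24          # three weeks, assuming continuous use
print(f"{cores * hours:,} core-hours")  # 1,008,000 core-hours
```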

The University of Arizona Ocelote supercomputer.

We lack the technology to simulate every aspect of a whole universe — simulating a single galaxy accurately would take more computing power than Earth could muster in a hundred years. So, Behroozi and his colleagues focused on two of the most important properties in astronomy: the mass of stars and the rate at which new stars form. 

Little by little, the researchers homed in on properties that made their simulations more like the real thing. The findings could force a rethink of how dark matter affects star formation. According to Behroozi, denser dark matter in the early universe didn’t suppress star formation rates as expected. In fact, galaxies of a given size were more likely to form stars at a high rate for much longer.

Behroozi is excited about what this model could teach us in the future. Soon, the team plans to expand the variables simulated in the UniverseMachine, including how often stars die in supernovae and how dark matter affects galaxy shapes.


$600M Cray supercomputer will tower above the rest — to build better nukes


Cray has been commissioned by Lawrence Livermore National Laboratory to create a supercomputer head and shoulders above all the rest, with the contract valued at some $600 million. Disappointingly, El Capitan, as the system will be called, will be more or less solely dedicated to redesigning our nuclear armament.

El Capitan will be the third “exascale” computer being built by Cray for the U.S. government, the other two being Aurora for Argonne National Lab and Frontier for Oak Ridge. These computers are built on a whole new architecture called Shasta, in which Cray intends to combine the speed and scale of high-performance computing with the easy administration of cloud-based enterprise tools.

Due for delivery in 2022, El Capitan will operate on the order of 1.5 exaflops. Flops, or floating point operations per second, are the measure most often used to track supercomputer performance, and exa denotes a quintillion: 1.5 quintillion operations every second.

Right now the top dog is at Oak Ridge: an IBM-built system called Summit. At about 150 petaflops, it’s about 1/10th the power of El Capitan — of course, the former is operational and the latter is theoretical right now, but you get the idea.
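
A quick sanity check on that ratio (both figures are approximate; Summit's number is its measured output, El Capitan's is a projection):

```python
# Peta = 10**15, exa = 10**18: comparing Summit's measured output
# to El Capitan's projected performance.
summit = 150e15        # ~150 petaflops
el_capitan = 1.5e18    # 1.5 exaflops
print(el_capitan / summit)  # 10.0 -- about an order of magnitude
```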

One wonders exactly what all this computing power is needed for. There are in fact countless domains of science that could be advanced by access to a system like El Capitan — atmospheric and geological processes, for instance, could be simulated in 3D at a larger scale and higher fidelity than ever before.

So it was a bit disheartening to learn that El Capitan will, once fully operational, be dedicated almost solely to classified nuclear weaponry design.

To be clear, that doesn’t just mean bigger and more lethal bombs. The contract is being carried out with the collaboration of the National Nuclear Security Administration, which of course oversees the nuclear stockpile alongside the Department of Energy and military. It’s a big operation, as you might expect.

We have an aging nuclear weapons stockpile that was essentially designed and engineered over a period of decades ending in the ’90s. We may not need to build new ones, but we do actually have to keep our old ones in good shape, not just in case of war but to prevent them from failing in their advancing age and decrepitude.

The components of Cray’s Shasta systems

“We like to say that while the stockpile was designed in two dimensions, it’s actually aging in three,” said LLNL director Bill Goldstein in a teleconference call on Monday. “We’re currently redesigning both warhead and delivery system. This is the first time we’ve been doing this for about 30 years now. This requires us to be able to simulate the interaction between the physics of the nuclear system and the engineering features of the delivery system. These are real engineering interactions and are truly 3D. This is an example of a new requirement that we have to meet, a new problem that we have to solve, and we simply can’t rely on two dimensional simulations to get at. And El Capitan is being delivered just in time to address this problem.”

Although Goldstein, citing the classified nature of the work, declined in response to my question to provide a concrete example of a 3D versus 2D research question or result, it’s clear that his remarks are meant to be taken both literally and figuratively. The factors affecting a nuclear weapons system were, so to speak, much flatter in the ’90s, when we lacked the computing resources to run the complex physics simulations that might now inform their design. Both conceptually and spatially, the design process has expanded.

That said, let’s be clear: “warhead and delivery systems” means nukes, and that is what this $600 million supercomputer will be dedicated to.

There’s a silver lining there: Before being air-gapped and entering into its classified operations, El Capitan will have a “shakeout period” during which others will have access to it. So while for most of its life it will be hard at work on weapons systems, during its childhood it will be able to experience a wider breadth of scientific problems.

The exact period of time and who will have access to it is to be determined (this is still three years out), but it’s not an afterthought to quiet jealous researchers. The team needs to get used to the tools and work with Cray to refine the system before it moves on to the top-secret stuff. And opening it up to a variety of research problems and methods is a great way to do it, while also providing a public good.

Yet Goldstein referred to the 3D simulations of nuclear weapons physics as the “killer app” of the new computer system. Perhaps not the phrase I would have chosen. But it’s hard to deny the importance of making sure the nuclear stockpile is functional and not leaking or falling apart — I just wish the most powerful computer ever planned had a bit more noble purpose.

