Facebook is making its own deepfakes and offering prizes for detecting them


Image and video manipulation powered by deep learning, or so-called “deepfakes,” represents a strange and horrifying facet of a promising new field. If we’re going to crack down on these creepy creations, we’ll need to fight fire with fire; Facebook, Microsoft, and many others are banding together to help make machine learning capable of detecting deepfakes — and they want you to help.

Though the phenomenon is still new, we are nevertheless in an arms race where the methods of detection vie with the methods of creation. Ever more convincing fakes appear regularly, and while they are frequently benign, the possibility of having your face flawlessly grafted into a compromising position is very much there — and many a celebrity has already had it done to them.

Facebook, as part of a coalition with Microsoft, the Partnership for AI, and several universities including Oxford, Berkeley, and MIT, is working to empower the side of good with better detection techniques.

“The most interesting advances in AI have happened when there’s a clear benchmark on a dataset to write papers against,” said Facebook CTO Mike Schroepfer in a media call yesterday. The dataset for object recognition might be millions of images of ordinary objects, while the dataset for voice transcription would be hours of different kinds of speech. But there’s no such set for deepfakes.

We talked about this challenge at our Robotics and AI event earlier this year, in what I thought was a very interesting discussion.

Fortunately, Facebook is planning to dedicate around $10 million in resources to making this Deepfake Detection Challenge happen.

“Creation of these datasets can be challenging, because you want to make sure that everyone participating in it is clear and gives consent so they aren’t surprised by the usage of it,” Schroepfer continued. And since most deepfakes are made without any consent whatsoever, they’re not really permissible for usage in an academic context.

So Facebook and its partners are making the deepfake content out of whole cloth, he said. “You want a dataset of source video, and then a dataset of personalities you can map onto that. Then we’re spending engineering time implementing the latest most advanced deepfake techniques to generate altered videos as part of the dataset.”

And while you’re entirely justified in wondering, no, they aren’t using Facebook data to do this. They’ve got paid actors.

This dataset will be provided to interested parties, who will be able to build solutions and test them, putting the results on a leaderboard. At some point there will be cash prizes given out, though the details are a ways off. With luck this will spur serious competition among academics and researchers.

“We need the full involvement of the research community in an open environment to develop methods and systems that can detect and mitigate the ill-effects of manipulated multimedia,” said the University of Maryland’s Rama Chellappa in a news release. “By making available a large corpus of genuine and manipulated media, the proposed challenge will excite and enable the research community to collectively address this looming crisis.”

Initial tests of the dataset are planned for the International Conference on Computer Vision in October, with the full launch happening at NeurIPS in December.



Cerebras Systems Unveils 1.2 Trillion Transistor Wafer-Scale Processor for AI


This site may earn affiliate commissions from the links on this page. Terms of use.

Credit: Getty Images

Modern CPU transistor counts are enormous — AMD announced earlier this month that a full implementation of its 7nm Epyc “Rome” CPU weighs in at 32 billion transistors. To this, Cerebras Systems says: “Hold my beer.” The AI-focused company has designed what it calls a Wafer Scale Engine. The WSE is a square, approximately eight inches by nine inches, and contains roughly 1.2 trillion transistors.

I’m genuinely surprised to see a company bringing a wafer-scale product to market this quickly. The idea of wafer-scale processing has attracted some attention recently as a potential solution to performance scaling difficulties. In the study we discussed earlier this year, researchers evaluated the idea of building an enormous GPU across most or all of a 100mm wafer. They found that the technique could produce viable, high-performance processors and that it could also scale effectively to larger node sizes. The Cerebras WSE definitely qualifies as large — its total surface area is much larger than the hypothetical designs we considered earlier this year. It’s not a full-sized 300mm wafer, but it’s got a higher surface area than a 200mm wafer does.

The largest GPU to date, just for comparison, measures 815 square millimeters and packs 21.1B transistors. So the Cerebras WSE is just a bit bigger, as these things go. Some companies send out pictures of their chips held up next to a diminutive common object, like a quarter. Cerebras sent out a photo of its die next to a keyboard.


Not Pictured: PCIe x1600 slot.

As you can see, it compares fairly well.

The Cerebras WSE contains 400,000 sparse linear algebra cores, 18GB of total on-die memory, 9PB/sec worth of memory bandwidth across the chip, and separate fabric bandwidth of up to 100Pbit/sec. The entire chip is built on TSMC’s 16nm FinFET process. Because the chip is built from most of a single wafer, the company has implemented methods of routing around bad cores on-die, and it can keep its arrays connected even if a section of the wafer contains defective cores. The company says it has redundant cores implemented on-die, though it hasn’t discussed specifics yet. Details on the design are being presented at Hot Chips this week.
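Cerebras hasn’t published the specifics of its redundancy scheme, but the general idea — map a logical core array onto the physical array while skipping cores that failed wafer-level test, with extra good cores held as spares — can be sketched as follows. This is a simplified illustration, not Cerebras’ actual mechanism; the function name and the row-major assignment policy are hypothetical.

```python
# Hypothetical sketch of redundancy-based yield recovery: assign logical
# core IDs to good physical (row, col) positions, routing around known-bad
# cores. Real wafer-scale designs also need fabric rerouting, which this
# simple remapping ignores.

def build_core_map(physical_rows, physical_cols, bad_cores, logical_count):
    """Return a dict of logical_id -> (row, col), or None if there are
    not enough good cores left (spares exhausted)."""
    mapping = {}
    logical_id = 0
    for r in range(physical_rows):
        for c in range(physical_cols):
            if (r, c) in bad_cores:
                continue  # route around this defective core
            if logical_id >= logical_count:
                return mapping  # remaining good cores serve as spares
            mapping[logical_id] = (r, c)
            logical_id += 1
    return mapping if logical_id == logical_count else None

# Example: a 4x4 tile with two defective cores still yields 12 logical cores.
core_map = build_core_map(4, 4, {(0, 1), (2, 3)}, logical_count=12)
assert core_map is not None and len(core_map) == 12
assert (0, 1) not in core_map.values()
```

The point of the sketch is that yield on a wafer-scale part doesn’t have to be all-or-nothing: as long as the spare budget exceeds the defect count, the full logical configuration survives.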

The WSE — “CPU” simply doesn’t seem sufficient — is cooled using a massive cold plate sitting above the silicon, with vertically mounted water pipes used for direct cooling. Because there’s no traditional package large enough to fit the chip, Cerebras has designed its own. PCWorld describes it as “combining a PCB, the wafer, a custom connector linking the two, and the cold plate.” Details on the chip, like its raw performance and power consumption, are not yet available.

A fully functional wafer-scale processor, commercialized at scale, would be an exciting test of whether this technological approach has relevance to the wider market. While we’re never going to see consumer components sold this way, there’s been interest in using wafer-scale processing to improve performance and power consumption in a range of markets. If consumers continue to move workloads to the cloud, especially high-performance workloads like gaming, it’s not crazy to think we might one day see GPU manufacturers taking advantage of this idea — building arrays of parts that no individual could ever afford in order to power cloud gaming systems.


Intel Announces Cooper Lake Will Be Socketed, Compatible With Future Ice Lake CPUs



Intel may have launched Cascade Lake relatively recently, but there’s another 14nm server refresh already on the horizon. Intel lifted the lid on Cooper Lake today, giving some new details on how the CPU fits into its product lineup with Ice Lake 10nm server chips already supposedly queuing up for 2020 deployment.

Cooper Lake’s features include support for the Google-developed bfloat16 format. It will also support up to 56 CPU cores in a socketed format, unlike Cascade Lake-AP, which scales up to 56 cores but only in a soldered, BGA configuration. The new socket will reportedly be known as LGA4189. There are reports that these chips could offer up to 16 memory channels (because Cascade Lake-AP and Cooper Lake both use multiple dies on the same chip, the implication is that Intel may launch up to 16 memory channels per socket with the dual-die version).


The bfloat16 support is a major addition to Intel’s AI efforts. IEEE 754 half-precision floating point (standardized in the 2008 revision) devotes 5 bits to the exponent and 10 to the significand, prioritizing precision over range. bfloat16 changes that balance: it keeps float32’s 8 exponent bits but only 7 significand bits, allowing a much greater range of values at lower precision. This tradeoff is particularly valuable for AI and deep learning calculations, and supporting it is a major step on Intel’s path to improving their performance on CPUs. Intel has published a whitepaper on bfloat16 if you’re looking for more information on the topic. Google claims that using bfloat16 instead of conventional half-precision floating point can yield significant performance advantages. The company writes: “Some operations are memory-bandwidth-bound, which means the memory bandwidth determines the time spent in such operations. Storing inputs and outputs of memory-bandwidth-bound operations in the bfloat16 format reduces the amount of data that must be transferred, thus improving the speed of the operations.”
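The range-versus-precision tradeoff is easy to demonstrate, because a bfloat16 value is simply the top 16 bits of a float32. The sketch below uses plain truncation; real hardware typically rounds to nearest rather than truncating, but truncation keeps the demo simple.

```python
import struct

def float32_to_bfloat16_bits(x):
    """Truncate an IEEE 754 float32 to bfloat16 by keeping the top 16 bits.

    bfloat16 retains float32's 8 exponent bits but only 7 mantissa bits,
    so it covers the same range (~1e38) at much lower precision."""
    bits32 = struct.unpack(">I", struct.pack(">f", x))[0]
    return bits32 >> 16

def bfloat16_bits_to_float32(bits16):
    """Re-expand bfloat16 bits to float32 by zero-filling the low mantissa."""
    return struct.unpack(">f", struct.pack(">I", bits16 << 16))[0]

# Range: 1e38 survives the round trip (it would overflow IEEE half
# precision, whose maximum finite value is 65504).
big = bfloat16_bits_to_float32(float32_to_bfloat16_bits(1e38))
assert 9e37 < big <= 1e38

# Precision: powers of two are exact, but only ~2-3 decimal digits survive
# for arbitrary values.
assert bfloat16_bits_to_float32(float32_to_bfloat16_bits(1.0)) == 1.0
approx = bfloat16_bits_to_float32(float32_to_bfloat16_bits(3.14159))
assert abs(approx - 3.14159) > 1e-4  # noticeably coarser than float32
```

This is also why bfloat16 is attractive for hardware designers: a float32 multiplier can be reused for bfloat16 almost for free, since the exponent logic is identical.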

The other advantage of Cooper Lake is that the CPU will reportedly share a socket with Ice Lake servers coming in 2020. One major theorized distinction between the two families is that Ice Lake servers on 10nm may not support bfloat16, while 14nm Cooper Lake servers will. This could be the result of increased differentiation in Intel’s product lines, though it’s also possible that it reflects 10nm’s troubled development.

Bringing 56 cores to market in a socketed form factor indicates Intel expects Cooper Lake to reach more customers than Cascade Lake / Cascade Lake-AP targeted. It also raises questions about what kind of Ice Lake servers Intel will bring to market, and whether we’ll see 56-core versions of those chips as well. To date, all of Intel’s messaging around 10nm Ice Lake has focused on servers or mobile. This may mirror the strategy Intel used for Broadwell, where desktop versions of the CPU were few and far between, and the mobile and server parts dominated that family — but Intel also said later that skipping a Broadwell desktop release was a mistake and that the company had goofed by passing over the market. Whether that means Intel is keeping an Ice Lake desktop launch under its hat, or whether the company has decided that skipping desktop again makes sense this time around, is still unclear.

Cooper Lake’s focus on AI processing implies that it isn’t necessarily intended to go toe-to-toe with AMD’s upcoming 7nm Epyc. AMD hasn’t said much about AI or machine learning workloads on its processors, and while its 7nm chips add support for 256-bit AVX2 operations, we haven’t heard anything from the CPU division at the company to imply a specific focus on the AI market. AMD’s efforts in this space are still GPU-based, and while its CPUs will certainly run AI code, it doesn’t seem to be gunning for the market the way Intel has. Between adding new support for AI to existing Xeons, its Movidius and Nervana products, projects like Loihi, and plans for the data center market with Xe, Intel is trying to build a market for itself to protect its HPC and high-end server business — and to tackle Nvidia’s own current dominance of the space.


Leak Shows AMD Epyc 7742 Slugging it Out With Intel Xeon Platinum 8280



AMD has kept details about its upcoming Epyc product family remarkably close to its chest. A recent leak (now deleted) at the publicly available Open Benchmarking database shows a close contest between AMD’s upcoming 7nm Epyc CPUs and Intel’s equivalent Xeon products. Intel CEO Bob Swan has referred to AMD as offering increased competition in the back half of 2019, particularly in the data center, so these figures aren’t exactly surprising — unless, of course, you remember the era just a few years ago when AMD’s market share in servers was basically zero.

According to the text of the now-deleted leak (picked up by THG before it went down), the AMD Epyc 7742 is a 64-core CPU with 128 threads, 256MB of L3 cache, a TDP of 225W, and a base / boost clock of 2.25GHz and 3.4GHz, respectively. The already-launched Epyc 7601 is a 32C/64T, 180W TDP CPU, with 64MB of L3 and a nearly-identical 2.2GHz base / 3.4GHz boost clock. The Xeon Platinum 8280 is 28C/56T, 2.7GHz base, 4GHz boost, and a 205W TDP, while the Xeon Gold 6138 (included for reference as well) is 20C/40T, 2GHz / 3.7GHz, and a 125W TDP.

If these rumors are accurate, AMD has managed to double the core count and very slightly increase clocks within a 1.25x larger TDP envelope. I am not sure what the “RDY1001C” at the bottom of the results refers to, though this configuration is consistently the fastest of those listed. Googling the term turned up no results.

There are more tests at THG than we’ve reproduced here; check their article for full results. And, as always, treat all results with a big ol’ bucket of caution. These are leaked results. Even if accurate, they may reflect engineering samples that are not representative of final performance.

SVT is a video encoder that’s heavily optimized for Intel CPUs, but optimizations for Intel chips often work well for AMD CPUs as well, and we certainly see that here. None of the encodes seem to scale particularly well when adding more cores, so we’re not going to try to make sense of the dualie figures. A single 7742 is significantly faster than the Xeon Platinum 8280 and the 7742 is more than twice as fast as the 7601.

In HEVC, the performance figures change. Here, Intel and AMD are at parity overall, but the 7742 is a huge uplift over and above the Epyc 7601.

POV-Ray 3.7 does scale with increased thread counts, but the gain from one CPU to two is much smaller for the 7742 than for the 7601. AMD only picks up about 24 percent more performance from adding another 64 cores, compared with 42 percent scaling for the Xeon Platinum 8280. This difference in scaling means that a pair of Xeon 8280s nearly matches a pair of Epyc 7742s, even though a single Epyc 7742 is significantly faster than a single Xeon Platinum 8280.
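Those scaling percentages translate directly into parallel efficiency, which makes the comparison easier to see. The helper below uses illustrative scores normalized to 100, not the raw leaked numbers.

```python
def multi_socket_speedup(single_socket_score, dual_socket_score):
    """Return (speedup, parallel_efficiency) when going from 1S to 2S.

    Efficiency of 1.0 would mean perfect scaling (a full 2x); the POV-Ray
    leak shows both vendors well under that at these core counts."""
    speedup = dual_socket_score / single_socket_score
    return speedup, speedup / 2.0

# AMD: +24% from the second 64-core CPU -> 1.24x speedup, 62% efficiency.
amd_speedup, amd_eff = multi_socket_speedup(100.0, 124.0)

# Intel: +42% from the second 28-core CPU -> 1.42x speedup, 71% efficiency.
xeon_speedup, xeon_eff = multi_socket_speedup(100.0, 142.0)

assert round(amd_eff, 2) == 0.62 and round(xeon_eff, 2) == 0.71
```

The arithmetic shows why the dual-socket results converge: Intel’s better per-socket scaling partially offsets AMD’s large single-socket lead.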

Blender, and rendering more generally, are tests at which AMD CPUs typically excel. AMD decisively wins this test, though interestingly, we also see signs of significantly improved scaling for the Intel CPUs. This may simply reflect the fact that the Intel CPUs have far fewer cores: the Xeon Platinum 8280 is only a 28-core chip being compared against a 64-core chip. That’s a fairly massive advantage for AMD. Of course, there’s also the question of price and positioning — Intel has typically priced its Xeons far above AMD’s Epyc CPUs, and we tend to prioritize comparing on price above other factors.

Readers should, however, be aware that we may be seeing scaling issues on the AMD CPUs because of the sheer number of cores — 128C/256T, while the Xeon Platinum CPUs are only fielding 56 cores in a 2S configuration. The applications themselves may not scale well at these kinds of thread counts.

If these figures are accurate, they suggest AMD’s 7nm Epyc will be a significant challenge for Intel across a wider range of markets — which is pretty much exactly what we expected based on third-generation Ryzen and AMD’s previous statements about Epyc 2. Factor in Bob Swan’s acknowledgment of an increased competitive market, and we have a scenario teed up in which Intel will cut its Xeon prices, either by directly trimming them or when it launches Cooper Lake (currently expected in the first half of 2020). Intel’s CPU prices have historically run much higher than AMD’s, but it’s difficult to know exactly how much higher, because the company’s list prices (the best indicator we have to go on) don’t reflect what its volume customers actually pay.

If AMD’s Rome is as good as it looks, we should see increased OEM adoption of the part compared to first-generation Epyc, as well as some reaction from Intel. It can take server customers multiple product generations to move to new vendors, but they do eventually take notice.
