Foundation Capital, now 24 years old, just closed its ninth fund with $350 million in capital commitments


Not all venture firms are long for this world. Though they tend to shut down exceedingly quietly, it happens: when the returns just aren't compelling, when a firm grows too fast, when there's infighting, or when there's no solid succession plan.

Foundation Capital, founded in 1995, had its own kind of reckoning in the aftermath of the 2008 financial crisis, owing to a little bit of all of these things.

Like a lot of firms that had begun raising ever-bigger funds with ever-bigger teams, the once-small firm closed its sixth fund with $750 million in capital commitments in 2008 before being forced to scale back dramatically. It closed its seventh fund with $282 million in 2013 with a whopping eight general partners, then parted ways with half of them; closed its eighth fund with $325 million in late 2015; and did what it could to right the ship.

It plainly pulled it off. Today, the firm is announcing that it has closed its ninth fund with $350 million in capital commitments and the smallest pool of active general partners it has had in years: Ashu Garg, who joined Foundation in 2008 after spending the previous four years at Microsoft; Charles Moldow, who joined the firm in 2005 after spending the previous five years as a senior vice president at Tellme Networks (later acquired by Microsoft); and Steve Vassallo, who joined the outfit in 2007 after spending a couple of years as a VP of product and engineering at Ning, a social network co-founded by Marc Andreessen.

A fourth general partner with Foundation’s previous funds, Paul Holland, who joined Foundation in 2001, continues to manage out his investments.

Some notable exits were surely helpful for the trio, including the IPOs of Sunrun (2015), LendingClub (2014), TubeMogul (2014) and Chegg (2013). But we’re guessing Foundation’s newer bets intrigued limited partners even more.

Among the firm's most interesting deals: the biomaterials company Bolt Threads, which is growing artificial spider silk and closed its Series D round last year; Fair, the fast-growing car subscription app that has already locked down at least $1.6 billion in equity and debt funding; and Cerebras, a next-generation silicon chip company that launched publicly last month after almost three years of quiet development, surprising many with its very large and very fast processor, which houses 1.2 trillion transistors, 18 gigabytes of on-chip memory and 400,000 processing cores across its 46,225 square millimeters.

In fact, that last company was incubated at Foundation's office, and it isn't the only one to get its start with the firm's help. Another example of a de novo investment is States Title, an insurtech platform that was founded in 2016 and has gone on to raise $106.6 million, according to Crunchbase.

Starting from scratch is a “more repeatable and sustainable way of building ownership in a company,” explains Moldow. By “putting teams together with a bunch of ideas,” Foundation can “build companies from whole cloth” rather than “play the auction game where prices keep getting crazier and crazier.”

Foundation’s broader staff includes partner Joanne Chen, who joined Foundation in 2014 and focuses on enterprise and AI; partner Rodolfo Gonzalez, who joined the firm in 2013 and focuses on fintech, Latin America, and crypto; and the firm’s newest partner, Li Sun, who is helping to spearhead the firm’s frontier tech practice.

The firm tends to make between 10 and 12 new investments each year, writing checks from $6 million to $10 million typically as part of a Series A deal, though it will invest as little as a few thousand dollars in the right opportunity.

As for later-stage investments, the firm does not have an opportunity fund currently, nor does it assemble special purpose vehicles, which are basically pop-up funds that come together to make an investment in a single company. Instead, says Vassallo, it facilitates direct investments into companies for its limited partners.

We get the impression that could change at some point. Indeed, the new, smaller Foundation Capital seems very focused on trying out a lot of new things.

As Moldow says, “At one point, we had nine GPs and $750 million [in fresh capital to invest]. The evolution [to the firm’s current iteration] took a lot of work. At first it was, how do you fix this? In the last five to seven years, it has been, how do we excel at this?”

Pictured above, from left to right: Charles Moldow, Steve Vassallo and Ashu Garg.



Cerebras Systems Unveils 1.2 Trillion Transistor Wafer-Scale Processor for AI




Modern CPU transistor counts are enormous; AMD announced earlier this month that a full implementation of its 7nm Epyc "Rome" CPU weighs in at 32 billion transistors. To this, Cerebras Systems says: "Hold my beer." The AI-focused company has designed what it calls the Wafer Scale Engine. The WSE is roughly square, measuring about 8.5 inches on a side, and contains roughly 1.2 trillion transistors.

I'm genuinely surprised to see a company bringing a wafer-scale product to market this quickly. The idea of wafer-scale processing has attracted some attention recently as a potential solution to performance scaling difficulties. In the study we discussed earlier this year, researchers evaluated the idea of building an enormous GPU across most or all of a 100mm wafer. They found that the technique could produce viable, high-performance processors and that it could also scale effectively to larger node sizes. The Cerebras WSE definitely qualifies as large; its total surface area is much larger than the hypothetical designs we considered earlier this year. It isn't a full-sized 300mm wafer, but it has more surface area than a 200mm wafer does.

The largest GPU, just for comparison, measures 815 square millimeters and packs 21.1 billion transistors. So the Cerebras WSE is just a bit bigger, as these things go. Some companies send out pictures of their chips held up next to a diminutive common object, like a quarter. Cerebras sent out a photo of its die next to a keyboard.
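The back-of-the-envelope math behind those comparisons is easy to check. Here's a quick sketch using the figures cited in this article, treating wafers as simple circles and ignoring edge exclusion:

```python
import math

# Figures cited in this article
wse_area_mm2 = 46_225            # Cerebras WSE total silicon area
wse_transistors = 1.2e12         # ~1.2 trillion transistors
gpu_area_mm2 = 815               # largest conventional GPU die, for comparison
gpu_transistors = 21.1e9         # ~21.1 billion transistors

# Idealized wafer areas: pi * r^2, ignoring edge exclusion and scribe lines
area_200mm = math.pi * (200 / 2) ** 2    # ~31,416 mm^2
area_300mm = math.pi * (300 / 2) ** 2    # ~70,686 mm^2

print(f"WSE vs 200mm wafer area: {wse_area_mm2 / area_200mm:.2f}x")    # ~1.47x
print(f"WSE vs 300mm wafer area: {wse_area_mm2 / area_300mm:.2f}x")    # ~0.65x
print(f"WSE vs largest GPU: {wse_area_mm2 / gpu_area_mm2:.0f}x the area, "
      f"{wse_transistors / gpu_transistors:.0f}x the transistors")     # ~57x for both
```

In other words, the WSE exceeds the usable area of a 200mm wafer, falls short of a 300mm one, and is roughly 57 times the size of the biggest conventional GPU die.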


Not Pictured: PCIe x1600 slot.

As you can see, it compares fairly well.

The Cerebras WSE contains 400,000 sparse linear algebra cores, 18GB of total on-die memory, 9PB/sec worth of memory bandwidth across the chip, and separate fabric bandwidth of up to 100Pbit/sec. The entire chip is built on TSMC's 16nm FinFET process. Because the chip is built from most of a single wafer, the company has implemented methods of routing around bad cores on-die and can keep its arrays connected even if a section of the wafer contains defective cores. The company says it has redundant cores implemented on-die, though it hasn't discussed specifics yet. Details on the design are being presented at Hot Chips this week.
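Cerebras hasn't published the details of that redundancy scheme, so the following is only a generic sketch of the idea, not the company's actual method: logical cores get mapped onto working physical cores, with known-bad cores skipped in favor of on-die spares. Every name and number here is hypothetical.

```python
# Hypothetical illustration only: Cerebras has not published its actual
# redundancy scheme. This sketch shows the general idea of mapping a
# logical core grid onto physical cores while skipping known-bad ones.

def build_core_map(num_physical, defective, num_logical):
    """Assign logical core IDs to working physical cores, skipping defects."""
    working = [core for core in range(num_physical) if core not in defective]
    if len(working) < num_logical:
        raise RuntimeError("not enough working cores to satisfy the design")
    # Working cores beyond num_logical act as unused spares
    return dict(zip(range(num_logical), working))

# Toy example: 12 physical cores, two defective, design exposes 9 logical cores
core_map = build_core_map(num_physical=12, defective={3, 7}, num_logical=9)
print(core_map)  # logical cores 0..8 mapped onto physical cores, skipping 3 and 7
```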

The WSE — “CPU” simply doesn’t seem sufficient — is cooled using a massive cold plate sitting above the silicon, with vertically mounted water pipes used for direct cooling. Because there’s no traditional package large enough to fit the chip, Cerebras has designed its own. PCWorld describes it as “combining a PCB, the wafer, a custom connector linking the two, and the cold plate.” Details on the chip, like its raw performance and power consumption, are not yet available.

A fully functional wafer-scale processor, commercialized at scale, would be an exciting demonstration of whether this technological approach has any relevance to the wider market. While we're never going to see consumer components sold this way, there's been interest in using wafer-scale processing to improve performance and power consumption in a range of markets. If consumers continue to move workloads to the cloud, especially high-performance workloads like gaming, it's not crazy to think we might one day see GPU manufacturers taking advantage of this idea, building arrays of parts that no individual could ever afford in order to power cloud gaming systems in the future.
