Nvidia challenger plots an IPO at a valuation of more than $4 billion
The Challengers Pursuing Nvidia
According to The Information, Cerebras, an artificial intelligence chip developer challenging Nvidia, has confidentially filed for an initial public offering (IPO) with securities regulators.
The startup has indicated that it will issue preferred shares at a price well below that of its last private funding round, a move that could make its stock more attractive both to private investors ahead of the IPO and to those buying into the public listing.
The IPO plan suggests that the eight-year-old company hopes to capitalize on investor enthusiasm for artificial intelligence hardware, which has made Nvidia the world's most valuable company and lifted dozens of other stocks. Cerebras will need to show how it plans to win share of the AI server chip market and loosen Nvidia's grip on it.
The startup's financial performance is unclear. In a December blog post, the company said it had recently reached "cash flow break-even," but did not elaborate. Cerebras declined to comment. According to a document shared by the Prime Unicorn Index, in early June Cerebras approved the issuance of about 27 million new shares at $14.66 per share, well below the $27.74 per share of its 2021 Series F round, when its valuation exceeded $4 billion. The new share authorization implies a Cerebras valuation of about $2.5 billion.
It is unclear whether Cerebras has actually sold these shares, or who would own them. It is common for companies to raise money ahead of an IPO, sometimes to shore up the balance sheet before public investors consider buying in.
Headquartered in Sunnyvale, California, Cerebras has previously raised $720 million from investors such as Coatue Management, Benchmark, and G42, an Abu Dhabi technology group that has commissioned Cerebras to build three supercomputers for it. According to an April Bloomberg report, Cerebras has hired Citigroup as the lead bank for its initial public offering.

Founded in 2016, Cerebras is one of the few startups aiming to compete with Nvidia. It offers chips designed specifically for training AI models and works with organizations such as the Mohamed bin Zayed University of Artificial Intelligence in Abu Dhabi and the Abu Dhabi healthcare company M42 to develop large language models tailored for Arabic or healthcare applications.
Beyond competing with Nvidia, chip developers also face challenges such as shortages of chip components and keeping their chips abreast of fast-moving AI software. Nvidia has a built-in advantage: AI developers are already accustomed to writing machine learning applications with CUDA, Nvidia software that works only with Nvidia chips.
In March of this year, Cerebras released its latest chip, the Wafer Scale Engine 3, which the company claims can train AI models ten times larger than the industry's biggest models, such as OpenAI's GPT-4 or Google's Gemini. The company said the Mayo Clinic and AstraZeneca are among its customers.
The AI stock boom led by Nvidia has extended to other newly listed shares. Astera Labs, for example, which sells data center components to companies such as Amazon Web Services and Microsoft, saw its stock rise 76% on its first day of trading in March and has stayed above its IPO price.
According to data Caplight collected from broker-dealers handling bids for existing shares, secondary-market investors value Cerebras at between $4.2 billion and $5 billion.
Competitors of Nvidia
Modular
Founded: 2022
Founders: Chris Lattner (CEO), Tim Davis (CPO)
Investors: GV, Greylock Partners, Factory HQ, SV Angel
Equity Financing: $30 million
Modular is developing a development platform and programming language for training and running machine learning models. It lets users choose from a range of AI tools, including TensorFlow, the open-source framework originally developed by Google, and PyTorch, the open-source framework originally developed by Meta Platforms. Modular users can then run their models on server chips ranging from Nvidia's to Intel's to AMD's.
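The article does not describe Modular's actual APIs; as a purely illustrative sketch (all class and function names here are hypothetical), the portability pitch resembles a dispatch layer that separates the model definition from the chip-specific backend that executes it:

```python
# Illustrative sketch only -- NOT Modular's real API.
# Idea: define a model once, then dispatch execution to a
# chip-specific backend (Nvidia, Intel, AMD, ...) chosen at run time.

class Backend:
    """Base class for a chip-specific execution backend (hypothetical)."""
    name = "generic"

    def matmul(self, a, b):
        # Naive pure-Python matrix multiply as the portable fallback path.
        return [[sum(x * y for x, y in zip(row, col))
                 for col in zip(*b)] for row in a]

class NvidiaBackend(Backend):
    name = "nvidia"   # a real system would call CUDA kernels here

class AMDBackend(Backend):
    name = "amd"      # a real system would call ROCm kernels here

BACKENDS = {cls.name: cls for cls in (Backend, NvidiaBackend, AMDBackend)}

def run_model(inputs, weights, target="generic"):
    """Run the same 'model' (a single matmul here) on the chosen backend."""
    backend = BACKENDS[target]()
    return backend.matmul(inputs, weights)

# The same model definition runs unchanged on either target:
w = [[1, 0], [0, 1]]   # identity weights, so the input passes through
x = [[2, 3]]
print(run_model(x, w, target="generic"))   # [[2, 3]]
print(run_model(x, w, target="nvidia"))    # [[2, 3]]
```

The point of the pattern is that swapping the `target` string changes which hardware path executes, while the model code above it stays identical; this is the portability claim Modular makes for its platform.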
MatX
Founded: 2022
Founders: Reiner Pope (CEO), Mike Gunter (CTO)
Investors: Outset Capital, SV Angel, Homebrew
Equity Financing: Unknown
MatX is developing chips built specifically for large language models in text applications. They are designed to run faster and at lower cost than general-purpose hardware such as Nvidia's GPUs, which support a wide variety of AI applications, including image generation.
The startup's founders are former Google employees. MatX CEO Reiner Pope helped build the search giant's Pathways language model and wrote software to run the model efficiently. MatX CTO Mike Gunter worked on Tensor Processing Units, Google's dedicated AI hardware that competes with Nvidia's GPUs.

According to its website, the Mountain View, California-based company has also received backing from Palo Alto Networks Chief Business Officer Amit Singh and from researchers working on large language models (LLMs). It is unclear whether MatX has any customers, though the company says it has backing from startups such as OpenAI and Anthropic. The founders did not respond to interview requests.
Qyber
Founded: 2022
Founders: Guillaume Verdon, Tom Hubregtsen
Investors: Hof Capital, Julian Capital
Equity Financing: Unknown
A person directly familiar with the matter said Qyber is also developing a chip designed specifically to run large language models (LLMs).
The stealth startup, founded by former researchers at X, focuses on quantum computing. The company's website is tight-lipped about its plans; one page states: "The next era of computing is not dead. It is alive." According to PitchBook, the company has raised money from New York venture capital firm Hof Capital and San Francisco seed fund Julian Capital. A person with knowledge of the matter said the startup has recently been in talks with potential investors about a new funding round.
Rain Neuromorphics
Founded: 2017
Founders: Gordon Wilson (Chief Executive Officer), Jack Kendall (Chief Technology Officer), Juan Nino (Chief Scientific Advisor)
Investors: Airbus Ventures, Baidu Ventures, FoundersX Ventures, Sam Altman, Daniel Gross
Equity Financing: $33 million
Rain Neuromorphics (now known as Rain AI) targets the high cost of training and running machine learning models on traditional GPUs. Those costs stem partly from the heat chips generate while moving data between memory and processing components, which requires constant cooling of the GPUs and drives up data centers' power bills. Chief Executive Officer Gordon Wilson said Rain's chips and software combine memory and processing functions, much as synapses do in the human brain.
Tiny Corp
Founded: 2022
Founder: George Hotz
Investors: Undisclosed
Equity Financing: $5.1 million
Tiny Corp's products help developers speed up training and running machine learning models. George Hotz, founder and former CEO of the autonomous driving startup Comma AI, is building a business around tinygrad, a set of open-source deep learning tools. In a GitHub post in May, Hotz said he believes tinygrad can become a "strong competitor" to PyTorch, the deep learning framework introduced by Meta Platforms.

SiMa.ai
Founded: 2018
Founder: Krishna Rangasayee (Chief Executive Officer)
Investors: Fidelity, Amplify Partners, Dell Technologies Capital, VentureTech Alliance
Equity Financing: $200 million
SiMa.ai is developing hardware and software to support AI software at the "edge": in devices such as aircraft, drones, automobiles, and medical equipment, rather than in data centers. Krishna Rangasayee, who worked at the chipmaker Xilinx for nearly 20 years, founded SiMa.ai in 2018. He said in an interview that he hopes to meet growing demand for new hardware from industries that cannot access cloud AI computing for various reasons.
Lightmatter
Founded: 2017
Founders: Nicholas Harris (Chief Executive Officer), Darius Bunandar, Thomas Graham
Investors: Matrix Partners, Spark Capital, Viking Global, SIP Global Partners
Equity Financing: $266 million
Lightmatter uses lasers to move data between chips and server clusters, whereas chips from suppliers such as Nvidia, AMD, and Intel transmit data over wires. The startup, founded by graduates of the Massachusetts Institute of Technology (MIT), builds on technology patented by the university. Co-founder and CEO Nicholas Harris said Lightmatter's products can help businesses cut data center energy costs by about 80% when training and running machine learning models. Harris said Lightmatter also plans to license its technology to companies like Nvidia, AMD, and Intel for use in their own chips.
D-Matrix
Founded: 2019
Founders: Sid Sheth (CEO), Sudeep Bhoja
Investors: Microsoft, SK Hynix, Playground Global, Entrada Ventures
Equity Financing: $51 million
D-Matrix is developing a chip and software, built specifically for running machine learning models, that integrate processing and memory, typically separate components on a chip. Co-founder and CEO Sid Sheth said this approach means D-Matrix's chips generate less heat and need less cooling, making them a more cost-effective choice than GPUs and CPUs from the main suppliers. That matters, he said, as many companies look to build generative AI applications on large language models (LLMs) like OpenAI's GPT-4. Sheth said D-Matrix chose to focus on inference, that is, running machine learning models, rather than training them, because it believes these models will become larger and more expensive to run over time. The company has customers testing its chips and software and plans to commercialize them in the first half of next year.