Why OpenAI needs another $6.6 billion in VC money
Welcome to AI Decoded, Fast Company’s weekly newsletter that breaks down the most important news in the world of AI. You can sign up to receive this newsletter every week here.
OpenAI raises another $6.6 billion at a $157 billion valuation
OpenAI’s product lineup and cast of executives are quickly evolving, but one thing has stayed the same: the company’s need for a lot of cash. The company’s main innovation—putting massive computing power behind generative AI models—is, by definition, expensive, given the high cost of server time and specialized chips, not to mention the expensive PhDs needed to run it all.
Now the company has added another $6.6 billion to its estimated $13.5 billion in funding. Its investors now value the company at $157 billion, up from its $86 billion valuation in February. The round was led by venture-capital firm Thrive Capital ($1.25 billion) and Microsoft ($1 billion), the Wall Street Journal reports. SoftBank and Nvidia invested for the first time.
OpenAI asked the investors not to invest in its close rivals: Anthropic, Elon Musk’s xAI, and Safe Superintelligence, cofounded by OpenAI’s former chief scientist Ilya Sutskever.
OpenAI is making revenue through its API fees and ChatGPT subscriptions, but it’s far from profitable. The new money will reportedly pay for a year of runway as the company works toward generating returns on its investments. The new funding comes on the heels of the departure of CTO Mira Murati, the latest in a string of OpenAI researchers and executives to resign this year. The company also held its developer event this week, at which it announced that developers can essentially build ChatGPT’s Advanced Voice Mode into their own apps.
Perhaps most importantly, OpenAI changed the architecture of its models with its newest “o1” line. The models are designed to work through complex multistep problems. The way they do this is by trying multiple lines of reasoning concurrently, judging the answers, then selecting the best one. This entails generating a lot more tokens at inference time, which requires a lot more compute power. OpenAI gets some of that compute power from Microsoft’s Azure cloud, but it will have to build more clusters of its own and make investments in chips that are optimized for its workloads.
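The selection strategy described above resembles what's often called best-of-n sampling. Here is a minimal, hypothetical sketch of the idea (the model and scoring function are stand-ins, not OpenAI's actual implementation):

```python
import random

def generate_reasoning_chains(prompt, n):
    # Stand-in for sampling n independent reasoning chains from a model.
    # Each candidate pairs an answer with a quality score; real systems
    # would use a learned verifier or reward model to score answers.
    return [(f"candidate-answer-{i}", random.random()) for i in range(n)]

def best_of_n(prompt, n=4):
    # Generating n chains means roughly n times the tokens at inference
    # time -- which is why this approach demands so much more compute.
    candidates = generate_reasoning_chains(prompt, n)
    best_answer, best_score = max(candidates, key=lambda c: c[1])
    return best_answer
```

The key cost trade-off: answer quality tends to improve with larger n, but inference compute grows linearly with it.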
California’s vetoed AI bill won’t be the last to focus on biggest AI models
Earlier this week, California Governor Gavin Newsom vetoed legislation that would have imposed safety and transparency requirements on developers of large frontier AI models such as Meta’s Llama and OpenAI’s GPT-4o. Newsom said that by focusing only on the largest models, regulators might overlook the risks of smaller models deployed in high-risk environments, used for critical decision-making, or handling sensitive data.
But even though the bill, SB 1047, ultimately failed, it may be the most high-profile and hotly contested AI bill the U.S. has seen to date.
“[T]he debate around SB 1047 has dramatically advanced the issue of AI safety on the international stage,” the bill’s lead author, State Senator Scott Wiener, said in a statement responding to Newsom’s veto. “Major AI labs were forced to get specific on the protections they can provide to the public through policy and oversight.”
Some believe the bill’s popularity among lawmakers and the public will embolden other states to act. “With California’s tech industry, it’s not surprising that they were the first to attempt to pass legislation like this, and it wouldn’t surprise me if other states make similar attempts,” says Brian O’Neill, a computer science professor at Quinnipiac University.
In some cases, that’s already happening. Colorado, for example, enacted a “comprehensive” AI law, SB24-205, in May, which aims to protect consumers from AI systems making biased decisions about them. The bill, which takes effect in 2026, is similar to SB 1047 in that it imposes safety and transparency requirements on AI model developers, but different in that its main focus is preventing immediate, specific harms such as AI bias (as opposed to future catastrophic harm to the public).
And the California legislature is bound to be very active next session. “I expect several dozen AI-related bills to emerge out of the California legislature alone in the next legislative session,” says Dean Ball, research fellow with the Mercatus Center, “just as there were several dozen in this term [17 of which were signed by the Governor]. Many of those are likely to be focused on present-day risks as opposed to catastrophic risks from frontier AI systems—but undoubtedly, some will be focused on frontier AI regulation.”
Marc Andreessen: The AI purists will win
Influential VC Marc Andreessen said this week that his firm, Andreessen Horowitz, sees AI as something more than a technological shift on the order of the arrival of the internet or mobile computing. It’s more like a completely different kind of computing, he said during an on-stage interview Tuesday with Anyscale’s Robert Nishihara at the Ray Summit in San Francisco. Where traditional computers are deterministic (they follow explicit orders in code), neural networks are “probabilistic,” meaning they can do a wider array of things based on less specific instructions, using math to find the most likely answer. And you might get a different answer every time you ask the same question.
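The deterministic/probabilistic contrast Andreessen draws can be illustrated with a toy example. Below is a simplified, illustrative sketch (not how any production model actually samples) of temperature-based token sampling, alongside an ordinary deterministic function:

```python
import math
import random

def sample_token(logits, temperature=1.0):
    # Softmax over logits, then sample from the resulting distribution.
    # The same input can yield different outputs on different calls --
    # the "probabilistic" behavior Andreessen describes.
    weights = [math.exp(l / temperature) for l in logits]
    total = sum(weights)
    probs = [w / total for w in weights]
    r = random.random()
    cumulative = 0.0
    for token_id, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return token_id
    return len(probs) - 1

def add(a, b):
    # A deterministic function, by contrast, always returns the same
    # result for the same inputs.
    return a + b
```

Lowering the temperature concentrates probability mass on the highest-scoring token, making outputs more repeatable; raising it spreads the mass out, making them more varied.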
Andreessen said that the next wave of startups will apply this kind of computing to all kinds of business and life tasks—an AI travel agent might act on our behalf to plan and book a vacation, for example. Andreessen said startups that build their product around AI from the start will win out against companies that try to bolt AI onto an older product without fully reimagining the task. “That’s like adding flour to a cake after it’s already been baked,” he said. Andreessen said his firm has a term for this, the “sixth bullet,” referring to the bullet point companies often add to their pitch decks to show that their product leverages AI.
More AI coverage from Fast Company:
- AppliedXL and Associated Press team up to use AI to sift through federal regulations
- Why concerns about OpenAI’s new logo are about more than design
- Avi Loeb believes AI could save humanity—but first we have to stop feeding it junk food
- 10 ways AI can make your programming job easier—and more efficient
Want exclusive reporting and trend analysis on technology, business innovation, future of work, and design? Sign up for Fast Company Premium.