We’ve entered the AI grift era
For years, Y Combinator has been revered as a highly selective startup accelerator with a hit rate that far exceeds the law of averages. But one of its latest companies has raised a few eyebrows, and prompted questions about whether AI grifters have caught up with the innovators.
PearAI, an AI-powered code editor, launched last week as part of the Y Combinator program. Its founder, Duke Pan, posted on X that he left a lucrative job to cofound the company, and that PearAI is in fact a fork—or clone—of two preexisting tools: VS Code and Continue. So far, all legal, though perhaps questionable. (But at least Pan admitted his product is a fork—and Pear’s FAQ highlights 11 differences between itself and Continue.)
However, within hours of the initial announcement, Pan admitted in a subsequent post that Pear had altered Continue’s open-source license, replacing it with a closed-source license of its own, something widely seen as a no-no in the open-source world.
That’s the formal, journalistic summary of what he said. Here’s what he actually posted on X: “dawg i chatgpt’d the license, anyone is free to use our app for free for whatever they want. if there’s a problem with the license just lmk i’ll change it. we busy building rn can’t be bothered with legal.”
The flippancy of taking an open-source, free-to-access tool, reskinning it to join Y Combinator, and then offering a blasé explanation is certainly jarring, and maybe damning for Pan’s project. But there’s an even darker element to the whole ordeal: It shows what happens when a fast-growing, unchecked industry is buoyed by a boatload of cash.
Funding for AI firms reached $23.2 billion in the second quarter of the year, the highest level on record, according to analyst firm CB Insights. Deal-making also rebounded from previous quarters of decline, with nearly 950 deals recorded by CB Insights in that three-month period.
There’s no suggestion that Pan or his cofounder sought to deliberately mislead people about Pear, and now that their mistake has been identified, they’ve taken action to remedy it. Pan later posted a remorseful message on X admitting they’d “screwed up.”
But there’s still the broader problem. Demis Hassabis, chief executive of Google’s AI research team, told the Financial Times earlier this year that he worries the cash flooding the AI space “brings with it a whole attendant bunch of hype and maybe some grifting.” Hassabis, who cofounded DeepMind, compared the present AI space unfavorably to crypto. And high-profile AI gadgets such as the Rabbit R1 and the Humane Ai Pin, both meant to bring generative AI to the masses, fell flat.
Even the biggest companies in the space have been criticized for overpromising and underdelivering: OpenAI touted its GPT-4o model at a demo day in mid-May, saying that real-time “voice and video” interaction would be arriving for users “in the coming weeks.” And while advanced voice mode has, as of this week, been rolled out in ChatGPT updates, the video functionality presented at that May demo has yet to materialize.
Things fall through and people make mistakes, to be sure. Grand ideas don’t always work out, and others fall by the wayside. But early impressions count for any technology. It’s why ChatGPT, despite wowing many, has not seen the widespread adoption that early hype-fueled estimates suggested it might. After reaching 100 million monthly active users in two months, it took nearly two years to get to 200 million. People saw the initial version of the chatbot, treated it as a fun game, but perhaps didn’t realize the utility it could have in their lives, and they are not easily convinced to try it again.
And with a technology touted to be as much of a game-changer as generative AI, it’s vitally important that companies don’t just talk the talk, but also walk the walk. Even if being more realistic, or honest, about where they’ve come from and where they’re going means forgoing a chance at funding.