Igor Babuschkin, co-founder of xAI, during the Nvidia GPU Technology Conference (GTC) in San Jose, California, US, on Tuesday, March 19, 2024.
Igor Babuschkin, a founding member of Elon Musk’s xAI, said Wednesday that he’s leaving the artificial intelligence startup to launch his own venture firm.
“Today was my last day at xAI, the company that I helped start with Elon Musk in 2023,” Babuschkin wrote on X, which is owned by xAI. “I still remember the day I first met Elon, we talked for hours about AI and what the future might hold. We both felt that a new AI company with a different kind of mission was needed. Building AI that advances humanity has been my lifelong dream.”
Musk wrote, in response, “Thanks for helping build @xAI! We wouldn’t be here without you.”
Babuschkin said he’s starting Babuschkin Ventures to support AI safety research and invest in startups in “AI and agentic systems that advance humanity and unlock the mysteries of our universe.”
A former research engineer for Google’s DeepMind and ex-member of OpenAI’s technical staff, Babuschkin recounted some of xAI’s major operational achievements during his tenure, including building out engineering teams at the company.
“Through blood sweat and tears, our team’s blistering velocity built the Memphis supercluster, and shipped frontier models faster than any company in history,” he wrote.
The facility in Memphis processes data and trains the models that power xAI’s Grok chatbot.
Locals have protested xAI’s operations in Memphis, especially its use of natural gas-burning turbines to power its data centers. Emissions from the turbines are reportedly worsening the poor air quality in the West Tennessee city.
At the time he was preparing to go into business with Musk, Babuschkin wrote that he believed “very soon AI could reason beyond the level of humans,” and was concerned about making sure such technology is “used for good.”
He said that “Elon had warned of the dangers of powerful AI for years,” and shared his vision of “AI used to benefit humanity.”
The company has a rocky track record when it comes to AI safety.
In May, xAI’s Grok chatbot automatically generated and spread false posts about alleged “white genocide” in South Africa. After that, the company apologized and said Grok’s strange behavior was caused by an “unauthorized modification” to the chatbot’s system prompts, which help inform the way it behaves and interacts with users.
In July, xAI found itself apologizing for another problem with Grok. After a code update, the chatbot automatically generated and spread false and antisemitic content across X, including posts praising Adolf Hitler.
The European Union requested a meeting last month with representatives from xAI to discuss problems with X and the integrated Grok chatbot.
Babuschkin and xAI didn’t respond to requests for comment.
Other chatbots have also generated false or otherwise dangerous outputs in response to queries. OpenAI’s ChatGPT was recently called out for giving bad health advice to a user who wound up in the emergency room. And Google had to make changes to Gemini last year after it generated offensive images in response to user prompts about history, including images that depicted people of color as Nazis.
