Meta has been snapping up AI training chips and building out data centers in order to create a more powerful new chatbot it hopes will be as sophisticated as OpenAI’s GPT-4, according to The Wall Street Journal. The company reportedly plans to begin training the new large language model in early 2024, with CEO Mark Zuckerberg evidently pushing for it to once again be made freely available for companies to build AI tools on.
The Journal writes that Meta has been buying more Nvidia H100 AI-training chips and is beefing up its infrastructure so that, this time around, it won’t need to rely on Microsoft’s Azure cloud platform to train the new chatbot. The company reportedly assembled a group earlier this year to build the model, with the goal of speeding up the creation of AI tools that can emulate human expressions.
That goal feels like a natural extension of rumored generative AI features Meta has already been working on. A June leak claimed Instagram was testing a chatbot with 30 personalities, which sounds a lot like the unannounced AI “personas” the company is said to be launching this month.
Meta has reportedly dealt with heavy AI researcher turnover this year, driven in part by disputes over how computing resources were split between multiple LLM projects. It also faces stiff competition in the generative AI space. OpenAI said in April that it wasn’t training a GPT-5 and “won’t for some time,” but Apple has reportedly been pouring millions of dollars a day into its own “Ajax” AI model, which it apparently believes is more powerful than even GPT-4. Google and Microsoft have each been expanding the use of AI in their productivity tools, and Google wants to bring generative AI to Google Assistant. Amazon also has generative AI initiatives underway across its organization that could yield a chatbot-powered Alexa.