The hype around advanced artificial intelligence (AI) is starting to fizzle. OpenAI’s latest language model, code-named Orion, reportedly isn’t delivering the kind of leap over GPT-4 that GPT-4 delivered over GPT-3. Reports from Bloomberg and The Information suggest the model is underperforming, especially on coding tasks.
But OpenAI isn’t alone in facing setbacks. Google’s Gemini and Anthropic’s Claude 3.5 Opus are also said to be falling short of their anticipated performance. This pattern of diminishing returns across the industry suggests that the current approach of simply making models bigger and training them on more data may not be sustainable in the long run.
Margaret Mitchell, chief ethics scientist at Hugging Face, believes a shift in training approaches may be needed before AI reaches human-like intelligence and versatility. The industry’s heavy reliance on scaling (increasing model size and training data) is becoming costly and unsustainable, and companies are finding it increasingly difficult to source new, high-quality, human-made datasets, especially for language tasks.
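To see why returns diminish, it helps to look at the scaling-law research the industry has been leaning on. As a minimal sketch, assuming the Chinchilla-style power law from Hoffmann et al. (2022), which none of the reports above cite directly, model loss improves only as a fractional power of parameter count N and training tokens D:

$$ L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}} $$

Here E, A, B, α, and β are empirically fitted constants, with α and β well below 1. Because the exponents are fractional, each doubling of model size or data buys a smaller loss reduction than the last, while compute and data costs keep doubling. That is the squeeze Mitchell is describing.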
The costs of building and running cutting-edge AI models are skyrocketing. Anthropic CEO Dario Amodei estimates that by 2027, a single frontier model could cost more than $10 billion to develop. With Opus and Gemini both showing performance issues and limited gains, the industry’s rapid progress appears to be slowing.
As Noah Giansiracusa, a mathematics professor, points out, the era of rapid AI advancements may have been short-lived and unsustainable. The industry now faces the challenge of finding new approaches to AI development that can deliver significant breakthroughs without breaking the bank.
In summary, the AI bubble may be losing some of its air as companies grapple with the limits of current scaling strategies. If so, a new direction will be needed to keep AI innovation moving forward in a sustainable, cost-effective way.