Will 2026 Be the Year That the AI Industry Stops Crowing About ‘AGI’?

Up until now, the stated end goal for companies developing artificial intelligence (AI) products has almost universally been to achieve artificial general intelligence (AGI)—an ill-defined ambition that can best be summarized as a hypothetical AI that’s capable of matching and surpassing the cognitive abilities of humans. But now that we’ve basically bet the entire economy on hitting that benchmark and earmarked literally trillions of dollars to resource-sucking data centers with the express intent of providing the processing power needed to build the god-machine, the industry is suddenly, collectively backing off the promise.

This (frankly, pretty predictable) turn started last year. Back in August, OpenAI CEO Sam Altman said AGI is “not a super useful term,” and his company pivoted to talking about developing an AI capable of autonomous research rather than invoking AGI. That was noteworthy given that OpenAI was technically the only company with a formal definition of AGI: an AI system that can generate at least $100 billion in profits, per leaked internal documents.

Others around the industry have also started to throw cold water on the AGI concept. Salesforce CEO Marc Benioff, a guy so obsessed with AI that he’s considered changing the name of his company to reflect his undying affinity for the technology, described AGI as marketing “hypnosis” and said he’s “extremely suspect” of anyone who hypes it. Dario Amodei, CEO of Anthropic, said he’s “always disliked” the term AGI. Just recently, Anthropic President Daniela Amodei said AGI is an “outdated” term. Microsoft CEO Satya Nadella has gone so far as to say he doesn’t think “AGI as defined, at least by us in our contract, is ever going to be achieved anytime soon,” and said any self-declared AGI achievement is just “benchmark hacking,” which is funny considering Microsoft was integral to crafting OpenAI’s money-generating definition of AGI.

Some of the posturing from the industry on this whole shift away from AGI is being positioned as researchers simply having even loftier goals in mind—as if AGI is too limiting to truly describe what AI at its maximum capacity is capable of. But there’s a simpler explanation for this shift in language used to describe the AI end goal: large language models, the technology that most major AI companies have poured endless amounts of money and data into in order to achieve some form of general intelligence, simply are not capable of actually reaching that benchmark.

That’s a conclusion critics of the AI industry reached some time ago. People like Gary Marcus, a noted AI skeptic, have said, “pure scaling will not get us to AGI.” Similar conclusions can be found in recent research, including a paper from Apple that concludes LLMs are likely not capable of achieving AGI and a study from the Data Mining and Machine Learning Lab that concludes “chain of thought reasoning” in LLMs is “a mirage.” That suggests AGI isn’t just a bad metric because it’s hard to define; it’s likely not one that’s achievable with these dumb bots.

Original Source: https://gizmodo.com/will-2026-be-the-year-that-the-ai-industry-stops-crowing-about-agi-2000707012

Disclaimer: This article is a reblogged/syndicated piece from a third-party news source. Content is provided for informational purposes only. For the most up-to-date and complete information, please visit the original source. Digital Ground Media does not claim ownership of third-party content and is not responsible for its accuracy or completeness.
