The underlying technology behind most widely available artificial intelligence models is the large language model, a form of machine learning applied to language processing. The bet most AI companies are making is that LLMs, if fed enough data, will achieve something like full autonomy, thinking and functioning in ways similar to humans but with far more collective knowledge. It turns out that betting on infinite growth might not have great odds of paying off. A new study claims to offer mathematical proof that “LLMs are incapable of carrying out computational and agentic tasks beyond a certain complexity.”
The paper, published by father-and-son researchers Vishal Sikka and Varin Sikka and surfaced recently by Wired after its initial publication flew under the radar, reaches a fairly simple conclusion, though it takes quite a bit of complicated math to get there. Distilled as simply as possible, it argues that certain prompts or tasks will demand more computation than the model is capable of performing; when that happens, the model will either fail to complete the requested action or carry it out incorrectly.
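For a rough intuition of why complexity limits bite hardest for multi-step agents, consider a toy sketch (this is an illustration, not the Sikkas' actual mathematics, and the per-step reliability figure is an assumption): if an agent must chain together many steps and each step has even a small chance of going wrong, the odds of the whole task succeeding collapse as the chain grows.

```python
# Toy illustration (not the paper's proof): an agent chaining n steps,
# each succeeding independently with probability p_step, completes the
# whole task with probability p_step ** n, which decays exponentially.

def task_success_probability(p_step: float, n_steps: int) -> float:
    """Probability that an n-step chain finishes with no failed step,
    assuming independent per-step success probability p_step."""
    return p_step ** n_steps

if __name__ == "__main__":
    p = 0.99  # an optimistic 99% per-step reliability (assumed, for illustration)
    for n in (10, 100, 1000):
        print(f"{n:>5} steps: {task_success_probability(p, n):.4f}")
    # Prints roughly 0.9044, 0.3660, and 0.0000 (about 4.3e-5):
    # even near-perfect steps make long autonomous chains unreliable.
```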
The basic premise of the research pours cold water on the idea that agentic AI, models designed to carry out multi-step tasks autonomously and without human supervision, will be the vehicle for achieving artificial general intelligence. That's not to say the technology has no use or won't improve, but it does place a much lower ceiling on what is possible than AI companies like to acknowledge when giving a “sky is the limit” pitch.
The researchers aren’t the first to suggest LLMs may not be all they’re cracked up to be, though their research does put real math behind the sense that many AI skeptics have expressed. Last year, researchers at Apple published a paper that concluded that LLMs are not capable of actual reasoning or thinking, despite creating the appearance of doing so. Benjamin Riley, founder of the company Cognitive Resonance, wrote last year that because of how LLMs work, they will never truly achieve what we consider to be “intelligence.” Other studies have tested the limits of LLM-powered AI models to see if they are capable of producing novel creative outputs, with pretty uninspiring results.
But if none of that is convincing and elaborate mathematical equations are more your thing, the Sikkas’ study may be the proof you need. It adds to a mounting body of evidence suggesting that whatever AI may be capable of in its current form, it almost certainly won’t surpass human intelligence by the end of this year, as Elon Musk recently claimed.
Original Source: https://gizmodo.com/ai-agents-are-poised-to-hit-a-mathematical-wall-study-finds-2000713493
Disclaimer: This article is a reblogged/syndicated piece from a third-party news source. Content is provided for informational purposes only. For the most up-to-date and complete information, please visit the original source. Digital Ground Media does not claim ownership of third-party content and is not responsible for its accuracy or completeness.
