Short statement about the imminent emergence of artificial general intelligence
Herbert Roitblat
Artificial intelligence, data science, eDiscovery
January 7, 2025
Sam Altman announced recently (https://blog.samaltman.com/reflections) that “We are now confident we know how to build AGI as we have traditionally understood it.” He may be confident, but I doubt very seriously that they do, in fact, know much of anything about accomplishing artificial general intelligence (AGI). I have just finished a paper on the topic, and while waiting for it to appear, I want to respond to Altman’s claim.
AGI, as we have traditionally understood it, means what Newell and Simon described in 1958: “It is not my aim to surprise or shock you—but the simplest way I can summarize is to say that there are now in the world machines that can think, that can learn and that can create. Moreover, their ability to do these things is going to increase rapidly until – in a visible future – the range of problems they can handle will be coextensive with the range to which the human mind has been applied.”
Current AI models of practically every flavor are focused on well-structured problems. They are given a space of parameters and a tool for finding a configuration of that space that solves the problem. The core of the problem solving is provided by humans.
What humans contribute to solving GenAI problems:
Training data
Number of neural network layers
Types of layers
Connection patterns
Activation functions
Training regimen for each layer
Number of attention heads
Parameter optimization method
Context size
Representations of words as tokens and vectors
Training task
Selection of problems to solve
Training progress measures and criteria
Human feedback for reinforcement learning
Rules for modifying parameters as a result of human feedback
Prompt
Temperature and other meta-parameters
What the machine contributes to solving GenAI problems:
Parameter adjustments through gradient descent
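To make this division of labor concrete, here is a minimal training loop sketched in Python with PyTorch. It is a toy of my own construction, not any particular production system, but the pattern is the same: nearly every line records a human decision, and the machine’s entire contribution is the two lines that compute gradients and nudge the parameters.

import torch
import torch.nn as nn

# Human choices: the data, the architecture, the layer types and sizes,
# the activation function, the training objective, the optimizer, and the
# learning rate. None of this is discovered by the machine.
model = nn.Sequential(
    nn.Linear(64, 128),   # human: layer type, connection pattern, width
    nn.ReLU(),            # human: activation function
    nn.Linear(128, 10),   # human: output size, i.e., how the problem is framed
)
loss_fn = nn.CrossEntropyLoss()                           # human: training task
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)  # human: optimization method, meta-parameters

x = torch.randn(32, 64)            # human: (toy) training data
y = torch.randint(0, 10, (32,))    # human: what counts as a correct answer

for step in range(100):            # human: training regimen
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()                # machine: compute gradients
    optimizer.step()               # machine: adjust parameters by gradient descent

Scaling this up by many orders of magnitude changes the cost, not the division of labor.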
ChatGPT and other transformer-based models are also highly dependent on humans to create prompts. This human contribution is rarely acknowledged, but there would be no semblance of intelligence without it. All of this human contribution is anthropogenic debt, akin to technical debt. It will have to be resolved before a system can be autonomous. For now, and for the foreseeable future, there is no machine intelligence without human intelligence.
GenAI models are trained to fill in the blanks, a task invented by human designers. There is no theory for how one gets from a fill-in-the-blanks machine to cognition. In the absence of a theory, attributing cognition to emergence with scale is nothing more than wishful thinking. It is play-acting at science.
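To see what “fill in the blanks” amounts to operationally, here is another toy sketch in Python with PyTorch. The six-word vocabulary, the training example, and the crude bag-of-words context summary are my own inventions for illustration; real models use tokenizers, attention, and vastly more data, but the objective has the same shape: make the held-out token more probable. Nothing in it mentions understanding, goals, or the world.

import torch
import torch.nn as nn

vocab = ["the", "cat", "sat", "on", "mat", "<pad>"]
tok = {w: i for i, w in enumerate(vocab)}

# Human-designed training example: "the cat sat on the ___" with answer "mat".
context = torch.tensor([[tok["the"], tok["cat"], tok["sat"], tok["on"], tok["the"]]])
target = torch.tensor([tok["mat"]])

embed = nn.Embedding(len(vocab), 16)       # human: representation of words as vectors
head = nn.Linear(16, len(vocab))           # human: score every word in the vocabulary

logits = head(embed(context).mean(dim=1))  # crude context summary; real models use attention
loss = nn.functional.cross_entropy(logits, target)
loss.backward()                            # the whole learning signal: make "mat" more likely here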
The attribution of cognition to current models is based on a logical fallacy (affirming the consequent). The fact that a model succeeds at a test says nothing about how it succeeded. Did it succeed by being a stochastic parrot? By raw association? By narrow problem solving through parameter adjustment? Success does not tell us which, if any, of these explanations is true. Finding that cookies are missing from the cookie jar does not tell us who took them.
Natural problems are not structured in the way today’s machines require. One of the biggest problems we face as a society, for example, is how to eliminate poverty. We do not know what the parameters are that would enable us to solve this problem, let alone how to adjust them.
When Einstein wrote about the equivalence of energy and matter, his idea was contrary to the general thinking of the time. It was revolutionary. Today’s models can parrot language patterns that were included in their training set, but they cannot produce insights that run contrary to those patterns.
These are just a few of the reasons why I doubt that we are on the threshold of general intelligence. These concerns are rarely even recognized, but unless they are addressed through new insights, discoveries, and inventions, there is no chance of achieving artificial general intelligence.