Last month, or the month before (it’s all a bit of a blur), I started programming what I thought could be a general purpose AI engine. And it works! It can find any pattern that is computational, and thus solve any computationally defined problem. But it’s unfortunately completely inefficient for most interesting tasks. If it wanted to learn to play chess, it would try to solve the entire game. While mathematically possible, it would take far too long to compute and take up way too much memory to be of any use, what with combinatorial explosions and all. And I don’t even know how to define a creative task, such as drawing or storytelling, in any computationally useful way. So I really didn’t achieve much.
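To make the “solve the entire game” idea concrete, here’s a minimal sketch (in Python, my choice for illustration, not the engine I actually wrote) that exhaustively solves tic-tac-toe by minimax. It works only because tic-tac-toe’s game tree is tiny; chess has roughly 35 legal moves per position over games of ~80 plies, so the same brute-force approach faces a tree on the order of 35^80 nodes.

```python
from functools import lru_cache

# All eight winning lines on a 3x3 board, indexed 0..8 left-to-right, top-to-bottom.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if that player has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != '.' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def solve(board, player):
    """Exhaustively evaluate the position for 'X': +1 X wins, -1 O wins, 0 draw."""
    w = winner(board)
    if w == 'X':
        return 1
    if w == 'O':
        return -1
    if '.' not in board:
        return 0  # board full, no winner: draw
    values = []
    for i, cell in enumerate(board):
        if cell == '.':
            nxt = board[:i] + player + board[i + 1:]
            values.append(solve(nxt, 'O' if player == 'X' else 'X'))
    # X maximizes the value, O minimizes it.
    return max(values) if player == 'X' else min(values)

# Perfect play from the empty board is a draw:
print(solve('.' * 9, 'X'))  # 0
```

The memoization (`lru_cache`) keeps this fast for tic-tac-toe, but no amount of caching rescues the approach for chess: the number of distinct positions itself is astronomically large.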
But the seeds of obsession were planted. How does the human mind do it? What am I missing? There must be an answer, because humans do it. This is the “AGI problem” – AGI standing for “artificial general intelligence” – the elusive AI system that can do anything, not just model a solution to some specific traditionally-cognitive task (which is what most of the “AI field” focuses on).
While I knew nobody had the answer (at least not that they’re revealing, otherwise we’d be living in a very different world), a trip to the bookstore seemed like a good place to start. And there I found David Deutsch’s recent book: The Beginning of Infinity: Explanations That Transform the World.
It’s a fascinating book, one of the most fascinating books I’ve ever read really, even though it doesn’t give me any of the answers I’m looking for (Deutsch obviously makes no claim to have solved the AGI problem). At the heart of it, Deutsch argues that it’s our human ability to create explanations that gives us the ability to think about all the things we do and make the sort of progress we do. Of course, we’re still left with the question: how do we create explanations? How can we program computers to do the same?
To quote Deutsch from this article, which is just as fascinating:
AGI cannot possibly be defined purely behaviourally. In the classic ‘brain in a vat’ thought experiment, the brain, when temporarily disconnected from its input and output channels, is thinking, feeling, creating explanations — it has all the cognitive attributes of an AGI. So the relevant attributes of an AGI program do not consist only of the relationships between its inputs and outputs.
The upshot is that, unlike any functionality that has ever been programmed to date, this one can be achieved neither by a specification nor a test of the outputs. What is needed is nothing less than a breakthrough in philosophy, a new epistemological theory that explains how brains create explanatory knowledge and hence defines, in principle, without ever running them as programs, which algorithms possess that functionality and which do not.
Without understanding that the functionality of an AGI is qualitatively different from that of any other kind of computer program, one is working in an entirely different field. If one works towards programs whose ‘thinking’ is constitutionally incapable of violating predetermined constraints, one is trying to engineer away the defining attribute of an intelligent being, of a person: namely creativity.
Clearing this logjam will not, by itself, provide the answer. Yet the answer, conceived in those terms, cannot be all that difficult. For yet another consequence of understanding that the target ability is qualitatively different is that, since humans have it and apes do not, the information for how to achieve it must be encoded in the relatively tiny number of differences between the DNA of humans and that of chimpanzees. So in one respect I can agree with the AGI-is-imminent camp: it is plausible that just a single idea stands between us and the breakthrough. But it will have to be one of the best ideas ever.
So I’m in search of one of the best ideas ever.