Strong AI

AI generated movie trailer fails to impress

IBM’s Watson supercomputer AI has created a trailer for an AI horror film! Oh my! How interesting! How ironic! How impressive! IBM is full of geniuses! Let’s watch!

Erm… ok…

Alas, I am not at all impressed with the result. This trailer tells me hardly anything about the story. I fear we’ll have to wait until AIs actually “understand” language and story (or at least analyze these elements a bit more closely) before they can create trailers that resonate with humans. Who are the characters? What’s the main conflict of the story? What’s the spiritual (inner) conflict? What’s the hook? Etc. Trailers are not just a collection of tone shifts. What stupid investors are investing in IBM based on this sort of nonsense? (And how can I get some of their money myself?)

Anyway, what we end up with is not so much a “movie trailer created by AI,” as though “AI” were some generic, mysterious black box. Rather, it’s a movie trailer created in some algorithmic fashion that a human (or group of humans) designed. Which, of course, is what all “AI-generated” products amount to: human-created algorithms that mimic and automate processes we may not necessarily understand.

And therein lies the true goal of “AI research”. The point is not to create a robot that can do everything a human can do but remains just as mysterious as a human brain. The point is to understand what intelligence actually is in the first place. And when we understand that, we may find we don’t need or care about sophisticated human-like robots anyway. And any sort of creepy fear that comes from wondering about the possibilities of rogue robots or the nature of digital consciousness is the result of human idiocy, spiritually and logically. Spiritually in that consciousness is not merely an emergent property of matter (we are not just meat robots). Logically in that if we could design a robot capable of “going rogue” then we can just as easily design it to not “go rogue” in the first place.

“What if the AIs kill us?!” It’s already not that hard to make a machine that can kill you; why is a robot doing it somehow scarier? I suppose because you don’t understand where the “impulse” to kill is coming from. And anyway, if we’re smart enough to create robots that can actually decide to kill in some humanly way, then we’d naturally understand where that decision comes from in the first place and could prevent it (or override the capacity to decide not to kill, if we’re making an evil robot army, I guess).

(Of course some AI research is perfectly happy to stay within the bounds of mimicking and automating thought processes, as these algorithms can have useful applications, such as handwriting recognition software or my own forays into algorithmic music generation, which is ultimately music theory research.)

And let us not soon forget the actual screenplay written by an artificial neural network:

And the Oscar goes to…


In Search of Strong AI

While trying to work on my novel, my mind sometimes turns to mush and I can’t think creatively, at least not in the way that novel-writing calls for. So I began a journal with which to chronicle my thoughts and explorations as I search for Strong AI. I would love to live to see Strong AI achieved; who wouldn’t?

My short-term goal, however, is to create a computer program that can teach itself to play chess (or any rule-based game) in such a way that we can observe the rules that it learns. As far as I know, no one has achieved this. Chess engines focus on number-crunching algorithms, using the computer’s ability to calculate quickly to its advantage rather than trying to simulate how a human would learn things. But if we can figure out how a human learns the game, I think the algorithms involved would be far more useful in advancing human knowledge than number-crunching algorithms created specifically for the game. I want an algorithm that creates algorithms.
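Just to make the idea concrete, here’s a minimal sketch in Python of what I mean by “observable” learned rules, using tic-tac-toe instead of chess for brevity. Everything in it is made up for illustration, and note the cheat: the candidate rules are hand-written, whereas a real learner would have to invent its own candidates, which is precisely the hard part.

```python
import random

EMPTY, X, O = ".", "X", "O"

def random_position(moves_played):
    """Return a random tic-tac-toe board with `moves_played` marks on it."""
    board = [EMPTY] * 9
    for i, cell in enumerate(random.sample(range(9), moves_played)):
        board[cell] = X if i % 2 == 0 else O
    return board

# Hand-written candidate rules: human-readable predicates over (board, square).
# A real learner would have to invent these itself; that's the hard part.
CANDIDATES = {
    "target square must be empty":      lambda b, sq: b[sq] == EMPTY,
    "target square must be a corner":   lambda b, sq: sq in (0, 2, 6, 8),
    "target square must be the center": lambda b, sq: sq == 4,
}

surviving = dict(CANDIDATES)

# Watch 200 random legal moves; discard any candidate rule that an
# observed legal move violates. Whatever survives is consistent with all play.
for _ in range(200):
    board = random_position(random.randrange(8))
    move = random.choice([sq for sq in range(9) if board[sq] == EMPTY])
    for name, rule in list(surviving.items()):
        if not rule(board, move):
            del surviving[name]

print("Rules consistent with everything observed:")
for name in surviving:
    print(" -", name)
# Typically prints only "target square must be empty" -- and since the
# learner never sees illegal moves, it can only learn necessary
# conditions, not the full rule set.
```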

Anyway, I have written up my explorations so far in my new little journal. You can download a PDF of the journal here. It’s a bit too long and clunky to post as a blog entry. I hope that as I continue to explore the subject, I will write and upload more journal entries.

Not sure anybody else out there is interested in the subject, but I’ll put it out there in case anyone is curious. Join me, and together we will rule the world.

InSearchOfStrongAI-Part01.pdf


In search of the best idea ever

Last month, or the month before (it’s all a bit of a blur), I started programming what I thought could be a general-purpose AI engine. And it works! It can find any pattern that is computational, and thus solve any computationally defined problem. But it’s unfortunately completely inefficient for most interesting tasks. If it wanted to learn to play chess, it would try to solve the entire game. While mathematically possible, that would take far too long to compute and far too much memory to be of any use, what with combinatorial explosions and all. And I don’t even know how to define a creative task, such as drawing or storytelling, in any computationally useful way. So I really didn’t achieve much.
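For the curious, the flavor of the engine was roughly this (a toy Python reconstruction, not my actual code): enumerate candidate programs in order of size until one reproduces the observed examples. In principle it finds any pattern expressible in its little language; in practice the candidate count explodes.

```python
from itertools import product

# A tiny expression language over a single integer input x.
ATOMS = ["x", "1", "2"]
OPS = ["+", "-", "*"]

def expressions(size):
    """Yield every expression built from exactly `size` binary operators."""
    if size == 0:
        yield from ATOMS
        return
    for left_size in range(size):
        right_size = size - 1 - left_size
        for left, op, right in product(expressions(left_size), OPS,
                                       expressions(right_size)):
            yield f"({left} {op} {right})"

def find_pattern(examples, max_size=4):
    """Return the smallest expression consistent with all (input, output) pairs."""
    for size in range(max_size + 1):
        for expr in expressions(size):
            # eval is fine for a toy; we only eval expressions we generated.
            if all(eval(expr, {"x": x}) == y for x, y in examples):
                return expr
    return None

# Recovers f(x) = x*x + 1 from a handful of observations...
print(find_pattern([(0, 1), (1, 2), (2, 5), (3, 10)]))
# ...but the number of candidates explodes: 3 expressions at size 0,
# 27 at size 1, 486 at size 2, ~11,000 at size 3, ~276,000 at size 4.
# Chess-sized patterns are hopelessly out of reach.
```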

But the seeds of obsession were planted. How does the human mind do it? What am I missing? There must be an answer, because humans do it. This is the “AGI problem” – AGI standing for “artificial general intelligence” – the elusive AI system that can do anything, not just model a solution to some specific traditionally-cognitive task (which is what most of the “AI field” focuses on).

While I knew nobody had the answer (at least not that they’re revealing, otherwise we’d be living in a very different world), a trip to the bookstore seemed like a good place to start. And there I found David Deutsch’s recent book: The Beginning of Infinity: Explanations That Transform the World.

It’s a fascinating book, one of the most fascinating books I’ve ever read really, even though it doesn’t give me any of the answers I’m looking for (Deutsch obviously makes no claim to have solved the AGI problem). At the heart of it, Deutsch argues that it’s our human ability to create explanations that gives us the ability to think about all the things we do and make the sort of progress we do. Of course, we’re still left with the question: how do we create explanations? How can we program computers to do the same?

To quote Deutsch from this article, which is also fascinating:

AGI cannot possibly be defined purely behaviourally. In the classic ‘brain in a vat’ thought experiment, the brain, when temporarily disconnected from its input and output channels, is thinking, feeling, creating explanations — it has all the cognitive attributes of an AGI. So the relevant attributes of an AGI program do not consist only of the relationships between its inputs and outputs.

The upshot is that, unlike any functionality that has ever been programmed to date, this one can be achieved neither by a specification nor a test of the outputs. What is needed is nothing less than a breakthrough in philosophy, a new epistemological theory that explains how brains create explanatory knowledge and hence defines, in principle, without ever running them as programs, which algorithms possess that functionality and which do not.

Without understanding that the functionality of an AGI is qualitatively different from that of any other kind of computer program, one is working in an entirely different field. If one works towards programs whose ‘thinking’ is constitutionally incapable of violating predetermined constraints, one is trying to engineer away the defining attribute of an intelligent being, of a person: namely creativity.

Clearing this logjam will not, by itself, provide the answer. Yet the answer, conceived in those terms, cannot be all that difficult. For yet another consequence of understanding that the target ability is qualitatively different is that, since humans have it and apes do not, the information for how to achieve it must be encoded in the relatively tiny number of differences between the DNA of humans and that of chimpanzees. So in one respect I can agree with the AGI-is-imminent camp: it is plausible that just a single idea stands between us and the breakthrough. But it will have to be one of the best ideas ever.

So I’m in search of one of the best ideas ever.


Computer plays Jeopardy, but does not know how to love

According to this article on Engadget:

So, in February IBM’s Watson will be in an official Jeopardy tournament-style competition with titans of trivia Ken Jennings and Brad Rutter. That competition will be taped starting tomorrow, but hopefully we’ll get to know if a computer really can take down the greatest Jeopardy players of all time in “real time” as the show airs. It will be a historic event on par with Deep Blue vs. Garry Kasparov, and we’ll absolutely be glued to our seats. Today IBM and Jeopardy offered a quick teaser of that match, with the three contestants knocking out three categories at lightning speed. Not a single question was answered wrongly, and at the end of the match Watson, who answers questions with a cold computer voice, telegraphing his certainty with simple color changes on his “avatar,” was ahead with $4,400, Ken had $3,400, and Brad had $1,200.

This is kind of interesting because what makes a computer good at Jeopardy is the opposite of what makes a human good at Jeopardy.

A computer can easily store vast amounts of data, but cannot so easily process human language.

A human can easily understand language, but we can’t easily store vast amounts of data. After all, the entire point of Jeopardy is not understanding the question, but knowing data that most humans don’t use in everyday life.

So I think the real achievement here is in language processing — being able to output a specific answer based really only on an incoming string of letters (or maybe sound waves).
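Just to illustrate the shape of that problem (a string of letters in, one specific answer out), here’s a toy sketch in Python. Watson’s actual DeepQA pipeline is of course vastly more sophisticated than this; the fact base and scoring below are made up purely for the example.

```python
# A made-up miniature fact base: answer -> description.
FACTS = {
    "Mount Everest": "the tallest mountain on Earth, located in the Himalayas",
    "Neil Armstrong": "the first person to walk on the Moon, in 1969",
    "The Great Gatsby": "a 1925 novel by F. Scott Fitzgerald set on Long Island",
}

def tokenize(text):
    """Crude tokenizer: lowercase, strip basic punctuation, split on spaces."""
    return set(text.lower().replace(",", " ").replace(".", " ").split())

def answer(clue):
    """Pick the fact whose description shares the most words with the clue."""
    clue_words = tokenize(clue)
    return max(FACTS, key=lambda name: len(clue_words & tokenize(FACTS[name])))

print(answer("This 1925 novel set on Long Island was written by Fitzgerald"))
# -> The Great Gatsby
```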

It’s easy to understand how such an achievement could be useful: imagine being able to type a question into Google and getting a direct answer (or at least a direct guess) instead of just a bunch of webpages that make you search for the answer yourself. Even though searching for the answer yourself doesn’t always take that much time, getting a direct answer would be much more convenient. Or imagine being able to speak a question into your phone or your car’s dashboard while driving, when you can’t browse the web without risking death, and having it speak back a direct answer. Imagine being able to cheat easily while you’re playing trivia games with your friends who are judging your intelligence and value as a friend based on how many useless random things you know.

While this would be nice technology for us to have, it still doesn’t have much power to create, does it? When will we have computers that can formulate their own sentences? That can write metaphors? That can write entire books? I guess we’re still far away from that…

Anyway, if the computer wins, I say it should take over Alex Trebek’s job. I mean, what does he get paid for anyway? He just stands there and reads stuff. Computers can already do that. And besides, he still has his life insurance spokesperson job to fall back on.
