AI-generated movie trailer fails to impress

IBM’s Watson supercomputer AI has created a trailer for an AI horror film! Oh my! How interesting! How ironic! How impressive! IBM is full of geniuses! Let’s watch!

Erm… ok…

Alas, I am not at all impressed with the result. This trailer tells me hardly anything about the story. I fear we’ll have to wait until AIs actually “understand” language and story (or at least analyze these elements a bit more closely) before they can create trailers that resonate with humans. Who are the characters? What’s the main conflict of the story? What’s the spiritual (inner) conflict? What’s the hook? Etc. Trailers are not just a collection of tone shifts. What stupid investors are investing in IBM based on this sort of nonsense? (And how can I get some of their money myself?)

Anyway, what we end up with is not so much a “movie trailer created by AI” (as though “AI” were some generic, mysterious black box) as a movie trailer created in some algorithmic fashion that a human (or group of humans) designed. Which, of course, is what all “AI-generated” products amount to: human-created algorithms that mimic and automate processes we may not necessarily understand.
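To make that concrete, here’s a deliberately crude sketch of what such a human-designed pipeline might look like. This is pure guesswork about the general shape of the thing, not IBM’s actual system; every feature, weight, and number below is invented for illustration.

```python
# Toy sketch of a human-designed "AI trailer" pipeline (pure guesswork,
# not IBM's actual system): score each scene on a few hand-picked
# "scariness" features, then stitch the top scorers together.
from dataclasses import dataclass

@dataclass
class Scene:
    start: float           # seconds into the film
    end: float
    loudness: float        # 0..1, from hypothetical audio analysis
    darkness: float        # 0..1, from hypothetical frame analysis
    scream_detected: bool  # from hypothetical sound classification

def tension(scene: Scene) -> float:
    # Weights chosen by a human, not conjured by a mysterious black box.
    return (0.5 * scene.loudness
            + 0.3 * scene.darkness
            + (0.2 if scene.scream_detected else 0.0))

def pick_trailer_moments(scenes: list[Scene], n: int = 5) -> list[Scene]:
    # "Tone shifts" fall out of sorting by a score, not out of
    # understanding characters, conflict, or hook.
    best = sorted(scenes, key=tension, reverse=True)[:n]
    return sorted(best, key=lambda s: s.start)  # restore film order

film = [
    Scene(12.0, 18.0, 0.2, 0.90, False),
    Scene(301.5, 310.0, 0.9, 0.80, True),
    Scene(725.0, 731.0, 0.4, 0.30, False),
    Scene(1540.0, 1550.0, 0.8, 0.95, True),
]
for s in pick_trailer_moments(film, n=2):
    print(f"cut: {s.start:.1f}s to {s.end:.1f}s (tension {tension(s):.2f})")
```

Every decision that matters here was made by a person, which is exactly the point.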

And therein lies the true goal of “AI research”. The point is not to create a robot that can do everything a human can do yet remains just as mysterious as a human brain. The point is to understand what intelligence actually is in the first place. And once we understand that, we may find we don’t need or care about sophisticated human-like robots anyway. And any creepy fear that comes from wondering about the possibilities of rogue robots or the nature of digital consciousness is the result of human idiocy, spiritually and logically. Spiritually, in that consciousness is not merely an emergent property of matter (we are not just meat robots). Logically, in that if we can design a robot capable of “going rogue”, then we can just as easily design it not to “go rogue” in the first place.

“What if the AIs kill us?!” It’s already not that hard to make a machine that can kill you; why is a robot doing it somehow scarier? I suppose because you don’t understand where the “impulse” to kill is coming from. And anyway, if we’re smart enough to create robots that can actually decide to kill in some humanly way, then we’d naturally understand where that decision comes from in the first place and could prevent it (or override the capacity to decide not to kill, if we’re making an evil robot army, I guess).

(Of course some AI research is perfectly happy to stay within the bounds of mimicking and automating thought processes, as these algorithms can have useful applications, such as handwriting recognition software or my own forays into algorithmic music generation, which is ultimately music theory research.)
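(For the curious, here is the simplest possible flavor of that kind of thing: a constrained random walk over a scale. A generic toy, not my actual system, but it shows how “algorithmic” and “musical” can coexist at the most basic level.)

```python
# Toy "algorithmic music": a random walk over the C major scale,
# biased toward stepwise motion. A generic illustration only.
import random

C_MAJOR = ["C", "D", "E", "F", "G", "A", "B"]

def melody(length: int = 16, seed: int = 42) -> list[str]:
    rng = random.Random(seed)
    i = 0  # start on the tonic
    notes = []
    for _ in range(length):
        notes.append(C_MAJOR[i % 7])
        i += rng.choice([-2, -1, -1, 1, 1, 2])  # mostly steps, some leaps
    return notes

print(" ".join(melody()))
```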

And let us not soon forget the actual screenplay written by an artificial neural network:

And the Oscar goes to…

Computer plays Jeopardy, but does not know how to love

According to this article on Engadget:

So, in February IBM’s Watson will be in an official Jeopardy tournament-style competition with titans of trivia Ken Jennings and Brad Rutter. That competition will be taped starting tomorrow, but hopefully we’ll get to know if a computer really can take down the greatest Jeopardy players of all time in “real time” as the show airs. It will be a historic event on par with Deep Blue vs. Garry Kasparov, and we’ll absolutely be glued to our seats. Today IBM and Jeopardy offered a quick teaser of that match, with the three contestants knocking out three categories at lightning speed. Not a single question was answered wrongly, and at the end of the match Watson, who answers questions with a cold computer voice, telegraphing his certainty with simple color changes on his “avatar,” was ahead with $4,400, Ken had $3,400, and Brad had $1,200.

This is kind of interesting because what makes a computer good at Jeopardy is the opposite of what makes a human good at Jeopardy.

A computer can easily store vast amounts of data, but cannot so easily process human language.

A human can easily understand language, but we can’t easily store vast amounts of data. After all, the entire point of Jeopardy is not understanding the question, but knowing data that most humans don’t use in everyday life.

So I think the real achievement here is in language processing: being able to output a specific answer based on little more than an incoming string of letters (or maybe sound waves).
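To see why that’s the interesting part, here’s a toy sketch. It bears no resemblance to Watson’s actual architecture; it just shows that the “data storage” half of the problem is a trivial dictionary, while even a crude stand-in for the “language” half (keyword overlap, here) is the part that breaks the moment a question is rephrased.

```python
# Toy question answerer (nothing like Watson's real architecture).
# Storing facts is the easy half; matching ordinary-language questions
# to those facts is the hard half, faked here with keyword overlap.
FACTS = {
    ("capital", "france"): "Paris",
    ("author", "moby", "dick"): "Herman Melville",
    ("chess", "computer", "beat", "kasparov"): "Deep Blue",
}

def answer(question: str) -> str:
    words = set(question.lower().replace("?", "").split())
    best_score, best_answer = 0, "I don't know"
    for keywords, fact in FACTS.items():
        score = len(words & set(keywords))  # crude "language processing"
        if score > best_score:
            best_score, best_answer = score, fact
    return best_answer

print(answer("What is the capital of France?"))          # Paris
print(answer("Which computer beat Kasparov at chess?"))  # Deep Blue
print(answer("Who penned Moby Dick?"))                   # Herman Melville
```

The dictionary could hold millions of facts without breaking a sweat; it’s the question-matching half that Watson apparently does far better than this.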

It’s easy to understand how such an achievement could be useful: imagine being able to type a question into Google and getting a direct answer (or at least a direct guess) instead of just a bunch of webpages that make you search for the answer yourself. Even though searching for the answer yourself doesn’t always take that much time, getting a direct answer would be much more convenient. Or imagine being able to speak a question into your phone or your car’s dashboard while driving, when you can’t browse the web without risking death, and having it speak back a direct answer. Imagine being able to cheat easily while you’re playing trivia games with your friends who are judging your intelligence and value as a friend based on how many useless random things you know.

While this would be nice technology for us to have, it still doesn’t have the power to create much, does it? When will we have computers that can formulate their own sentences? That can write metaphors? That can write entire books? I guess we’re still far away from that…

Anyway, if the computer wins, I say it should take over Alex Trebek’s job. I mean, what does he get paid for anyway? He just stands there and reads stuff. Computers can already do that. And besides, he still has his life insurance spokesperson job to fall back on.