AI-generated movie trailer fails to impress

IBM’s Watson supercomputer AI has created a trailer for an AI horror film! Oh my! How interesting! How ironic! How impressive! IBM is full of geniuses! Let’s watch!

Erm… ok…

Alas, I am not at all impressed with the result. This trailer tells me hardly anything about the story. I fear we’ll have to wait until AIs actually “understand” language and story (or at least analyze these elements a bit more closely) before they can create trailers that resonate with humans. Who are the characters? What’s the main conflict of the story? What’s the spiritual (inner) conflict? What’s the hook? Etc. Trailers are not just a collection of tone shifts. What stupid investors are investing in IBM based on this sort of nonsense? (And how can I get some of their money myself?)

Anyway, what we end up with is not so much a “movie trailer created by AI” (as though “AI” were some generic, mysterious black box) as a movie trailer created in some algorithmic fashion that a human (or group of humans) designed. Which, of course, is what all “AI-generated” products amount to: human-created algorithms that mimic and automate processes we may not necessarily understand.

And therein lies the true goal of “AI research”. The point is not to create a robot that can do everything a human can do yet remains just as mysterious as a human brain. The point is to understand what intelligence actually is in the first place. And when we understand that, we may find we don’t need or care about sophisticated human-like robots anyway. Any creepy fear that comes from wondering about the possibilities of rogue robots or the nature of digital consciousness is the result of human idiocy, both spiritual and logical. Spiritual in that consciousness is not merely an emergent property of matter (we are not just meat robots). Logical in that if we can design a robot capable of “going rogue”, then we can just as easily design it not to “go rogue” in the first place.

“What if the AIs kill us?!” It’s already not that hard to make a machine that can kill you; why is a robot doing it somehow scarier? I suppose because you don’t understand where the “impulse” to kill comes from. And anyway, if we’re smart enough to create robots that can actually decide to kill in some humanly way, then we’d naturally understand where that decision comes from in the first place and could prevent it (or override the capacity to decide not to kill, if we’re building an evil robot army, I guess).

(Of course some AI research is perfectly happy to stay within the bounds of mimicking and automating thought processes, as these algorithms can have useful applications, such as handwriting recognition software or my own forays into algorithmic music generation, which is ultimately music theory research.)

And let us not soon forget the actual screenplay written by an artificial neural network:

And the Oscar goes to…

The universality of superior intelligence

I’ve heard it theorized that if we ever contact sentient intelligent aliens from other planets, we may have no way to relate to them because their methods of thinking will be too outlandish for us. They will think in fundamentally different ways.

Nonsense, I say! While there may be some variation in thought-processing speed, memory, and perception (being able to hear different sound frequencies, for instance, or having a stronger sense of smell, or perhaps being able to sense infrared light, though I’m not sure what good that would do), I theorize that the foundations of intelligence are like the laws of physics or mathematics: they are universal. Nature always homes in on the same principles.

In this way, I believe humans are the most intelligent possible beings in the universe. If something cannot be understood by a human, then it cannot be understood by any physical being at all. There may be aliens just as intelligent as humans, but there can be no aliens with “superior” intelligence (of the “I understand things you cannot even fathom!” sort, not the “I can do math in my head faster than you!” sort), because there are no alternative systems of logic just as valid as the one humans use; our system is based on immutable principles ingrained in the nature of nature itself. (I do not mean the systems of logic as defined by mathematical laws in a textbook; those systems are incomplete. We do not yet fully recognize the logic we use, yet we use it naturally. Its subtle simplicity and ease of use are what make it so hard to pin down, but we’re getting there.)

In Search of Strong AI

When I try to work on my novel, my mind sometimes turns to mush and I can’t think creatively, at least not in the way that novel-writing calls for. So I began a journal in which to chronicle my thoughts and explorations as I search for Strong AI. I would love to live to see Strong AI achieved; who wouldn’t?

My short-term goal, however, is to create a computer program that can teach itself to play chess (or any rule-based game) in such a way that we can observe the rules it learns. As far as I know, no one has achieved this. Chess engines focus on number-crunching algorithms, exploiting the computer’s ability to calculate quickly rather than trying to simulate how a human would learn the game. But if we can figure out how a human learns the game, I think the algorithms involved would do far more to advance human knowledge than number-crunching algorithms built specifically for chess. I want an algorithm that creates algorithms. A toy sketch of the flavor I’m after follows below.
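
To give a flavor of what I mean (just an illustrative toy, not what the journal describes): imagine handing the learner a vocabulary of candidate, human-readable rules and letting self-play weight them. Tic-tac-toe stands in for chess to keep things short, and the rule names and the crude reward scheme here are all invented for the example.

```python
import random

# Board positions 0-8; all eight winning lines of tic-tac-toe.
LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(b):
    for i, j, k in LINES:
        if b[i] != ' ' and b[i] == b[j] == b[k]:
            return b[i]
    return None

def completes_line(b, move, player):
    """True if playing `move` gives `player` three in a row."""
    b2 = b[:]
    b2[move] = player
    return winner(b2) == player

# Candidate rules the learner may discover to be useful: name -> predicate
# over (board, move, player). Hand-supplied for the toy; a real version
# would have to invent these itself.
RULES = {
    "win now":        lambda b, m, p: completes_line(b, m, p),
    "block opponent": lambda b, m, p: completes_line(b, m, 'O' if p == 'X' else 'X'),
    "take center":    lambda b, m, p: m == 4,
    "take a corner":  lambda b, m, p: m in (0, 2, 6, 8),
}
weights = {name: 0.0 for name in RULES}

def score(b, m, p):
    return sum(w for name, w in weights.items() if RULES[name](b, m, p))

def choose(b, p, explore=0.2):
    """Mostly pick the highest-scoring legal move; sometimes explore."""
    moves = [m for m in range(9) if b[m] == ' ']
    if random.random() < explore:
        return random.choice(moves)
    return max(moves, key=lambda m: score(b, m, p))

def play_one_game():
    """Self-play one game; return the (board, move, player) history and winner."""
    b, history, p = [' '] * 9, [], 'X'
    while True:
        m = choose(b, p)
        history.append((b[:], m, p))
        b[m] = p
        w = winner(b)
        if w or ' ' not in b:
            return history, w
        p = 'O' if p == 'X' else 'X'

for _ in range(5000):
    history, w = play_one_game()
    if w is None:
        continue  # draws teach nothing in this crude scheme
    for b, m, p in history:
        # Reward rules that fired on the winner's moves, punish the loser's.
        reward = 0.1 if p == w else -0.1
        for name, rule in RULES.items():
            if rule(b, m, p):
                weights[name] += reward

# Unlike an engine's search tree, the learned "knowledge" is readable:
for name, w in sorted(weights.items(), key=lambda kv: -kv[1]):
    print(f"{name:>15}: {w:+.1f}")
```

The interesting part is the output: a list of named rules with weights a human can actually read, rather than an opaque evaluation function. Of course, the toy cheats by being handed its candidate rules; a real version would have to invent them itself, which is precisely the “algorithm that creates algorithms” part, and the hard part.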

Anyway, I have written up my explorations so far in my new little journal. You can download a PDF of the journal here. It’s a bit too long and clunky to post as a blog entry. I hope that as I continue to explore the subject, I will write and upload more journal entries.

Not sure anybody else out there is interested in the subject, but I’ll put it out there in case anyone is curious. Join me, and together we will rule the world.

InSearchOfStrongAI-Part01.pdf