IBM’s Watson supercomputer AI has created a trailer for an AI horror film! Oh my! How interesting! How ironic! How impressive! IBM is full of geniuses! Let’s watch!

Erm… ok…

Alas, I am not at all impressed with the result. This trailer tells me hardly anything about the story. I fear we’ll have to wait until AIs actually “understand” language and story (or at least analyze these elements a bit more closely) before they can create trailers that resonate with humans. Who are the characters? What’s the main conflict of the story? What’s the spiritual (inner) conflict? What’s the hook? Etc. Trailers are not just a collection of tone shifts. What stupid investors are investing in IBM based on this sort of nonsense? (And how can I get some of their money myself?)

Anyway, what we end up with is not so much a “movie trailer created by AI” (as though “AI” were some generic, mysterious black box). Rather, it’s a movie trailer created in some algorithmic fashion that a human (or group of humans) designed. Which, of course, is what all “AI-generated” products amount to — human-created algorithms that mimic and automate processes we may not necessarily understand.

And therein lies the true goal of “AI research”. The point is not to create a robot that can do everything a human can do yet remains just as mysterious as a human brain. The point is to understand what intelligence actually is in the first place. And when we understand that, we may find we don’t need or care about sophisticated human-like robots anyway. And any sort of creepy fear that comes from wondering about the possibilities of rogue robots or the nature of digital consciousness is the result of human idiocy, spiritually and logically. Spiritually in that consciousness is not merely an emergent property of matter (we are not just meat robots). Logically in that if we can design a robot capable of “going rogue,” then we can just as easily design it not to “go rogue” in the first place.

“What if the AIs kill us?!” It’s already not that hard to make a machine that can kill you; why is a robot doing it somehow scarier? I suppose because you don’t understand where the “impulse” to kill is coming from. And anyway, if we’re smart enough to create robots that can actually decide to kill in some humanly way, then we’d naturally understand where that decision comes from in the first place and would prevent it (or, if we’re building an evil robot army, suppress the capacity to decide not to kill, I guess).

(Of course, some AI research is perfectly happy to stay within the bounds of mimicking and automating thought processes, as such algorithms can have useful applications: handwriting recognition software, for instance, or my own forays into algorithmic music generation, which is ultimately music theory research.)

And let us not soon forget Sunspring, the actual screenplay written by an artificial neural network:

And the Oscar goes to…


4 Comments

LanthonyS · September 2, 2016 at 10:55 AM

Good thoughts; I quite agree. That scene at 1:00 in the trailer could have been pretty good trailer material, but the timing was wrong somehow …

I just watched Sunspring twice. So great. Much humour. Why should we know what anyone is talking about when half the time they’re asserting that they don’t? Brilliant last monologue, though.

S P Hannifin · September 3, 2016 at 12:51 AM

I thought a few of the scenes in the trailer could’ve been interesting had they been presented in a more meaningful context. It’s kind of annoying, because I quite like the idea of creating an algorithm to make movie trailers. It’s a fun idea. But I’m not convinced the guys behind this particular exercise put as much effort into it as they could have. For instance, perhaps it might’ve been interesting if the computer had analyzed the screenplay for the characters who get the most screen time, and/or had analyzed the dialog for clues about the film’s themes. Instead, it looks like they settled on this bizarre “Salient Sentiment Change Detection” and on “analyzing a scene visually to determine if it was scary”? What the heck? The visuals seem a very superficial way to determine whether or not a scene is scary… unless the director said something like, “All the scary scenes are red!” It seems a wasted opportunity.
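Even the screen-time idea is nearly trivial to prototype. Here’s a rough Python sketch of my own (nothing to do with whatever IBM actually did), assuming a plain-text screenplay in the standard format where a character’s name appears as an ALL-CAPS cue line before each block of dialogue; the filename is made up:

```python
import re
from collections import Counter

def rank_characters(screenplay_text):
    """Rank characters by number of dialogue cues,
    a rough proxy for screen time."""
    counts = Counter()
    for line in screenplay_text.splitlines():
        stripped = line.strip()
        # Character cues are conventionally short ALL-CAPS lines,
        # unlike scene headings ("INT. LAB - NIGHT"), which we skip,
        # or transitions ("CUT TO:"), which the colon rules out.
        if (stripped
                and not stripped.startswith(("INT.", "EXT."))
                and re.fullmatch(r"[A-Z][A-Z .'\-]{1,30}", stripped)):
            counts[stripped] += 1
    return counts.most_common()

# Hypothetical usage:
with open("screenplay.txt") as f:
    for name, cues in rank_characters(f.read())[:5]:
        print(name, cues)
```

Crude, sure, but it shows how little machinery the “who matters in this story?” question needs compared to whatever sentiment detection they cooked up.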

Sunspring, I think, is evidence that the singularity has been reached and the AI screenwriter is so far advanced beyond human intellect that we can barely begin to grasp the subtle genius with which each line of dialog is carefully constructed, much less have any practical use for it, so brilliant is its mind. The eye being coughed up by the guy at 2:10 is clearly symbolic of the AI’s birth, a sign that the AI knows what’s going on…

LanthonyS · September 3, 2016 at 1:13 PM

Yes re: Sunspring …

For the visual parts, I think the general use of colour and light in today’s cinematography actually does validate the tone shifts. So many “serious”/”gritty” scenes are washed-out, dark, etc., whereas happy scenes are vivid and bright…

S P Hannifin · September 4, 2016 at 6:24 AM

Regarding tone, I definitely agree that color plays a huge part, but I’m not sure I’d consider “scary” a “tone” in and of itself. Though I suppose I’d concede the point if the director purposefully constructed the scenes he thought “scary” in such a fashion that a computer could find them by such an analysis. (Although that would make the AI analysis rather pointless. That is, if you define “scary” for the computer by way of such measurable means, it’s hardly an “AI” task to then simply measure those things.) Anyway, I wouldn’t consider “scariness” a meaningful measurement for trailer inclusion (at least not in and of itself).
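To be concrete about “measurable means”: here’s a crude, purely illustrative Python sketch (mine, assuming OpenCV is installed as opencv-python) that labels sampled frames as dark/washed-out versus vivid/bright from mean brightness and saturation, roughly the heuristic you describe:

```python
import cv2  # opencv-python

def tone_profile(video_path, sample_every=24):
    """Label sampled frames as dark/washed-out or vivid/bright
    using mean brightness and saturation in HSV space."""
    cap = cv2.VideoCapture(video_path)
    profile = []
    i = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if i % sample_every == 0:  # roughly one frame per second at 24 fps
            hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
            sat = hsv[:, :, 1].mean() / 255.0  # 0 = grayscale, 1 = fully saturated
            val = hsv[:, :, 2].mean() / 255.0  # 0 = black, 1 = white
            label = "vivid/bright" if (sat + val) / 2 > 0.5 else "dark/washed-out"
            profile.append((i, round(val, 2), round(sat, 2), label))
        i += 1
    cap.release()
    return profile
```

Which is exactly my point: once you’ve reduced “tone” to two numbers like that, the “AI” part has left the building.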
