Interesting things

Jurassic Park as plotted by AI

Lately I’ve been fooling around with an AI text-adventure app, particularly its “Dragon” model, which is perhaps based on GPT-3 (though I’m not sure). While the app is biased towards generating second-person adventure-game text, I’ve found it fun to feed it plot summaries and let it generate a continuation. The results are nonsensical, illogical, and inconsistent, but funny.

When it comes to story writing, the app can be a lot of fun for generating random ideas, but it’s just about useless (so far as I can tell) for generating appropriately constrained ideas, which are far more important to story writing. Stories, after all, have to go somewhere: plots develop, characters develop, tensions rise and fall. With only random ideas, the story just kind of meanders around. Perhaps some of this pointless meandering can be tamed with proper prompting, but I have not yet found an effective strategy for it. Perhaps future apps will be better designed for the purpose, or perhaps it is just an inherent limitation of the AI model itself.

Below are the results of one of my experiments with having the app continue the plot of Jurassic Park. The first three paragraphs are copied from Wikipedia’s plot summary of the film; the rest is from the AI. You can see that it makes very little sense, and would make a pretty terrible movie, but I found it humorous.

Another experiment involved having it continue the plot of the film Clue, in which Mr. Boddy’s ghost terrorizes the guests and Mrs. Peacock kills Mr. Green with an icicle. In yet another, the AI continued a biography of Mozart by having him get married at age nine and grow up to play pianos in taverns and murder his cousins.

Here’s the Jurassic Park plot continuation (the first three paragraphs are the prompt):

Industrialist John Hammond has created a theme park of cloned dinosaurs, Jurassic Park, on Isla Nublar, a Costa Rican island. After a dinosaur handler is killed by a Velociraptor, the park’s investors, represented by lawyer Donald Gennaro, demand that experts visit the park and certify its safety. Gennaro invites mathematician and chaos theorist Ian Malcolm, while Hammond invites paleontologist Dr. Alan Grant and paleobotanist Dr. Ellie Sattler. Upon arrival, the group is shocked to see a live Brachiosaurus.

At the park’s visitor center, the group learns that the cloning was accomplished by extracting dinosaur DNA from mosquitoes preserved in amber. DNA from frogs was used to fill in gaps in the genome of the dinosaurs. To prevent breeding, all the dinosaurs were made female. Malcolm scoffs at the idea, saying that it will inevitably break down. The group witnesses the hatching of a baby Velociraptor and visits the raptor enclosure. During lunch, the group debates the ethics of cloning and the creation of the park, and Malcolm warns about the implications of genetic engineering.

The group is joined by Hammond’s grandchildren, Lex and Tim Murphy, for a tour of the park, while Hammond oversees the tour from the control room. The tour does not go as planned, with most of the dinosaurs failing to appear and the group encountering a sick Triceratops. It is cut short as a tropical storm approaches Isla Nublar. Most of the park employees leave for the mainland on a boat while the visitors return to their electric tour vehicles, except Sattler, who stays behind with the park’s veterinarian to study the Triceratops.

[AI’s continuation]

Meanwhile, Malcolm enters the control room to find that the computer system has been sabotaged. He hacks into the system and finds that Hammond is dead and his son Michael has taken over. As he attempts to escape, he runs afoul of an angry Tyrannosaurus rex who chases him through a tunnel leading out of the building. He returns to his car where he is attacked by two more T-Rexes.

Malcolm manages to drive away, but is pursued by a third T-Rex. A fourth one follows, which knocks him off the road and onto a rocky hillside. He calls out for help and is rescued by Ellie Sattler, who drives up alongside his car and opens fire on the pursuing Tyrannosaurs. She kills them both with her rifle before they can reach Malcolm again.

Back at the visitor center, Malcolm tells Ellie that she saved his life and that he owes her a debt of gratitude. She asks if he wants to come back to the island with her, but he declines. He says there are things here he needs to do first. Meanwhile, Hammond’s grandson Lex has returned from the mainland and reports what happened to his father. While he’s explaining the situation, Hammond notices through a window that one of the rival bioengineers, Ed Regis, is in his proximity.

Hammond rushes out and shoots Regis multiple times before being restrained. He declares that they are at war, before ordering the computer system to be destroyed with a flamethrower.

The film then cuts to the outside of the visitor center, where there is a scream of pain from Ian Malcolm as the building explodes.

The following morning, law enforcement arrives on the island. Gennaro briefs them on the situation. While searching for survivors, they discover that all of the dinosaurs have been killed. The footprints discovered near the visitor center belong to a henchman working for the island’s owner, Peter Ludlow. Ludlow had learned that Hammond was dead and believed that he could get away with claiming the island. He also realized that he was trapped there with several hostile dinosaurs.

The second coup happens in the hospital room where John Hammond lies dying. His granddaughter Kelly has taken over his role as head of InGen and has accelerated the timetable for building a new Jurassic Park in San Diego. However, a police sniper takes her out from a nearby building, mistaking her for Ludlow. The blame shifts to Ludlow. The actual culprit is Ed Regis, who storms the room.

After killing Ludlow with a gunshot to the face, he shouts out that he didn’t kill anyone and that Ian Malcolm was the murderer before escaping via helicopter. With InGen destroyed, Hammond dead, Ludlow dead and Regis on the run, a third shadowy group is suggested as being responsible for the attacks. The investigation suggests that they have no known motive or target.

The book ends with ‘DNA donkeys’, a note about the next entry in the series Tey cats and a reminder that there are fourteen new engravings made out to readers of this book.

By S P Hannifin

I still want AI that can help me write a novel

Just last month, OpenAI released a paper about their results with GPT-3, an AI language model which can be trained on many mountains of text and then generate its own text based on given prompts.

It looks quite impressive! Impressive too are some of GPT-3’s creative writing results from freelance writer Gwern Branwen. It still has plenty of weaknesses, in humor and logic for example, so it won’t be replacing novelists yet, but I’m particularly impressed with GPT-3’s continuation of a scene from a Harry Potter fanfiction. I wouldn’t copy and paste the results, but it looks like it would be great for generating story ideas, both in a novel’s overall plotting stage and at the actual scene-writing stage. I find the scene-writing stage the most tedious and mentally demanding (hence why I’ve procrastinated on it for a few years now); I would love a program that continually generated ideas for directions a scene could go, either by generating a possible continuation or by answering prompts with ideas, such as “How might this character respond to this situation?”

Other possibilities with GPT-3 (or future models) are equally exciting. I’d love to see GPT-3 or something like it applied to things like:

  • Dialog for non-player characters in video games
  • Cohosting a podcast with me
  • Generating comments for this blog so it looks like I have more readers
  • Being an imaginary friend because I’m sad and lonely

One weakness of GPT-3 (and of most neural-network-based AI, for that matter) is that we may not be able to see directly how it generated its answers to prompts. That is, how do we know it’s not plagiarizing or stealing too many ideas from its training data? It may become a thorny issue for some uses.
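As a crude illustration of what checking for overt copying might look like, you could scan generated text for long word-for-word runs that also appear in the training data. This is just a sketch of my own imagining (the function name and toy strings are mine, not anything OpenAI provides):

```python
def longest_shared_run(generated: str, corpus: str) -> list[str]:
    """Longest run of consecutive words in `generated` that appears
    verbatim in `corpus` (a crude plagiarism signal)."""
    gen = generated.lower().split()
    padded_corpus = " " + " ".join(corpus.lower().split()) + " "
    best: list[str] = []
    for i in range(len(gen)):
        # Only bother extending runs longer than the current best;
        # if a shorter run starting here isn't in the corpus,
        # no longer run starting here can be either.
        for j in range(i + len(best) + 1, len(gen) + 1):
            phrase = " " + " ".join(gen[i:j]) + " "
            if phrase in padded_corpus:
                best = gen[i:j]
            else:
                break
    return best
```

A real check would need the actual training corpus and a smarter index, of course; this is just the idea in miniature.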

David Cope’s older algorithmic music-generating system, for example, had similar problems. It’s 20-something years old now, I believe, but here’s a computer-generated piece in the style of Mozart:

Sounds great, but if you’re familiar with Mozart, it’s actually not that impressive; too much Mozart has been too directly copied; it’s just not “creative” enough. A patron of Mozart’s would likely be dismayed: “This is just a rehash of this and that symphony; I want something in your style, but more fresh!”

I doubt GPT-3 always copies from its training data that overtly, but the possibility could still be a problem.

The other big problem, from my perspective at least, is cost. GPT-3 requires more computing power than I can afford to pay for. OpenAI will probably target enterprise users as their first customers, not poor novelists.

There will probably be other options, though. For example, there is the recently launched InferKit, which I believe is based on GPT-2. Maybe I’ll experiment with that, as the pricing seems fair enough, but my previous creative-fiction results with GPT-2 weren’t great, especially when it would have characters from other copyrighted novels, like Gandalf, pop into scenes. I probably just need to home in on some good methods for idea-prompting.

Anyway, the future of AI continues to fascinate and excite me!

By S P Hannifin

My Solution to the Collatz Conjecture

As promised, here’s my attempted solution to the Collatz Conjecture. My solution is pretty simple, so if you understand the conjecture, you should understand the proof. (I’m not a pro mathematician anyway, just an amateur hobbyist.) I’m eager to get feedback, especially if I somehow missed something subtle (or worse, something really stupid).

PDF of my proof: click here.

If you prefer to watch a video instead, I’ve uploaded myself explaining my solution here:

Here’s to hoping my proof is confirmed!

By S P Hannifin

The Collatz Conjecture

I’ve been tinkering with the Collatz Conjecture on and off for a couple years; it’s madly addicting, patterns within patterns within patterns, and yet strange and puzzling disorder seems to lurk around every corner.
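For anyone unfamiliar, the conjecture says that if you repeatedly halve even numbers and send each odd number n to 3n + 1, you always eventually reach 1, no matter where you start. A minimal sketch of the iteration (my own toy function, nothing to do with the proof itself):

```python
def collatz_steps(n: int) -> list[int]:
    # Trajectory of n under the Collatz map, stopping at 1.
    assert n >= 1
    seq = [n]
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        seq.append(n)
    return seq
```

Starting from 6, for example, the trajectory is 6, 3, 10, 5, 16, 8, 4, 2, 1. The fun (and the madness) is that trajectories like this bounce around unpredictably before collapsing.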

I have an attempted proof which I’ll type up and post hopefully this week along with a video, unless I find some glaring mistake while doing so. And then I can get back to programming.

By S P Hannifin

Common story arcs as identified by AI

According to this article:

researchers from the University of Vermont and the University of Adelaide determined the core emotional trajectories of stories by taking advantage of advances in computing power and natural language processing to analyze the emotional arcs of 1,737 fictional works in English available in the online library Project Gutenberg.

The paper itself can be found online. They discovered six emotional arcs (which also just happen to exhaust all possible alternating binary arcs… in other words, they didn’t really “discover” anything, haha):

1. Rags to Riches (rise)
2. Riches to Rags (fall)
3. Man in a Hole (fall then rise)
4. Icarus (rise then fall)
5. Cinderella (rise then fall then rise)
6. Oedipus (fall then rise then fall)
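To see why these six exhaust the possibilities: an arc of one, two, or three alternating rise/fall segments can start on either a rise or a fall, giving two arcs per length. A quick sketch of the enumeration (my own toy code):

```python
def alternating_arcs(max_segments: int = 3) -> list[list[str]]:
    # All alternating rise/fall sequences of 1..max_segments segments.
    flip = {"rise": "fall", "fall": "rise"}
    arcs = []
    for length in range(1, max_segments + 1):
        for start in ("rise", "fall"):
            arc = [start]
            while len(arc) < length:
                arc.append(flip[arc[-1]])
            arcs.append(arc)
    return arcs
```

Running this yields exactly the six arcs in the list above, from Rags to Riches (`["rise"]`) through Oedipus (`["fall", "rise", "fall"]`).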

I’m not sure their results are all that helpful; any experienced storyteller understands this stuff naturally. It is somewhat interesting to see it correspond so strongly to a story’s word usage, though.
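The word-usage correspondence comes from something like a sliding-window sentiment score: average a per-word happiness rating over a moving window of the text and plot the result. A hedged sketch with a made-up two-word lexicon (the actual study used a large crowd-rated lexicon, not this):

```python
def sentiment_arc(words: list[str],
                  lexicon: dict[str, float],
                  window: int) -> list[float]:
    # Average lexicon score over each sliding window of `window` words;
    # words missing from the lexicon count as neutral (0.0).
    scores = [lexicon.get(w.lower(), 0.0) for w in words]
    return [sum(scores[i:i + window]) / window
            for i in range(len(scores) - window + 1)]
```

Smooth that curve over a whole novel and you get the rise-and-fall shapes the researchers classified.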

I was also interested in their little plot of the emotional arcs in Harry Potter and the Deathly Hallows, which can also be found in this article from The Atlantic. If you check it out, you’ll notice that the second act conforms pretty perfectly to Blake Snyder’s Save the Cat story beats. The first act mirrors this, in terms of there being three main peaks, or three pairs of falls and rises. I’ve started calling these “the three trials”, and most stories tend to conform to this. After the story’s catalyst (or including the story’s catalyst), the story goes through three falls and rises before reaching the “false high” of the midpoint. Many times, a rise will cause a fall in the B story. That is, the plot lines tend to alternate naturally with the direction of the emotional arc (though not only at these points, mind you). For example, the hero might, say, punch a bully (rise in plot line A), only to discover his girlfriend wants to break up with him (fall in plot line B).

The “three trials” may be subtle, such as the thematic arguing in the first half of Jurassic Park. (Though if you’re going to make them as subtle as they are in Jurassic Park, the theme better be as interesting as resurrecting dinosaurs. And the characters should actually argue their sides as adamantly as John Hammond and Ian Malcolm; they can’t just stand there and wonder.) I’d identify the three trials of Jurassic Park as:

1. “Life finds a way” – After the thrill (rise) of seeing their first dinosaurs, Ian Malcolm argues the whole thing is bound to end in disaster (fall)
2. “Dinosaurs on your dinosaur tour?” – The guests are excited to start their tour (rise) but fail to actually see any dinos (fall)
3. “Nedry’s betrayal” – The guests are happy to gather around a sickly dino (rise) but as a looming storm forces the tour to be cancelled, Nedry begins his plan of betrayal (fall)

The escape of the t-rex then serves as the midpoint of the film.

OK, that was a tangent, but it’s a good plotting exercise to identify the “three trials” of a story’s first act; I have found it helps a lot in plotting. The arcs of stories that are more “episodic” may not be connected so much, whereas in tighter stories, each rise causes the following fall, and each fall leads to or makes possible the following rise.

(On a side note, it would be interesting to see how film music conforms to these emotional arcs.)

The Atlantic article goes on to mention:

Eventually, he says, this research could help scientists train machines to reverse-engineer what they learn about story trajectory to generate their own compelling original works.

OK, good luck with that. I think emotional-arc mapping should be the least of your concerns if you’re striving for computer-generated stories.

The article writer from the No Film School article, on the other hand, goes on to write:

But I sincerely doubt a computer or AI that we train to write stories will ever be able to find joy, no matter how much emotional value we assign to its database of words.

But, uh…. who cares if the computer can “find joy”? Your role as an audience member, as a consumer of a product, does not necessarily need to include making some emotional connection with the author, as that can only ever be imagined in your own head to begin with. This is similar to the morons who experience an uneasiness listening to computer generated music, as though all this time they were imagining the beauty of music came not from something eternal in nature, but was rather infused into the music by the author’s brain, as though the author created the beauty rather than merely discovered it in the realms of infinite possibility. Does that distinction make sense?

I doubt anyone needs to be concerned about AI storytelling anytime soon, though, as we still don’t quite understand our human ability to use language. We’re much closer to programming a Mozart Symphony Generator (we’re only a fraction of an inch away from that, if not already there). The problem with language programming is that a lot of AI researchers try to “cheat”; rather than searching for a deeper understanding of how humans use language, they try to turn it into a simple numbers game, like gathering statistics on word associations. That may be useful for autocomplete functions, but it won’t help much with the creation of a serious story, or even a serious paragraph. Words have meanings, and you can’t simply take those meanings for granted, as if they’ll just take care of themselves if you map out word associations enough. We may need to figure out a way to represent those meanings without having to create a bunch of “experiences” for a computer to associate them with, if that’s possible. I have no idea. (And if I did, I would keep it a secret so that I could use it in a grand conspiracy to take over the world, which would fail, but would be turned into a great Hollywood film.)

Another interesting website to fool around with is whatismymovie?, an attempt at creating an AI to help you find an interesting movie. It sometimes comes up with some strange results, but it’s fun to play around with.

By S P Hannifin

Mr. Conductor

I recently watched the 2002 Russian film Russian Ark, a 90-minute film done entirely in one take. The premise was a bit bland; there’s not really a story in the traditional sense, it’s more like a romanticized time-traveling tour through a grand museum. (The Hermitage Museum in Saint Petersburg; it looks like an awesome place to visit, even just for the beautiful grand palace architecture.) Anyway, near the end there’s a dance scene with an orchestra playing in a huge ballroom, and the conductor looked really familiar. I was sure I had seen him in college.

In my college days, George Mason University offered students free concert tickets (if there were any left), which I took advantage of whenever I could. Usually free student tickets get placed in the way way back of the balcony, but one time I was seated in the front row, so close I could have reached out and grabbed the conductor’s ankle. OK, maybe not that close, since I was a bit off to the side, but it was pretty close; I could definitely read the string musicians’ scores from my seat. I had hardly any view of most of the orchestra, but I was up close and personal with the front-row musicians, looking right up at the right side of the conductor. And the conductor was hard to forget. He was really into the music, which was a bit distracting for someone in the front row, because I could hear him grunt throughout the music, and even see the sparkle of small bits of spittle that would fly out when things got particularly impassioned.

So I’m watching Russian Ark, and the conductor looked familiar, and I thought: “That’s the guy! Isn’t it?” So I had to look through my old programs to confirm that, yes, it was the famous Russian conductor Valery Gergiev. You can see that he has a memorable face, and he does indeed grunt and make noises while conducting. Below, for example (at 45:45). Fun stuff.

By S P Hannifin

Slash as a conjunction word

Here’s an interesting article about the word “slash” becoming a new modern conjunction word, as when people say the word to mean what its corresponding symbol means in writing, as in: “I think I’m going to watch TV slash take a nap.”

I have used the term myself, though not often, and I would never spell out the word in writing, such as in a blog/article.  (See?)  And when I say it, I prefer to physically slash the air with two fingers for gesticulatory emphasis.

Of course, we can quickly infinite loop the definition of “slash” by defining it as “and slash or” meaning “and and slash or or” meaning “and and and slash or or or” ad infinitum.
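That infinite loop is literally just a recursive rewrite rule. A silly little sketch of the expansion (entirely my own):

```python
def expand_slash(depth: int) -> str:
    # Rewrite "slash" as "and slash or", `depth` times.
    if depth == 0:
        return "slash"
    return f"and {expand_slash(depth - 1)} or"
```

One expansion gives “and slash or”, two give “and and slash or or”, and so on, ad infinitum.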

Anyway, it’s interesting to see how language evolves like this.  I’m always annoyed when people say “that’s not a word” as if only some select group of humanity has the ability to decide what is and isn’t a word.  There’s a fine argument to be made that just making up a word or changing a word’s definition without anyone’s consent will only hurt your chances of being understood when you try to communicate, but if the meaning is clear by the word’s context and the origins of the word’s roots, language can be completely gruptious.

By S P Hannifin

Arthur C. Clarke on the future…

A few things:

– I didn’t realize he had that sort of accent; I imagined something more Britishish
– I like how future cities always seem to be taken over by what that time period considered “modern” architecture. I can’t imagine our sense of style changing that rapidly over too short a period of time. But of course I only say that in retrospect…
– In some sense he’s right that communication (the Internet) has transformed business and economics, but so far not nearly as much as he predicted. We still have to commute for work, for example.
– I guess someone predicting the future and giving no dates can never really be wrong.

By S P Hannifin

Sounds good…

I didn’t really learn anything from this (because, you know, I’m just so smart), but I thought this was a great primer on how sound works, and how it relates to music. I think it just goes a bit too fast. Slow down!

She says at about 10:55:

And we’re still pretty far from developing technology that can listen to lots of sound and separate it out into things anywhere near as well as our ears and brains can.

I wouldn’t be so sure of that…

By S P Hannifin