Just last month, OpenAI released a paper about their results with GPT-3, an AI language model that can be trained on mountains of text and then generate its own text from a given prompt.
It looks quite impressive! Impressive too are some of GPT-3’s creative writing results from freelance writer Gwern Branwen. It still has plenty of weaknesses, in humor and logic for example, so it won’t be replacing novelists yet, but I’m particularly impressed with GPT-3’s continuation of a scene from a Harry Potter fanfiction. I wouldn’t copy and paste the results, but it looks like it would be great for generating story ideas, both at a novel’s overall plotting stage and at the actual scene-writing stage. I find scene-writing the most tedious and mentally demanding stage (hence why I’ve procrastinated on it for a few years now); I would love a program that continually generated ideas for directions a scene could go, either by generating a possible continuation or by answering idea prompts such as “How might this character respond to this situation?”
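Here’s a minimal sketch of what that idea-generating loop might look like with OpenAI’s Python client, assuming one actually had API access (the prompt, engine name, and parameter values here are my own guesses for illustration, not anything from the paper):

```python
import openai

openai.api_key = "YOUR_API_KEY"  # hypothetical key; access is invite-only right now

scene = "Harry gripped his wand tighter as the corridor ahead of him went dark."

# Ask for several short continuations to use as brainstorming fodder.
# You could instead append a question like "How might Hermione respond here?"
# to steer it toward answering idea prompts rather than continuing prose.
response = openai.Completion.create(
    engine="davinci",    # GPT-3's largest engine
    prompt=scene,
    max_tokens=80,       # keep each idea short
    temperature=0.9,     # higher values give more varied (and riskier) ideas
    n=5,                 # five candidate directions per call
)

for i, choice in enumerate(response.choices, 1):
    print(f"Idea {i}: {choice.text.strip()}\n")
```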
Other possibilities with GPT-3 (or future models) are equally exciting. I’d love to see GPT-3 or something like it applied to things like:
- Dialog for non-player characters in video games
- Cohosting a podcast with me
- Generating comments for this blog so it looks like I have more readers
- Being an imaginary friend because I’m sad and lonely
One weakness of GPT-3 (and of most neural-network-based AI, for that matter) is that we can’t directly see how it arrived at its answers to prompts. That is, how do we know it’s not plagiarizing or lifting too many ideas from its training data? That could become a thorny issue for some uses.
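There’s no tool from OpenAI for checking this as far as I know, but as a toy illustration of what a crude check could look like, you could flag suspiciously long word-for-word overlaps between generated text and a reference corpus:

```python
def ngrams(text, n):
    """Yield every run of n consecutive words in the text."""
    words = text.lower().split()
    for i in range(len(words) - n + 1):
        yield " ".join(words[i:i + n])

def verbatim_overlaps(generated, corpus, n=8):
    """Return word runs of length n that appear verbatim in both texts.

    Long shared runs (say, 8+ words) are a crude signal of direct
    copying rather than coincidence.
    """
    corpus_grams = set(ngrams(corpus, n))
    return sorted(g for g in set(ngrams(generated, n)) if g in corpus_grams)

# Toy example: flag any six-word run shared with the "training" text.
training_text = "It was the best of times, it was the worst of times"
model_output = "Honestly, it was the best of times, it was the worst of luck"
print(verbatim_overlaps(model_output, training_text, n=6))
```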
David Cope’s older algorithmic music generation system, Experiments in Musical Intelligence, had a similar problem. It’s 20-something years old, I believe, but here’s a computer-generated piece in the style of Mozart:

[embedded audio]
It sounds great, but if you’re familiar with Mozart, it’s actually not that impressive: too much Mozart has been copied too directly; it just isn’t “creative” enough. A patron of Mozart’s would likely have been dismayed: “This is just a rehash of this and that symphony; I want something in your style, but fresher!”
I doubt GPT-3 always copies from its training data that overtly, but the possibility could still be a problem.
The other big problem, from my perspective at least, is cost. GPT-3 requires more computing power than I can afford to pay for. OpenAI will probably target enterprise customers first, not poor novelists.
There will probably be other options, though. For example, there’s the recently launched InferKit, which I believe is based on GPT-2. Maybe I’ll experiment with that, since the pricing seems fair enough, but my previous creative fiction results with GPT-2 weren’t great, especially when it would have characters from other copyrighted novels, like Gandalf, pop into scenes. I probably just need to home in on some good methods for idea-prompting.
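For what it’s worth, GPT-2 itself can also be run locally for free; here’s a minimal sketch using the Hugging Face transformers library (just my own setup for experimenting, nothing to do with how InferKit works internally):

```python
from transformers import pipeline

# The smallest GPT-2 model is free and runs on an ordinary laptop CPU.
generator = pipeline("text-generation", model="gpt2")

scene = "The detective turned the photograph over and froze."

# Sample a few varied continuations to use as scene-direction ideas.
for out in generator(scene, max_length=60, num_return_sequences=3, do_sample=True):
    print(out["generated_text"])
    print("---")
```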
Anyway, the future of AI continues to fascinate and excite me!