Hands-on with OpenAI’s famous GPT-2 deep fake text AI

Mark Farragher
5 min read · Mar 19, 2019

By now you’ve probably read about OpenAI’s famous GPT-2 neural network which can produce utterly convincing ‘deep fake’ articles, stories, and reviews on virtually any subject.

I’ve always wanted to get my hands on an AI that can produce realistic text, so I couldn’t wait to take GPT-2 for a spin and run a few tests of my own.

There are four versions of GPT-2, ranging from 117 million to 345 million to 762 million to 1.5 billion parameters.

The full version that OpenAI is keeping under wraps is a monster of a neural network with 1.5 billion trained parameters and 48 layers.

What they’ve released is the smallest model with 117 million parameters and 12 neural network layers.

This small model is still very good, getting state-of-the-art results on the LAMBADA, CBT-CN, CBT-NE, and WikiText2 benchmarks.

So let’s download the AI and have some fun!

GPT-2 runs on Python 3, so the first thing I need to do is install that on my Mac using Homebrew:

$ brew install python3
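A quick sanity check confirms the interpreter is on the path (the exact version number depends on what Homebrew ships at the time):

$ python3 --version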

Next up is downloading the GPT-2 code from GitHub: https://github.com/openai/gpt-2
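The quickest way is to clone the repository and step into it (assuming git is installed):

$ git clone https://github.com/openai/gpt-2.git
$ cd gpt-2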

And installing the package dependencies:

$ pip3 install fire
$ pip3 install regex
$ pip3 install requests
$ pip3 install tqdm
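One gotcha: the repo’s README at the time also expected TensorFlow 1.x (it pinned version 1.12), and pip doesn’t pull that in automatically. If it isn’t already on your machine, something like this should do:

$ pip3 install tensorflow==1.12.0
$ pip3 install tensorflow-gpu==1.12.0   # instead, if you have a CUDA-capable GPU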

The source includes a handy Python script to download the trained model:

$ python3 ./download_model.py 117M

This will create a new models/117M folder and download the trained network weights and configuration into it.
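If the download worked, the new folder should contain the model checkpoint plus the tokenizer files, roughly along these lines:

$ ls models/117M
checkpoint
encoder.json
hparams.json
model.ckpt.data-00000-of-00001
model.ckpt.index
model.ckpt.meta
vocab.bpe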

And that’s it, we’re ready to run!

The conditional sampling script is the most fun. It generates a text response for any prompt you give it:

$ python3 ./src/interactive_conditional_samples.py
Model prompt >>>
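The script exposes its Python arguments as command-line flags (it’s built on the fire package), so as far as I can tell you can tweak the sample length, temperature, and top-k filtering without touching the code. The README suggests top-k sampling with k=40 for more coherent output:

$ python3 ./src/interactive_conditional_samples.py --top_k 40 --length 200   # sample from the 40 most likely tokens, generate up to 200 tokens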

I don’t know about you, but the first thing I always want to ask an AI is how it feels about the antagonistic Skynet system in the Terminator movies.

So let’s get that out of the way first:

Me: “I stumbled upon a reference from James Cameron stating that Skynet has felt guilty for 30 years with regards to the near extinction of human life. He even suggests that Skynet has used…
