Training GPT-2 on a bunch of text is super easy. The trickiest part was writing code to wrangle GPT-2's output into something usable. That's what my "generation" code does. In that sense it isn't really generating anything itself; it's normalizing GPT-2's output and feeding input back into GPT-2.
First it generates a few lines to start off with, keeps the good ones, and throws out the rest. Then it feeds a few of those lines back in and has GPT-2 carry on from there. It repeats these steps until all 64 lines are filled, converts them to an image, and saves the result.
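Here's a minimal sketch of that loop, assuming a fine-tuned checkpoint loadable with Hugging Face `transformers`. The `looks_valid` filter, the `lines_to_image` conversion, the 64-character line width, and the checkpoint path are all stand-ins for whatever the real code does, not the actual implementation.

```python
# Sketch of the seed -> filter -> continue loop described above.
# Hypothetical helpers and paths are marked as such in the comments.
import re
from PIL import Image
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

TARGET_LINES = 64   # finished result is 64 lines
LINE_WIDTH = 64     # assumed fixed line width for the image conversion

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("path/to/finetuned-gpt2")  # hypothetical checkpoint


def generate_continuation(prompt: str, max_new_tokens: int = 200) -> list[str]:
    """Feed the prompt to GPT-2 and return only the newly generated lines."""
    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(
        **inputs,
        do_sample=True,
        top_k=40,
        max_new_tokens=max_new_tokens,
        pad_token_id=tokenizer.eos_token_id,
    )
    # Decode only the tokens that come after the prompt.
    new_tokens = output[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True).splitlines()


def looks_valid(line: str) -> bool:
    """Placeholder filter: keep lines that match the expected format."""
    return len(line) == LINE_WIDTH and re.fullmatch(r"[.#]+", line) is not None


def lines_to_image(lines: list[str]) -> Image.Image:
    """Placeholder conversion: one character per pixel, '#' drawn as black."""
    img = Image.new("1", (LINE_WIDTH, TARGET_LINES), color=1)
    for y, line in enumerate(lines):
        for x, ch in enumerate(line):
            if ch == "#":
                img.putpixel((x, y), 0)
    return img


kept: list[str] = []
while len(kept) < TARGET_LINES:
    # Start from scratch the first time; afterwards, seed with the last few good lines.
    prompt = "\n".join(kept[-4:]) + "\n" if kept else tokenizer.bos_token
    for line in generate_continuation(prompt):
        if looks_valid(line):
            kept.append(line)
        if len(kept) == TARGET_LINES:
            break

lines_to_image(kept).save("result.png")
```

The filtering step is what makes the whole thing work: GPT-2 happily produces malformed lines, and throwing those away before feeding the survivors back in keeps the output from drifting off-format.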