When I started developing Mr Football, I knew I had a number of skills that would be useful moving into a new industry. Coding for the web is not that dissimilar to coding in Phaser, as both lean heavily on JavaScript these days. I am a journalist by training, so I knew I could produce prose of a good enough quality for any part of a game. I have learned Photoshop well enough to produce low-quality pixel graphics, but no more. High-resolution art, though… no, I have never been any good at producing original art.
My original plan for producing the 650 player portraits needed for Mr Football was to go with a low-pixel retro look – a look that has been forced on many people in my position with poor art skills – and photofit my way to something that looked new by manipulating existing images. This is not dissimilar to the way a lot of professionals produce modern music: it’s not as if anyone can sing a new note any more; they have all been discovered already. In this age of whatever they call what came after post-modernism, it’s a matter of rearranging and reimagining for most of us mere mortals, rather than letting divine inspiration flow through from the phlogiston.
To illustrate my process: I used studio portraits of existing footy players, which were (very helpfully) all facing the same way, with their bodies at the same angle, and lit much the same way. I would take three different pictures and pixelate them down to more workable sizes, like so:
To create a new player, I would take the hair and forehead of the first; the eyes, nose and ears of the second; and the mouth and chin (plus any facial hair) of the third. I would then repeat the process twice more, rotating which photo supplied which face parts, so each trio of portraits yielded three new faces. A bit of Photoshop touching up made sure facial colouring matched and things looked roughly symmetrical, plus blackening the jumpers to mask their original team. (This was a quick job for the purposes of this post; I would normally spend more time matching skin tones.)
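I did all of that cutting and pasting by hand in Photoshop, but for anyone curious about the mechanics, here is a rough sketch of how the same band-by-band compositing could be done with the browser’s Canvas API. It assumes the three source portraits are already loaded, aligned and the same size, and the band boundaries (roughly where the forehead ends and where the nose ends) are illustrative guesses rather than anything I actually measured.

```js
// Rough illustrative sketch only – the real compositing was done by hand in Photoshop.
// Assumes three aligned, same-sized portrait images have already been loaded.
function compositeFace(sources, bands = [0.35, 0.65]) {
  const { width, height } = sources[0];
  const canvas = document.createElement('canvas');
  canvas.width = width;
  canvas.height = height;
  const ctx = canvas.getContext('2d');

  // Band boundaries are guesses: hair/forehead, then eyes/nose/ears, then mouth/chin.
  const cuts = [0, Math.round(height * bands[0]), Math.round(height * bands[1]), height];

  sources.forEach((img, i) => {
    const top = cuts[i];
    const bandHeight = cuts[i + 1] - top;
    // Copy one horizontal band from each donor photo into the new portrait.
    ctx.drawImage(img, 0, top, width, bandHeight, 0, top, width, bandHeight);
  });

  return canvas; // canvas.toDataURL('image/png') to export
}

// Rotating the donor order gives three "new" faces from the same trio of photos, e.g.:
// const faceA = compositeFace([img1, img2, img3]);
// const faceB = compositeFace([img2, img3, img1]);
// const faceC = compositeFace([img3, img1, img2]);
```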
I repeated this process over 200 times over the past couple of years to create over 600 “original” player portraits. They were pixelated, which was kind of retro, but when you’re competing with the likes of FIFA Mobile, which has real players in pro-quality photographs… if I’m being completely honest, they look a bit naff on modern high-resolution phones. Maybe okay for an indie game where expectations are lower, but I would have preferred to have the budget to pay artists to create original assets.
Enter Midjourney. I looked at what Midjourney could produce and it was a no-brainer to convert my photofit portraits into AI-generated art. Here is what happens when I feed the above three images into Midjourney with the starting prompt of Australian-rules-footballer, wearing-black-sports-jersey:
Okay, so it has some trouble with bald dudes, and it seems to be drawing its inspiration mostly from US college basketball players wearing black singlets. Also, it doesn’t really understand the stitching on a Sherrin football. That’s okay. Some of the backgrounds it produces are harder to delete than others using the magic wand in Photoshop, but nothing I can’t handle. The end results are certainly leaps and bounds ahead of what I had, and are more than good enough as art assets for a mobile game.
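For what it’s worth, a really crude stand-in for that clean-up step can be written against the Canvas API: blank out any pixel whose colour is close to a colour sampled from a corner. It is nowhere near as clever as Photoshop’s magic wand, and the tolerance value below is just a guess, but it gives an idea of why flat backgrounds are easy and busy ones are fiddly.

```js
// Crude stand-in for the magic-wand clean-up: make any pixel whose colour is close
// to the top-left corner colour fully transparent. The tolerance of 40 is only a guess.
function knockOutBackground(canvas, tolerance = 40) {
  const ctx = canvas.getContext('2d');
  const image = ctx.getImageData(0, 0, canvas.width, canvas.height);
  const d = image.data; // RGBA bytes
  const [br, bg, bb] = [d[0], d[1], d[2]]; // sample the top-left corner colour

  for (let i = 0; i < d.length; i += 4) {
    const dist =
      Math.abs(d[i] - br) + Math.abs(d[i + 1] - bg) + Math.abs(d[i + 2] - bb);
    if (dist < tolerance) d[i + 3] = 0; // zero the alpha channel
  }
  ctx.putImageData(image, 0, 0);
}
```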
The key here is the ability to iterate at lightning speed. If Midjourney spat out four images I didn’t like, I could just put exactly the same parameters in again, and it would use a new random seed to generate four new attempts. Early in the process I tended to see the same two or three face types pop up too often, as Midjourney seems to have a limited idea of what an Australian sportsman looks like – probably because its training data doesn’t include a lot of them. I experimented with adding prompt phrases naming different countries, and had a lot of success. Then, after that got samey as well, I started adding phrases describing hairstyles or facial features – reinforcing to the algorithm things that were already there in the pixelated image – which changed things up again nicely.
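Midjourney itself is driven through Discord, so there is no script behind any of this – but the variation routine I settled into amounts to something like the little prompt builder below. The phrase lists are just illustrative examples of the sort of thing I added, not an exhaustive recipe.

```js
// Illustrative only: builds the text I would paste into Midjourney's /imagine box.
// The base prompt is the one from this post; the phrase lists are example values.
const basePrompt = 'Australian-rules-footballer, wearing-black-sports-jersey';
const nationalities = ['Irish', 'Samoan', 'Italian', 'Sudanese'];
const features = ['curly-red-hair', 'shaved-head', 'heavy-beard', 'broken-nose'];

function buildPrompt(imageUrls, batchIndex) {
  // Cycle through the extra phrases so consecutive batches stop looking samey.
  const nationality = nationalities[batchIndex % nationalities.length];
  const feature = features[batchIndex % features.length];
  // Image prompts (URLs) go in front of the text prompt.
  return `${imageUrls.join(' ')} ${basePrompt}, ${nationality}, ${feature}`;
}

// e.g. buildPrompt(['https://example.com/photofit-01.png'], 3)
```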
Though I started off the Midjourney process bemoaning the hundreds of hours of work I had put in to produce the “original” pixelated composite photofit portraits, I hadn’t realised that all that hard work was going to pay off: what I had been doing is what is called “photobashing”. This is a jargon phrase in the art world used to describe techniques for “merging” and “blending” existing images to create something new. The portmanteau term arose well before the advent of AI tools like Midjourney, as post-modern artists explored the possibilities of rearrangement and recasting as a form of original expression, but it is now perfectly suited to the Frankensteinian process of compositing images and then feeding them into an AI program so that it can advance the process as far as the operator can imagine… within seconds.
Perhaps there is a debate here about copyright or originality of thought. That is for another day. For now, my art asset creation problems have been solved.