Making EPIC

See examples of the work at: http://www.usefulslug.com/2011/09/plankton-invasion/

Over the past few months I’ve had the pleasure of working on the rendering and effects side of Plankton Invasion, an effects-heavy animated TV production, and in doing so I created a very extensive system that takes shots from A to as near Z as possible without artist intervention. The system was dubbed EPIC (Extreme Plankton Invasion Code).
The system worked very well, and in this post I will talk a little about the experience of building it. Hopefully it can trigger some ideas for others, or serve as a help when you are thinking about your own systems.

First, a brief note about my experience before Plankton Invasion, so that you know where I am coming from. This way of working was quite new to me, which is why I would like to document it a bit, and my path to programming is part of that story.
I’m educated as a traditional illustrator, and taught myself 3d from the time I was about 16 years old, so that’s a good 13 years now. After college I taught myself how to rig better because I wanted to be an animator, and had realized that good rigs were important. I started getting work as a freelance generalist and character technical director. Already on the first TV pilot I worked on, I realized how useful scripting and programming were, so I decided to learn by reading the open source code for some expressions used in messiah:studio by Christopher Lutz. I didn’t know much about math, next to nothing about programming, and the source code to the first major plugin I made, ‘Walker’, is so hideously inefficient and inelegantly written that it is almost embarrassing. But the system worked, and ‘fake it til you make it’ was going to be my modus operandi from then on. Several plugins, scripts, rigs, procedural animation systems, short films, a small-scale feature, music videos and adverts later, my knowledge of what fast-paced 3d production needs had grown, and I was called in to work on Plankton Invasion, a 78 x 7 minute FX-heavy TV show being made by TeamTo and Grid VFX.

The obvious problem in producing TV animation is that if you want it to be any good, you need to do a mother-load (excuse my French) of work, and you don’t have the luxury of time. That’s just the reality of it, plain and simple, and if you can’t stand that heat, the kitchen is no place to be. Luckily TeamTo and Grid VFX are both very experienced facilities.
I started making EPIC at Grid, when I saw that we were repeating a lot of tasks as VFX artists. Generating UV passes for explosions, adding particles to feet as the characters were running in the sand, dust impacts, fire on matches, objects on fire, smoke plumes, smoke trails, water splashes, depth of field focus fixes (the characters are really small, so they are being filmed by tiny cameras!), TV screens, and a whole bunch of other effects needed to be done for the show (about 60-70 different types of effects and fixes were eventually automated and added to the system).
For season 1 (which we just finished), this was 6.5 minutes per episode (titles are about 30 sec) x 39 episodes = 253.5 minutes of animation to render. On average, about 50% of shots were effects shots (remember, not just explosions!), so that’s something like 126.75 minutes of effects shots.
In one year, we made roughly two hours of effects in season 1, chew on that while I adjust my hat.

Lesson One: Is it ready yet?
EPIC had to be made during the production, gradually expanding as new episodes came in with new effects to add. So it needed to be ready to use all the time. I chose to build it in Python, even though I had never coded in Python before, because I knew it was a flexible scripting language, and this system would have to be very dynamic. I built a system that would read, from a .csv file, the breakdown of which effects were tagged in each shot. Any shot could have multiple effects. The system would then basically do a big ‘switch/case’ for each effect (a bit of code that determines which function to call based on the name of the effect). Each effect had its own response, like generating certain render passes, or adding something to the compositing flow, and these were functions generalized into two code files: one for doing stuff in 3d, and one for doing stuff in compositing. I used text files as templates (I didn’t know about the template module at the time) and directly modified the ascii files for compositing and 3d with these.
The system was basic enough that it could be built in a hurry, and it could be used straight away, even if only for one effect at a time, but it was already saving time. Lesson One: Make it lean and mean but ready for use immediately; you can expand later.
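To give a rough idea of the shape of this, here is a minimal Python sketch of that CSV-driven dispatch. The effect names, CSV layout and function names here are hypothetical illustrations, not the actual EPIC code:

```python
import csv

# Hypothetical effect responses; in the real system these called into
# two files of generalized functions, one for 3d and one for compositing.
def smoke_trail(shot):
    print("adding smoke trail setup for", shot)

def dust_impact(shot):
    print("adding dust impact setup for", shot)

# The big 'switch/case': effect name -> response function.
EFFECT_HANDLERS = {
    "smoke_trail": smoke_trail,
    "dust_impact": dust_impact,
}

def process_breakdown(csv_path):
    # Assumes one shot per row: shot name first, then its effect tags.
    with open(csv_path) as f:
        for row in csv.reader(f):
            shot, effects = row[0], row[1:]
            for effect in effects:
                handler = EFFECT_HANDLERS.get(effect)
                if handler:
                    handler(shot)
```

(The ‘template module’ mentioned above presumably refers to Python’s built-in string.Template, which handles this kind of text-file substitution out of the box.)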

Lesson Two: Can I do that in a more general way?
Treating 3d and compositing files directly in ascii format using text replacement, regular expressions and templates is not always straightforward. What each effect needed to be able to do varied a lot. Some effects needed to place particle emitters, some required a bit of math to do rotations or other calculations inside the 3d scenes, some needed to adjust parenting, and a whole bunch of different render settings needed to be tweaked for each type of pass.

For the first effects I just took the whole block of text that made up the effect in comp or 3d, pasted it into the right place in the ascii file, and tested the file in the 3d/compositing programs to make sure I hadn’t screwed anything up. That wasn’t going to be enough when things got complicated. At first it was just one or two effects that needed something special changed in the file, so I made some functions that set all the things at the same time. Bad idea, mostly. Soon enough I needed to combine some settings from one effect with some settings from another, and set some third setting in a different way. I needed to start doing things more generally.

Gradually I created all the functions that were needed, like setting lighting options, adding particle emitters, parenting stuff and so on. Now the responses in the effect code themselves were easier to make, as they just needed to call whichever functions they required. I still kept some functions ‘big’, like doing everything required to create a matte pass for an object (checking a bunch of options, looking for names and setting output paths) as a single function, because it was effective to be able to just call ‘createMattePass(path, objectNames)’.
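As a sketch of the idea, here is one way the two layers could fit together. The ‘Key value’ ascii format and all the names here are assumptions for illustration, not the real scene format:

```python
import re

# Hypothetical ascii scene format: one 'Key value' setting per line.
# The general helpers each change exactly one thing via text replacement.
def set_option(scene_text, key, value):
    # Keys are assumed to be plain identifiers with no regex metacharacters.
    return re.sub(rf"(?m)^{key}\b.*$", f"{key} {value}", scene_text)

def append_block(scene_text, block):
    return scene_text + "\n" + block + "\n"

# A 'straight to end product' function: everything a matte pass needs,
# composed from the general helpers above, callable in one line.
def create_matte_pass(scene_text, path, object_names):
    for name in object_names:
        scene_text = set_option(scene_text, "Matte_" + name, "on")
    scene_text = set_option(scene_text, "Shading", "off")
    return append_block(scene_text, "OutputPath " + path)
```

The effect responses then read like recipes: mostly calls to small general helpers, with the occasional big convenience function where one call per end product pays off.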
Lesson Two: Make functions general if you can, but also allow ‘straight to end product’ functions.

Lesson Three: Can a computer make that decision?
When working with visual effects, it is a bit weird to suggest that a script can do all the work that the artists are doing. What the artists do requires visual decision making and subjective judgment of what looks good in a shot. The fact of the matter is, the script can’t think like an artist can. Here’s how a typical workflow goes: the artist does task A, then task B, then task C … Z
A – B – C – D – E – F – G – H – I – J – K – L – M – N – O – P – Q – R – S – T – U – V – W – X – Y – Z
Then the technical director goes, ‘hmm, you just pushed all the same buttons in a sequence there, let me make that into one button for you’.
Now the workflow is like this:
A – C – E – G – I – K – M – O – Q – S – U – W – Y – Z
That’s how it stays, because the artist has to make a decision at each of those points.
But this is inefficient, so what is the answer?
Actually, a lot of the decisions that need to be made don’t need to happen at that particular time. A lot of them can be made much earlier. During the breakdown for the effects, a person can decide things like roughly how big a dust cloud should be for this shot, which explosion will fit the shot, and many similar visual decisions, and the system can simply allow those decisions to be made then. Now the workflow for the artist is like this:
A – U – W – Y – Z
What a time saver! By making as many of the decisions as possible up front, we save a huge amount of work. The artist will still have to make decisions, and then treat the final composite, maybe even tweak the 3d shot, but we’ve moved most of the decisions to a few central points. How far we can get towards Z differs for each thing we have to do, but our system should go as far as possible. To the artist, it should feel like they have an assistant in the system, not that it is ‘taking their jobs’.
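As a sketch of how those up-front decisions might travel with the breakdown (the field names, values and handler here are hypothetical, not the actual EPIC data):

```python
# Hypothetical breakdown entry: the visual decisions are recorded up
# front, when the effects are broken down, not at comp time.
entry = {
    "shot": "ep12_sh040",
    "effect": "dust_impact",
    "size": 1.5,            # roughly how big the dust cloud should be
    "variant": "impact_B",  # which stock element fits the shot
}

def dust_impact(shot, size=1.0, variant="impact_A"):
    # Steps A..T run here unattended; the artist only picks the shot
    # up near the end, to judge and tweak the result.
    print(f"setting up {variant} at scale {size} for {shot}")

dust_impact(entry["shot"], entry["size"], entry["variant"])
```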
Lesson Three: Design A to near Z systems. Let the artist make decisions early.
