Alan Warburton’s Spherical Harmonics is a new project co-commissioned with Animate Projects for The Photographers’ Gallery media wall. Developed entirely using computer-generated imagery (CGI), the film features an off-the-shelf female ‘model’ in an entirely digital set, which acts as a showroom for a sequence of surreal and mysterious events. Ahead of his talk at the Gallery on 13th March, our digital curator, Katrina Sluis, interviews Alan Warburton to find out more about commercial visual effects, and how software is increasingly called upon to mimic the massive complexity of photographic ‘reality’.
Katrina Sluis: Your approach to the commission has been to explore the technical and cultural ‘coding’ of CGI. What is it that fascinates you about the form?
Alan Warburton: So many things! I’m interested in how it looks when it does what it’s supposed to do – when it’s shiny and perfect and sleek and realistic. Clients selling products love this look. None of the cars or shampoo bottles or trainers you see in ads are real; they are all perfect ideas of products that CGI has made. Conversely, I’m also interested in how it fails – when it’s too perfect or when it goes wrong.
I’ve fallen for CGI the same way lots of traditional photographers have fallen for the formal characteristics of the camera and film processing – like multiple exposure or flash or the graininess of a certain film. For me, CGI is filled with formal qualities that define what it is. For example, when you light a scene in a 3D animation program, you get the choice of a few primitive geometric light ‘emitters’. Spherical lights show up in the scene as primitive glowing orbs. There are no light bulbs in CGI! And these glowing orbs can be hidden but still emit light, making them invisible light sources. That’s just not something that exists outside CGI. Lights without a source! Copies without an original! Effect without cause!
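The ‘light without a source’ idea Warburton describes can be sketched in a few lines. The snippet below is purely illustrative – it is not the API of Maya or any real renderer – and models a point emitter as nothing but a position and a power value, with simple inverse-square falloff. Because the emitter has no geometry, it illuminates the scene without ever appearing in the image:

```python
import math

def point_light_intensity(light_pos, surface_pos, power=100.0):
    """Inverse-square falloff from an invisible point emitter.

    The light exists only as a position and a power value: there is
    no bulb geometry to render, so it never shows up in the frame.
    """
    dx = surface_pos[0] - light_pos[0]
    dy = surface_pos[1] - light_pos[1]
    dz = surface_pos[2] - light_pos[2]
    dist_sq = dx * dx + dy * dy + dz * dz
    # Power spread over the surface of a sphere of radius `dist`.
    return power / (4 * math.pi * dist_sq)

# A surface point two units from the emitter receives a quarter of
# the intensity of a point one unit away: effect without cause.
near = point_light_intensity((0, 0, 0), (1, 0, 0))
far = point_light_intensity((0, 0, 0), (2, 0, 0))
```

In a production package the same idea appears as a ‘hidden’ flag on the light object: the solver still samples the emitter, the camera simply never draws it.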
KS: Whilst there has been a 20-year discussion around the Photoshopping of the images which scroll past us every day (most recently the debate around Lena Dunham’s Vogue cover), most people are still unaware that the slick images of the advertising world are entirely synthetic – even the most banal images, as you point out. In a panel last year at the Gallery, Rainer Usselmann explained the impact CGI is having on commercial photography – and noted how photographers with the specialist skills to light and shoot cars are a dying breed. Previously, an ad campaign might have involved sending a photographer to the Namib desert and shipping over a freshly manufactured car from Europe, at eye-watering expense. Today, a desert image might be bought from a stock HDRI (High Dynamic Range Imaging) library instead, and a CG model of the car (which has not even been made yet) might be sent over from the car manufacturer to create the ‘photograph’. Rainer spoke about how photography is no longer versatile enough for the needs of commercial image production – a CG composition has become a flexible engine for the production of multiple photographs, visualisations and videos from a single source file.
AW: First off, your point about Lena Dunham’s Vogue cover is only the tip of the iceberg. Think of the recent controversy over the press photographer struck off AP’s register for doctoring an image of a Syrian rebel, or the Reutersgate scandal in 2006. Or even the viral video created by Canadian animation students a while back that faked footage of a golden eagle swooping on a child in a public park. If you look at all these examples together, you see that CGI is not just affecting body politics and global politics… in the case of the golden eagle, it’s our sense of wonder. The currency of the photographic image as reportage depends on its veracity and that is now totally unstable. It’s almost like the more shocking or beautiful or meaningful an image is, the more we will come to distrust it. That’s the most important thing about CGI: it likes to insert itself into places where it can co-opt the willingness to believe, which makes it a political tool of huge potential power. My concern is that it will devalue truth to the point that nothing seems real, especially the special. On the other hand, I’m reminded of a trip I took to work on a big CGI project in Beijing, where someone told me that even though the local markets were flooded with knock-off Burberry bags, people still made expensive trips to Hong Kong where they could spend thousands of dollars on the real thing. In some cases indistinguishable from a fake, the authentic retained a power that no counterfeit could touch. So maybe CGI counterfeiting is raising the value of the real?
Your second point is more to do with the commercial qualities of CGI, namely its flexibility. What you suggest is true – CGI does have the edge on traditional product photography, but I think it’s dangerous to assume that the rise of CGI is solely due to how easy and flexible it is to produce multiple versions of something. Sure, it’s much easier to ship a 50MB digital file online than ship the real car to the Namib desert, but one of the misconceptions about CGI is that the only reason it’s used over real photography is that it’s cheaper and simpler to iterate. The real difference is that the Namib desert is harsh and unfriendly and dusty and you might have to live with the shots you get when you’re out there. Whereas the post-production studio can reshoot, refine, relight, rerender to a point way past what was acceptable for a photographer. There’s less acceptance of messy, chaotic reality. Even CGI dust clouds are finely tuned with stray particles excised by hand. My point is, more control means more work and more skill, not less. And often, CGI produces weird, empty-feeling images where nothing accidental ever happens. This feeds into our ideas of aesthetics, creating a set of platonic ideals. An index of phenomena.
Alan Warburton, Assets (ongoing series)
This index is evolving, too. An experienced CG artist can spot stock assets in films and TV title sequences and ads. That sunset. That flickering flame. That smoke trail was made in FumeFX, that confetti comes from Maya, those titles were rendered in Cinema4D. It’s a hothouse of evolution where good assets are reused and then our ideas of what smoke should look like, or how a building explodes, or how a car should reflect a sunset, or how cartoon characters walk become fixed – idealised. Good models survive and are reused and refined, creating this kind of archetypal library of 3D assets and techniques, free of their original context. I’ve explored this in my Assets series, which documents various stock 3D assets that exist in readymade form online. They are divorced from their original contexts and somehow exist purely as dematerialised ideals, though you can never quite wipe them clean from their origins.
KS: In Spherical Harmonics, you use a number of stock ‘digital readymades’, including the ambiguous character ‘Maya’. Can you tell us a little about her role – and the process of constructing the film itself?
AW: Maya is like the eye of the storm in Spherical Harmonics, an anchor for the action. She’s somewhere between a protagonist, a muse and a prop. She’s almost like a placeholder for a memory – an understudy for a real memory. I don’t know if she’s based on a real person, but her skin textures and characteristics suggest that the artist who modeled her had a lot of real-world reference. I selected carefully when casting my model – Maya was one of only a handful of models online that weren’t ultra-busty caricatures.
Working images from Alan Warburton’s Instagram Takeover for @thephotographersgallery
I sourced quite a few stock 3D assets online – in the same way a filmmaker or photographer might source props. Once you download these assets, you usually find that they are poorly constructed or have been saved in the wrong file format, that they don’t deform well or are provided in disorganised pieces like a box of flat-pack odds and ends. So there was a lot of custom work getting the props render-ready. I spent about a week sourcing, cleaning and modeling assets, a week roughing out a structure for the piece, a week rigging, skinning, lighting, texturing, simulating and animating, and then two weeks preparing the 20 or so separate scenes for rendering at various render farms. Each frame took between 5 minutes and 2 hours to render, depending on what was in the scene. The final step was to assemble all the renders together in a 2D compositing program. In total, about 6 weeks of work produced about 200,000 frames and 1TB of images from about 100 computers rendering over 3 weeks. If one computer had rendered everything, it would have taken about 18 months.
The project was seriously intense. Animated films usually take months or years to complete, whether that’s hand drawn animation or CGI. Making something like Spherical Harmonics in six weeks meant I had to plan everything (including spontaneous decisions!) and babysit the computer to make sure that what it was calculating was correct. I’d often leave the software rendering overnight and wake up every couple of hours to check it hadn’t crashed. I don’t think people quite realize that CG artists put so much labour and commitment into their work. The tolerance for error is so low and the likelihood of error is so high that CG artists can go for months without seeing their families and friends. The reason VFX artists are striking and unionizing all over the world isn’t because they don’t love doing the work, it’s because their work is chronically undervalued. Maybe it’s because valuing something that is supposed to be invisible is so difficult? There’s this idea that good CGI should disappear into the photographic image seamlessly… I think sometimes the complications of labour disappear with it.
KS: This question of digital labour is explored by Adam Brown, in an essay which accompanies the commission. He asks how it might be possible to introduce resistance into the frictionless world of CGI where ‘dirt is there because someone put it there’ and ‘flocks of birds forget to shit like real birds’.
AW: I really enjoyed Adam’s perspective on the piece. Introducing dirt and imperfection into the perfect world of Euclidean geometry can be the most intensive process in CGI. I try and hold back on that urge wherever possible.
Installation View of ‘Spherical Harmonics’ on The Photographers’ Gallery media wall
KS: One of the challenges of the commission involved working with the specificity of The Wall at The Photographers’ Gallery – a video wall that is both seductive and Orwellian, located in the heart of London’s postproduction scene and a stone’s throw from the ubiquitous retail screens of Oxford Street. How did this inform the project?
AW: From the start, I was aware of the specific challenges of The Wall. There’s nowhere else quite like it. I knew that I had to aim for something that entered into a dialogue with photography whilst also acknowledging the site-specific nature of the gallery.
I think that in many ways, Spherical Harmonics almost sits stylistically at a halfway point between the glossy product displays of Oxford Street and the workhouse construction of Soho’s post production scene. It’s a fantasy under construction. It’s a play between glossy surface and behind-the-scenes complexity. It advertises beauty but undercuts that beauty in a way that suggests it’s unstable or subject to the whims of an invisible authorial force.
Stills from Alan Warburton, Spherical Harmonics (2013)
KS: In your blog you have discussed the work of other artists – from Simon Starling to Richard Kolker who are using CGI – and distinguish between artists using CGI invisibly and those who show the ‘seams’. In your previous work, such as Z, you have specifically used and transformed overlooked image formats from the CG pipeline which rarely get seen outside the postproduction studio. Can you tell us what you feel are some of the key problems, issues and opportunities for the medium’s use outside of a commercial applied context?
AW: I think that CGI is incredibly important in many ways. As an ideological tool it is powerful, versatile, ubiquitous and (most importantly) it obscures its own origins. I’m pretty sure that anything powerful that is designed to be invisible should be made visible! That’s one of the reasons behind my distinction between artists who use CGI invisibly and those who show the seams.
In a broader historical context, I think CGI is crucial to the development of computing. World War 2 is credited as the flashpoint for the first big boom in computing, and I think the computer graphics industry at the turn of the 21st century is the source of the second. That sounds a little unlikely, but here’s the rationale: a company like Nvidia, which has gradually evolved their GPUs (graphics processing units) in response to the demand from the CGI industries (films, commercials, games and architectural visualisation) for more accuracy and realism, is now responsible for the world’s most powerful supercomputers. The Cray Titan, only recently superseded as the world’s most powerful supercomputer, is one of many similar machines that run on Nvidia Tesla GPUs. These processors solve problems that nothing else can and they were developed in large part to compute CGI special effects.
For example, look at digital crowd simulation. It was used most recently to simulate millions of stampeding zombies in World War Z and was initially developed by Weta Digital for Lord of the Rings. The same type of crowd sim engines are now being used by the military, scientists and researchers to simulate real evacuations, emergencies and city planning models. Their work informs real world policy and real world emergency response. From World War 2 to World War Z, from real warfare to simulated warfare, this is the strange trajectory the second wave of computing has taken. It’s an almost farcical process: CG artists faking water splashes are creating the conditions for scientists to understand how real fluid dynamics occur. It’s progress via reverse engineering. It reminds me of something I read a while ago – scientists say if you smile more you start to feel happier. I think we’re doing something similar with CGI. We’re faking phenomena and in doing so creating the conditions to understand the real thing. It’s all very Baudrillardian.
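The crowd-simulation idea Warburton describes can be illustrated with a toy evacuation model. The sketch below is not Weta’s Massive engine or any real crowd-sim package; it is a minimal agent-based step in which every agent steers toward an exit while being pushed away from neighbours that get too close – the same attraction/repulsion logic that, scaled up, drives both film stampedes and evacuation research:

```python
import math

def step_agents(positions, exit_pos, repulsion=0.5, speed=0.1):
    """One tick of a toy evacuation model: each agent moves toward
    the exit but is pushed away from any neighbour within one unit."""
    new_positions = []
    for i, (x, y) in enumerate(positions):
        # Attraction: a unit vector toward the exit.
        dx, dy = exit_pos[0] - x, exit_pos[1] - y
        dist = math.hypot(dx, dy) or 1e-9
        vx, vy = dx / dist, dy / dist
        # Repulsion: push away from crowded neighbours.
        for j, (ox, oy) in enumerate(positions):
            if i == j:
                continue
            rx, ry = x - ox, y - oy
            d = math.hypot(rx, ry)
            if 0 < d < 1.0:
                vx += repulsion * rx / d
                vy += repulsion * ry / d
        new_positions.append((x + speed * vx, y + speed * vy))
    return new_positions
```

Run over thousands of agents and many ticks, rules this simple produce the emergent lanes, bottlenecks and crushes that planners study – and that zombie hordes exhibit on screen.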
Now to the ‘overlooked image formats’ you mention in your question. A by-product of the CG pipeline is that the computer can interpret a 3D scene in any number of ways. I used the z-depth pass for Z. I’ve also used the normals, velocity, occlusion and diffuse passes in Spherical Harmonics.
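The z-depth pass Warburton mentions is simply a per-pixel record of distance from the camera. A minimal sketch of how such a pass might be mapped to a viewable grayscale image (an illustration, not any renderer’s actual implementation) could look like this:

```python
def zdepth_to_grayscale(depth, near, far):
    """Map a raw z-depth buffer (camera distance per pixel) to 0-255
    grayscale: near surfaces render white, far surfaces black."""
    out = []
    for row in depth:
        out.append([
            round(255 * (1 - (min(max(z, near), far) - near)
                         / (far - near)))
            for z in row
        ])
    return out

# A 2x2 buffer: one pixel at the near plane, one mid-scene,
# two at the far plane.
image = zdepth_to_grayscale([[1.0, 5.0], [10.0, 10.0]],
                            near=1.0, far=10.0)
```

In production these passes are kept at full float precision (for example as extra channels in an EXR file) so compositors can drive depth-of-field, fog or relighting from them after rendering.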