
MR is cool

so why are we seeing so little of it?

Everyone knows what virtual reality is. At the very least, people know it’s something that involves wearing a headset, and experiencing something other than reality. “Mixed Reality” is a much less familiar term, and before diving deeper into it, I’ll provide a quick description of what I mean by it (the definitions tend to vary).

“Mixed reality” is when you film a real human interacting with a virtual environment.


Source: UploadVR


Source: Tilt Brush trailer

My first introduction to mixed reality was through this very cool trailer, made by Framestore CFC for Tilt Brush, a VR-based 3D drawing application made by Google.

At first, I thought it was merely a clever use of visual effects to depict the VR user-experience, meaning a team of visual effects artists took an existing pre-recorded video of the user and added computer graphics that represented (maybe very closely) the content that the user saw in their headset. And I happened to be right in that case. But right after, I saw this Fantastic Contraption Mixed Reality Gameplay Footage, made by Kert Gartner. Kert is one of the early pioneers of mixed reality (and was kind enough to help me in my early research), and his videos clearly show graphics taken directly from the virtual reality application combined with video footage of the user.


Source: Kert Gartner

Real-Time Camera Tracking was never this cheap

What I found most exciting in Kert’s videos was how freely the camera was moving around the user, and how “planted” the user was in the CG environment – a feat that visual effects artists call “camera tracking,” which is traditionally done either using expensive infrared sensors mounted on trusses around a movie sound-stage, or at a later stage on the computer using high-end visual effects software. Even so, given how laborious it can be to achieve a seamless camera match, camera tracking is often avoided in low-budget productions.


Source: SteamVR setup tutorial

The cool thing about moving the camera around a virtual reality user, I realized, is that the same head tracking technology that is crucial for a room-scale VR experience (the small infrared sensors dotting the HTC Vive headset and controllers) can be utilized to accurately track a camera as well. In layman’s terms, camera tracking is a byproduct of any room-scale VR setup. Moreover, it works in real time, with extremely low latency.

Kert actually documented his process in a few cool behind-the-scenes videos that greatly inspired me.


Source: Kert Gartner

Around that same time, a producer came to me asking if I could create something similar to the Tilt Brush trailer (using the Tilt Brush app), and that’s when I decided to roll up my sleeves and give this process a shot myself.

The basics of shooting mixed reality content

I had very recently bought my first VR kit, HTC’s Vive, which includes a headset, two “Vive controllers” (the VR equivalent of a joystick) and two “base stations” (infrared beacons used to pinpoint the headset and controllers’ position in the room).

In order to track my physical camera using the Vive system, I would have to attach one of my controllers to the camera and have the computer move a “virtual camera” based on that controller’s position and orientation (aka its “transformation”). Luckily, just a few days earlier, a third Vive controller had arrived in the mail (I ordered it after incapacitating one of my original controllers by thrusting it into my living room wall while playing VR). In the meantime I had managed to fix my broken controller, so I now had three working controllers. Coincidence?



It sounded simple in theory, but there were a few immediate obstacles:

  1. The controller and camera couldn’t possibly be in the exact same position. Even when attached, they are inherently next to each other, and that positional offset would need to be matched somehow by the corresponding virtual camera.

  2. The Vive system didn’t detect the third controller via Bluetooth like it did the first two controllers, only via USB cable (a hardware limitation).

  3. I wasn’t sure how to get the VR application to produce a “camera view” feed; I just knew, based on the videos I saw, that it’s possible.


Source: TribalInstincts YouTube channel

To overcome the first obstacle, I used Tribal Instincts’ brilliant “configurator” tool. Tribal Instincts is a Seattle-based software developer who runs a popular YouTube channel where he reviews VR apps and discusses technology. He created a handy Unity3D-based application that streamlines the process of correcting the offset between the camera and the controller. You still have to eyeball it to a certain extent, but it saves hours of blind trial-and-error. It also has useful instructions on how to actually feed that offset information into the VR program so that it uses it properly.
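Under the hood, that correction boils down to composing two rigid transforms: the room-to-controller pose the Vive reports every frame, and the fixed controller-to-camera offset you calibrate once. Here is a minimal Python sketch of the idea (the numbers, names and 4x4-matrix representation are my own illustration, not the configurator's actual code):

```python
# Minimal sketch (illustrative, not Tribal Instincts' tool): compose the
# tracked controller's pose with a fixed controller-to-camera offset to get
# the virtual camera's pose. Poses are 4x4 rigid transforms in room space.
import numpy as np

def pose_matrix(rotation_3x3, position_xyz):
    """Build a 4x4 rigid transform from a rotation matrix and a position."""
    m = np.eye(4)
    m[:3, :3] = rotation_3x3
    m[:3, 3] = position_xyz
    return m

# Fixed offset, measured (or eyeballed) once during calibration: where the
# camera's lens sits relative to the tracked controller (example numbers).
controller_to_camera = pose_matrix(np.eye(3), [0.0, -0.08, 0.12])

def virtual_camera_pose(room_to_controller):
    """Per frame: room->controller pose composed with controller->camera offset."""
    return room_to_controller @ controller_to_camera

# Example: the controller sits 1.5 m up and 2 m back, with no rotation.
controller_pose = pose_matrix(np.eye(3), [0.0, 1.5, -2.0])
print(virtual_camera_pose(controller_pose)[:3, 3])  # virtual camera position
```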

Connecting the Vive controller wirelessly required buying a Steam controller wireless receiver, and following some steps described in this excellent article written by Kert.


Source: unknown

And once the VR software detected the third controller, I was pleasantly surprised to discover it automatically switched to a hidden “Mixed Reality” mode, splitting the screen into four quadrants, each containing a video layer that can be used by compositing software to bake together a mixed reality video.
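Conceptually, baking those quadrants into a finished frame is a layering job: the background quadrant sits at the bottom, the keyed (green-screened) camera footage of the player goes over it, and the foreground quadrant, anything the game decides is closer to the camera than the player, goes on top using its alpha matte. A rough numpy sketch of that stack follows; the exact quadrant layout and keying details vary between apps and tools, so treat them as assumptions:

```python
# Rough sketch of "baking" a mixed reality frame from the quadrant output.
# Assumed layout (varies in practice): top-left = foreground, top-right =
# foreground alpha, bottom-left = background. The camera frame is assumed to
# be already green-screen keyed and resized to quadrant resolution.
import numpy as np

def split_quadrants(quad_frame):
    h, w = quad_frame.shape[:2]
    foreground = quad_frame[:h // 2, :w // 2]
    foreground_alpha = quad_frame[:h // 2, w // 2:]
    background = quad_frame[h // 2:, :w // 2]
    return foreground, foreground_alpha, background

def composite(quad_frame, camera_rgb, camera_alpha):
    fg, fg_alpha, bg = split_quadrants(quad_frame)
    fg_a = fg_alpha[..., :1].astype(np.float32) / 255.0
    cam_a = camera_alpha[..., None].astype(np.float32) / 255.0

    out = bg.astype(np.float32)
    out = camera_rgb * cam_a + out * (1.0 - cam_a)  # player over the background
    out = fg * fg_a + out * (1.0 - fg_a)            # foreground back over the player
    return out.astype(np.uint8)
```

This is essentially the layering that streaming tools now do live; the LIV quote later in this post describes it as “compositing the 4 quadrants.”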

Using a third controller and roughly calibrating it to my USB camera, I was able to film my very first Mixed Reality video.


Source: Outpost VFX

It was very rough indeed. I had no green screen, so I had to key my subject (my girlfriend at the time) against a white wall, which is never a good idea, and I literally held the USB camera and controller together in my hand to keep them somewhat aligned. But I could see the potential, and I knew that with a proper green screen and camera rig, I could pull it off.

Unfortunately, the producer who reached out to me was on a very tight schedule and had to go with a non-mixed reality approach.

From catching up to exploring new grounds

Now that I had made a promising first step, my appetite for properly capturing mixed reality grew, and I started reaching out to colleagues who had access to green screen stages.

One of them took up the gauntlet and let me use a small green screen stage at Disney Interactive for my next go.


Mixed Reality @ Disney Interactive demonstration

While testing mixed reality on various VR apps on that stage, my host brought up something he was hoping to achieve: capturing two or more VR users occupying and interacting in the same VR space.

This challenge inspired me to contact another colleague, a producer at the YouTube Space in LA, where I knew they had two green screen stages next door to each other.

My first goal was to try and create “co-presence,” meaning the two users would be physically separate but filmed from matching angles, so that when I combined the two camera feeds with the virtual environment, they’d look like they were sharing the same space.


Mixed Reality co-presence demonstration

I chose “Eleven: Table Tennis” as the VR application to test on because it had a multiplayer match mode and an inherently fixed play area that doesn’t let users “teleport” (change their position in space without physically walking). It was also very helpful to have live communication with the developer, as some last-minute code adjustments were required to get mixed reality to work properly in multiplayer mode.

Is Multiplayer VR with a freely moving camera possible?

Because both cameras had to be perfectly aligned for the illusion to work, I had to put both cameras on tripods and keep them static; any movement of one camera that wasn’t immediately matched by the other camera would break the illusion that they’re together in the same space.

Of course, being able to move the camera freely was part of the original allure of mixed reality for me. But the only way to live-sync the motion of two cameras in different spaces would have been to use two robotic arms that mimic each other’s movements perfectly and simultaneously (and I didn’t have freely available access to two robotic arms).

The only practical way I could move a camera freely while capturing a multiplayer VR session would be if both players shared the same physical space. This is definitely possible and is successfully implemented on a daily basis by VR arcades such as ILMxLabs’ The Void and IMAX VR. But most VR applications didn’t offer a simple way to synchronise two separate VR headsets within the same space (since they assume home users have only one headset). Another limitation was the Vive’s play area, which was limited to around 15’x15’ – so a game like “Eleven,” where the opponents are separated by a virtual table, wasn’t an ideal test bed.

But there was another VR game that I thought might be a perfect candidate for such a same-space multiplayer experiment: Racket:Nx, developed by longtime colleagues of mine at One Hamsa studio.


Source: Racket: Nx Mixed Reality Trailer

Racket:Nx is a sci-fi-enhanced racquetball simulator where the opponents stand next to each other and take turns racketing a ball towards a 360-degree dome wall, with the goal of hitting targets, collecting power-ups and avoiding traps.

I had been keeping One Hamsa posted on my experiments (an earlier one even featured their game), and they became similarly excited about possibly capturing their multiplayer mode in mixed reality.

Coincidentally, one of the organizers of VRLA, an annual VR conference in LA (who had come out to help with my co-presence experiment and even appears in the demo video herself), kindly offered us a last-minute booth space at the upcoming VRLA conference.

The problem with existing VR demonstration booths

I felt like VRLA would be the perfect venue for an interactive demonstration of a multiplayer virtual reality gaming experience using mixed reality, especially given my experience attending the conference the year before.


Source: VRLA Winter Expo 2016 Recap

Being a lifelong fan of gaming technology and equally enthusiastic about this incarnation of virtual reality, I was ready to have my mind blown by the exhibit floor attractions. Instead, when I got there I witnessed long, stagnant lines leading to small dark rooms where people stood, one at a time, with headsets on. Rarely did I get a glimpse of a monitor showing what they were seeing, and it was very difficult to gauge whether they enjoyed the experience. I figured I’d wait for someone to exit a booth and ask them if it was worth it but, after waiting for a few minutes next to one of the booths, I gave up and moved on to see things that didn’t require waiting in a long line.

I was disappointed, not so much for my own experience, but because I knew this kind of first impression could be destructive to the VR community – especially at such an early stage of market penetration. I felt there was an urgent need to show the VR community better ways to “tell the story of VR,” and especially the largely overlooked (yet very promising) multiplayer VR experience.

A better way to tell the story of VR

By coordinating between One Hamsa and VRLA, I was on my way to creating a first-of-its-kind VR multiplayer mixed reality demonstration booth. The road was filled with organizational and technical challenges, and required financing from Keshet International, technical and labor support from Machinima Studios, and the support of the crews at VRLA and Blaine Conference Services.

But throughout the preparation, and despite logistical and financial limitations, I fought to ensure our booth would check three basic boxes:

  1. It would have moving-camera mixed reality, demonstrating two users in a multiplayer game.

  2. The mixed reality video would be VISIBLE and PROMINENT, not only to the front but to the back of the line and preferably beyond.

  3. It had to be FUN and EXCITING, and feel like a real sport match between two participants.


VRLA 2017 Racket:Nx Mixed-Reality Booth design

We made it fun and exciting by:

  1. Printing sleek colorful posters

  2. Making sure the booth was properly lit in the relatively dark showroom

  3. Covering the headsets and controllers with colored protectors in the player’s color-ID (red vs. blue)

  4. Adding LCD screens that displayed the score as it changed during each match.

  5. Setting up a strong sound system with an engaging score and automatic voice-over reactions to impressive moves, big game events and winning/losing.

We made the mixed reality feed prominent by playing it on two big LCD screens mounted on a 12-foot truss right above the play area, facing the two main pathways leading to our booth.

And we made the mixed reality work, with a moving camera and multiplayer mode active, by tackling a variety of technical obstacles one at a time, not only before but even during the two-day event itself, with the programming crew preparing a fresh update between the first and second day (luckily they were about ten time zones away, so our night was their day).

When the doors opened to the 2017 VRLA expo, our booth was standing tall, ready to welcome the thousands of attendees. Our position at the very front of the showroom next to the main entrance meant either our success or failure would be displayed in full glory.

Luckily, it was a success.


VRLA 2017 Racket:Nx Mixed-Reality Booth sizzle clip

Within minutes, a crowd formed in front of our booth, not only of people eager to play the game, but of bystanders who weren’t in line and simply watched the mixed reality feed on the TV screens above. In fact, the crowd of spectators and the line to play in the booth were so big that the organizers asked us to set up stanchions to prevent the line from blocking the entrance to the hallway. That was definitely a nice problem to have, and a good sign that we were doing something right. During the event, I went around and shot some crowd reactions, as well as behind-the-scenes timelapses of the creation of the booth, which we ended up editing, together with mixed-reality gameplay captured in the booth, into a short sizzle clip.

Sidebar: Technical difficulties and how we overcame them

The programmers of Racket:Nx really went out of their way and coded a special “MR” build of the game for the VRLA exhibit, constantly adding functions that improved the quality and ease of capturing the game for mixed reality. Here are just some of the technical issues they faced:

  1. By default, VR desktops assume there’s only one user in the play area so they position the player at the center of the room. A second VR desktop sharing the same play area would also position its user at the center – so although in VR they’d appear to be at a safe distance from each other, in reality they’d be standing in the same place.
    Solution:
    We created a tool that calculated the physical offset each player should have from the center in order to match their in-game position, and then calibrated both desktops’ play-area position and orientation to sync properly with the game.

  2. The original distance between the players in Racket:Nx placed them too close to the edges of our 10’x20’ booth (at risk of swinging their arms into the green walls).
    Solution:
    The Racket:Nx team created a new dynamically changing level-design that allowed the distance between the players to be adjusted manually during setup.

  3. Shooting with a very wide lens was required to capture the dynamic action, but also meant the edges of our green screen would become visible, and reality would “creep” in around the virtual reality feed.
    Solution:
    We implemented a technique that “live-cropped” the camera feed around our players using dynamic “matte bounding boxes” that followed the players’ position on screen. In other words, the VR background became the VR foreground wherever the game engine determined players weren’t present.

  4. Racket:Nx gameplay lets players hit targets on a 360-degree dome surrounding them, while our booth’s green screen limited our camera view to 180 degrees of the action. Thanks to live-cropping we could turn the camera around without seeing the showroom crowd, but we preferred to capture players and targets together as often as possible.
    Solution:
    The special MR build placed targets only in one half of the 360-degree dome.

  5. There was a visible sync offset between the camera feed and the computer feed. Ironically, it was the reality part of the feed, meaning the video camera signal, that had a small latency, rather than the computation-heavy virtual reality feed, which came from the computer in real time.
    Solution:
    The MR build received a frame buffer option, which allowed us to delay the MR feed to match the camera feed’s latency. In fact, since that latency wasn’t always the same, we could dynamically adjust that delay on the fly mid-game to fix any subsequent sync issues (a sketch of the idea follows this list).
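To make the delay-buffer idea in item 5 concrete, here is a minimal Python sketch (my own reconstruction of the concept, not One Hamsa's code): the game's MR frames go into a small FIFO so they come out a few frames late, matching the camera's latency, and the delay can be nudged mid-game if the sync drifts.

```python
# Minimal frame-delay sketch: hold the game's MR frames in a FIFO so they
# come out N frames late, matching the camera signal's latency.
from collections import deque

class FrameDelay:
    def __init__(self, delay_frames):
        self.delay_frames = delay_frames
        self.buffer = deque()

    def push(self, mr_frame):
        """Feed the freshly rendered MR frame; get back the delayed one (or None)."""
        self.buffer.append(mr_frame)
        if len(self.buffer) > self.delay_frames:
            return self.buffer.popleft()
        return None  # still filling up: not enough frames buffered yet

    def set_delay(self, delay_frames):
        """Adjust the delay on the fly when the camera latency drifts."""
        self.delay_frames = delay_frames

# Usage: delay the MR feed by 3 frames (~50 ms at 60 fps) to match the camera.
delay = FrameDelay(3)
for frame in ["f0", "f1", "f2", "f3", "f4"]:
    print(delay.push(frame))  # None, None, None, "f0", "f1"
```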

So…why aren’t we seeing more of this?

On the exhibition floor, our booth was an instant hit. Fellow presenters and VR professionals reacted very positively to the activation and its effectiveness in showcasing the multiplayer experience. However, as inspired and impressed as they were, it was difficult for many to see how to apply this in promoting their own VR experiences.
It might have been connected to the fact that there weren’t many sport-centric VR titles presenting at VRLA (maybe the LA crowd is more narrative-oriented, or the VR industry as a whole is trying to distance itself from the video-gaming market and its dominant leaders).

Among the few who did inquire about using mixed-reality to promote their titles, few could afford a one-day mixed reality production session, let alone a multi-day exhibit floor booth.

While writing this post, I reached out to Kert Gartner, whose early mixed reality trailers had inspired me to get into the field, and asked about new mixed reality work. His answer:

“I haven’t created any new Mixed Reality trailers since they’re such a pain to set up, and I don’t think they’re necessarily the best way to showcase a VR game. I’ve been focusing more on avatar based trailers, which are much simpler to produce and in a lot of cases, produce a much nicer result!”


Virtual Rickality avatar based trailer. Source: Kert Gartner


LIV setup tutorial. Source: Thee VR bros

I share Kert’s experience of mixed reality being a pain to set up. However, VR developers are already creating new tools for standardizing mixed reality on both the developers’ and users’ ends. While researching this blog post, Matan Halberstadt, one of Racket:Nx’s developers, referred me to LIV, a VR tool that “everyone is using today for streaming mixed reality. It’s much simpler to setup and it replaces OBS by compositing the 4 quadrants”.

Is part of the allure of some virtual reality experiences the fact that we get to embody virtual avatars who look nothing like us, so that mixing a real image of ourselves into the virtual reality experience undermines what we’re trying to sell? I can see that being true in some cases, but definitely not all, especially when it comes to VR e-sports like Racket:Nx.

Perhaps being able to see users inside the virtual environment doesn’t cut it so long as we can’t see their facial expressions? After all, there’s something alienating about not being able to see a person’s eyes. Google Machine Perception researchers, in collaboration with Daydream Labs and YouTube Spaces, have been working on solutions to this problem, revealing the user’s face by virtually “removing” the headset and creating a realistic see-through effect:


Google Research and Daydream Labs: Headset Removal

No doubt this feature will be available to mixed reality creators sometime in the near future. Will it open the floodgates for mixed reality content?

Do we see much of VR at all?

The topic of virtual reality’s sluggish market growth fuels hot debates and keeps most VR developers up at night. At least as of late 2018, virtual reality is still struggling to build a user base large enough to justify significant investment in content.

From the very beginning (of this incarnation of VR), market observers pointed out a chicken-and-egg paradox, where users were assumed to be waiting for more VR content, and VR developers were assumed to be waiting for more users. However, over the past couple of years, the VR marketplace has seen constant growth in content, including A-list game titles like Fallout 4 coming out with VR versions, and yet the general population, including gamers, still doesn’t seem all that motivated to bring desktop VR headsets into their homes.

I mentioned at the top of the article that most people know VR as something that involves wearing a headset and experiencing something other than reality. Perhaps being able to look down at our smartphones, interact with friends who came to visit, or simply look after our kids, is more important to most of us than being fully immersed in entertainment and blocked from our real surroundings. Maybe we’re still waiting for that game-changing VR tool that will tip the scales and convince people that the rewards justify the sacrifices.

Maybe it’s simply that we haven’t figured out how to properly market VR? One of the challenges in introducing VR to the world is that, unlike a new film, which can be previewed on any TV screen, tablet or smartphone, there is no way to convey the essence of a virtual reality experience without being in VR and getting a taste of it.

This is another chicken-and-egg paradox that has made some developers focus on VR arcades for the time being. In the meantime, it seems as though headset manufacturers acknowledge the importance of mixed reality composites in their promotional materials.

Magic Leap, which recently revealed its long-anticipated augmented reality (AR) headset, is betting its entire existence on the idea that users prefer their virtual-reality content to co-exist with their surrounding reality. Not surprisingly, in the trailer for their demo-experience “Dr. Grordbort’s Invaders”, the final sequence mixes real and virtual elements to illustrate the user experience.


Source: Leapcon 2018


Source: Leapcon 2018

I found it strange that the trailer chose to point out that it’s a single-player experience by showing the player’s friends mocking him as he fights “imaginary enemies” they can’t see.

Even worse – the player is depicted as oblivious to his friends’ mockery, thus underselling the key advantage AR has over VR: maintaining users’ connection to the physical world and their friends. Being able to see each other and experience VR together (perhaps competing against each other in the living room) is easily the single feature many current VR holdouts are waiting for. Taking a well-known Achilles’ heel of desktop VR and suggesting that it persists in AR as well (where in fact it technically doesn’t) is nothing short of self-sabotage.

Still, as with other innovative tech products in the past, the technology will mature and someone will inevitably crack the formula and achieve mass adoption. There’s little doubt that whichever VR platform ultimately takes off, mixed reality will play a big role in transforming it from a niche gadget into a much-desired entertainment product.

Until that happens, mixed reality will continue to evolve as a “virtual cinematography” tool for filmmakers, and an interactivity tool for video artists, who are sure to inspire us with ever more creative utilizations of this widely accessible and incredibly powerful tool.


Source: Giphy

In part one of this post I explored workflow challenges resulting from how conceptually different 360-video VR is from everything I’ve worked on before in my career as a VFX supervisor. In this part I’ll be diving into technical challenges resulting from requirements and limitations unique to 360-video VR, which I’ve experienced while working on a 360-video web-series for YouTube.

(Part one includes a clarification on 360-video versus VR, and how this article focuses on 360-video production).

While trying to keep this short and digestible (and admittedly a bit provocative), I might have generalized and omitted reservations while sounding somewhat authoritative. This article represents my opinion and my limited personal experience and knowledge – if you find any inaccuracies, please mention them in the comment section of this blog.

WHO NEEDS 8K RESOLUTION

360 video is a strange beast when it comes to resolutions. While it’s a common argument that 4K video doesn’t offer a drastically improved viewing experience over HD video, a 360 video presents the viewer with only a very limited portion of the entire video that was captured. So when a 360 video is shot in 4K resolution, the viewer only sees about 480p resolution, which is about a quarter of an HD image. In order to experience HD-level sharpness when viewing a 360 video, the resolution of the entire surrounding video must be around 8K or even more.
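Here is the back-of-the-envelope arithmetic behind that claim. The exact numbers depend on the headset's field of view and how the stereo is packed, so the assumptions below (a frame that wraps 360 degrees horizontally and 180 vertically, a roughly 100-degree field of view, top-bottom stereo packing) are illustrative rather than definitive:

```python
# Back-of-the-envelope arithmetic: how many source pixels actually end up
# inside the headset's field of view. Assumptions (mine, for illustration):
# the frame wraps 360 deg horizontally and 180 deg vertically, the headset
# shows ~100 degrees, and top-bottom stereo halves the height per eye.
def pixels_in_view(video_width, video_height, h_fov_deg=100, stereo_top_bottom=True):
    per_eye_height = video_height // 2 if stereo_top_bottom else video_height
    h_pixels = video_width * h_fov_deg / 360.0      # slice of the full 360-degree wrap
    v_pixels = per_eye_height * h_fov_deg / 180.0   # slice of the 180-degree height
    return round(h_pixels), round(v_pixels)

print(pixels_in_view(3840, 2160))   # "4K" source -> roughly (1067, 600) per eye
print(pixels_in_view(7680, 4320))   # "8K" source -> roughly (2133, 1200) per eye
```

Under those assumptions, a 4K source leaves roughly a standard-definition image in front of each eye, while an 8K source gets into HD territory.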

In our case, YouTube wanted to use our videos to test their new 8K streaming capabilities, so we had to work at a resolution so high that even our high-end workstations couldn’t play it back in real time. Below you can see a final frame in its entirety – shrunk to a fraction of its original size.

Here’s a portion of the shot in 1/1 scale:

While the huge resolution demands of 360 videos are a major barrier to be reckoned with (I’ll get to them soon), there were other technical challenges we faced right off the bat.

WHERE ARE ALL THE PLUGINS

Most VFX supervisors who work in small to medium-sized VFX studios rely solely on off-the-shelf software and plug-ins. These tools save time by offering shortcuts and automations, and provide solutions to known problems.

The growing abundance of visual effects tools has made us somewhat spoiled and led us to expect to find a tool for any need. Since VR is so new, publicly available tools that cater to the specific needs of VR are still scarce. The few that have come out, like Skybox Studio, suffer from first-generation limitations and bugs.

As a result there are many key components missing from a proper VR workflow:

“Immersive” preview in VR. The ability to see what we’re working on is fundamental, just as a painter must see the canvas in order to paint on it. Similarly, every VFX application has a monitor window which displays the final output. Even if we can’t always adjust the content in real-time, we do generally work on the “output canvas” – meaning we can preview how the audience will see our work. In 360 VR the final output is viewed via headset, yet the production workflow is still on a traditional computer screen.

The closest thing I’ve seen to “editing VR in VR” is the following demonstration by Unreal Engine (but this is only for game-engine VR, not for 360-video).

Until that changes, we are forced to use computer monitors, and run intermediate previews in our VR headsets. This will likely be solved soon enough, and some plug-ins like Skybox Studio let us place a virtual camera inside the 360 sphere and see its point-of-view, which is pretty good for the time being.

Client review in VR. In a previous post I offered some tips on giving notes on VFX effectively, using a variety of existing tools like Cinesync, Shotgun Review and others. Even with these tools available, many clients stick to writing notes in e-mail, which often leads to confusion and misinterpretation and can cause unnecessary revisions.
If vague notes like “The tree on the left should be bigger” are frustrating, imagine similar notes given to a VR scene, where there is no consistent frame to rely on and “left” is not contextualized. Furthermore, when sending a VR sequence to review, you can’t ensure the client even sees all important elements unless you lock the viewer’s orientation, preventing the client from looking elsewhere.

Implementing a new vocabulary for screen-directions in VR can help, but it would be even better to have a client review tool inside a VR headset – letting clients review 360-videos and insert notes directly using motion controllers.

The closest thing I’ve seen that demonstrates this capability is this Fabric Engine demo (also limited to game-engine or “desktop” VR, but it shows the functionality I’m talking about).

While there are several online 360-video players (Like YouTube) that allow clients to review your work, I haven’t yet seen a review app that will let clients insert notes directly into a 360 video while wearing a headset. Until then, the best solution is probably using a combination of a 360 video web-viewer, and a 2D video review tool like Shotgun’s review tool.

Stereoscopic editing in VR. Many VFX studios have garnered experience working on stereoscopic effects, but since stereoscopic video hasn’t managed to dominate the home-entertainment market, most VFX houses that don’t work on feature films haven’t upgraded their workflow to support stereoscopic VFX.

By reintroducing stereoscopic imagery, VR is blessing us with yet another layer of complexity and dependence on tools that haven’t fully matured yet.

Interactivity Authoring for VR. Because of how personalized the viewing experience is in VR, new opportunities open up for storytellers to explore, namely the ability to modify the experience in real-time based on the user’s actions. For instance, delaying certain events until the viewer turns to view them. This is especially effective in horror-themed VR experiences, where you want things to “jump around a corner” just as viewers turn to that corner.

Even though it has more to do with programming and editing than visual effects, it can still be requested of you, and you may choose to accept the responsibility. Such interactivity is relatively common in computer-generated real-time VR experiences, but rarely exists in 360-video. This is because the few authoring tools that do exist, such as the Unity 3D game engine, are geared towards real-time VR applications rather than high-resolution 360 video. While very capable, they currently only offer ground-level functionalities that require further development.
Even if the task of creating interactivity ends up being someone else’s responsibility, creating VFX for an interactive experience still presents some challenges. Events triggered by the viewer often call for previous events to loop indefinitely, creating the need for “loop-able” effects – something relatively uncommon in traditional VFX.

WHY DO OUR SYSTEMS KEEP CRASHING

Specialized tools are always nice to have, but the most basic tools are so ingrained in our process that we often forget they exist – until they break.

Most of our trusted tools were initially developed for standard-definition resolution, then upgraded to HD, and then to 4K and 8K. While most modern applications do support high resolutions, there are still a lot of issues and bugs that appear when we load 8K footage into an HD-minded workflow.

Video Codecs break. In our project, we used a 360-video camera rig called GoPro Odyssey, which uses 16 GoPro cameras to capture a stereoscopic panoramic video at up to 8K resolution. This video then gets compressed into an MPEG-4 file with a bitrate of 600 Mbit/s. At some point black patches started appearing at the bottom of random frames when we played these in Adobe Premiere or Adobe After Effects. These artifacts were inconsistent and would sometimes disappear from some frames and appear on others. We didn’t have time to deal with this problem and ended up manually re-rendering problematic frames. I assume it was a decoding issue related to reaching a buffer limit before the entire frame finished processing, but it’s only a guess. The bottom line is, after years of relying on a codec to the point of forgetting it even existed, it suddenly broke – and caused the need for some manual clean-up we had no way of anticipating.

Nonlinear 6K editing causes application crashes. Layering videos on top of each other is an essential part of video editing, and especially of video compositing. It’s so integral to the VFX workflow that we expect to be able to layer any media we work on. But when working at huge resolutions, adding a second 6K video on top of the first can cause the software to freeze up or even crash. Editors often work on proxy files to get real-time playback, but VFX artists are often forced to work “online,” at the final resolution. One way to speed up the workflow and avoid crashes is to work in “patches” and then combine them into the full frame once all the heavy lifting is finished, as sketched below. But when a visual effect wraps around the entire sphere this solution might not be possible, in which case using proxy files or committing to lower resolutions may be inevitable.
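As a concrete illustration of the patches approach (a simplified sketch under my own assumptions, not a description of any studio pipeline): split the frame into a grid of tiles, process each tile separately, then stitch the tiles back into the full frame.

```python
# Minimal "work in patches" sketch: split a huge frame into tiles, process
# each tile on its own, then reassemble. Only viable when the effect doesn't
# wrap around the whole sphere.
import numpy as np

def split_tiles(frame, rows, cols):
    """Split an (H, W, C) frame into a rows x cols grid of tiles."""
    return [np.hsplit(strip, cols) for strip in np.vsplit(frame, rows)]

def join_tiles(tiles):
    """Reassemble the grid of tiles into the full frame."""
    return np.vstack([np.hstack(row) for row in tiles])

# Example with an 8K-ish frame split into a 2x4 grid.
frame = np.zeros((4320, 7680, 3), dtype=np.uint8)
tiles = split_tiles(frame, rows=2, cols=4)
for row in tiles:
    for tile in row:
        tile[:] = 255        # stand-in for the heavy per-tile processing
assert join_tiles(tiles).shape == frame.shape
```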

Multitasking can freeze the system. Remember a time when you could only run one 3D application at a time, and loading another would crash the system? Well, those times are back. Even software designed for a parallel workflow, like Adobe’s “dynamic link,” seemed to lose its stability when working on 6K files. We ended up having to stick to one application at a time throughout the production to avoid system crashes.

New “render tasks” emerge. Obviously, the higher the resolution the longer renders become, but there are certain processes that are so brief we normally wouldn’t factor them in as “render tasks.” Things like combining image sequences into compressed video files or generating half-resolution previews are fast enough for users to run on their local system and wait until they’re done to resume work. At 6K resolution or more, even those quick renders may slow down to the point of becoming “render tasks,” adding another step to a workflow that is already much slower than we’re used to.

Big files might choke file servers. We had several editors and VFX artists working directly off SAN network storage, which at certain points suffered from atypical slowdowns. This reduced our systems’ response rates and cost us entire days of work.

Big files take long to transfer. Whether via the internet or a shuttle drive, copying multiple video files weighing over 100GB each takes significantly longer than on regular projects. Compressing and extracting files may also take unusually long, and any compression or file-transfer errors can cause further delays.

Files aren’t playable in real time. As briefly mentioned before, no matter how compressed, our 6K footage wouldn’t play smoothly on our systems. Until the hardware catches up, I suppose this limitation will remain a hurdle.

Reliable tools become unreliable. Even heavily relied-on tools for platform-agnostic processes like fluid simulation, fur dynamics and particle behaviour, and various post-processes like radial blur, might hit processing thresholds you didn’t know existed when introduced to the high-resolution demands of VR production.

Beyond these few examples, many other breaking points await on future projects. Until both hardware and software go through extensive stress-tests and stronger systems arrive on the market, everyone working in post production is underpowered when working in VR.

WHY IS THE ENTIRE PIPELINE UNDERPOWERED

VFX supervisors know that defining clear delivery specs (both to and from other post production peers) helps streamline work and save time. A proper workflow allocates time to test media conversion and transfer methods, to ensure they fit the bill before production runs into a deadline and a large number of deliverables must be handed over.

Given the technical stress that VR production puts on the entire pipeline (on the VFX workflow as well as the editing and color workflows) your peers will likely be struggling with similar technical hurdles and slow-downs, and might not be able to conform to your technical needs in a timely fashion. Things you ask for might arrive late and in the wrong formats, and asking an overwhelmed editor for new exports might be futile.

Similarly, as you are rendering and transferring your final shots back to the editor, time restrictions could prevent you from making necessary adjustments on time, causing frustration on the receiving end as well.

WHY DOES OUR WORK SUDDENLY SUCK

When so many things slow down simultaneously, the workflow gets disrupted as certain processes must be truncated to compensate for time lost. You may realize that for a task that normally requires at least three adjustments to get “right”, you only have time to do one. In our case this meant fewer revision cycles, which forced us to stick to simple designs and rely on basic tools we would normally use only as last-minute escape routes.

CONCLUSION: WE ARE PIONEERS

This post might read as a warning sign, but I see it more as a reality check for anyone expecting work in VR to be a walk in the park. In a way I wrote this post for my past self, the one who had underestimated the challenges of VR, suffered the consequences and lived to tell the tale. But truth be told, I remember a time when every VFX job was this hard and complicated. A time when tools would break left and right, and technical troubleshooting often took longer than producing art. The art of computer-generated imagery isn’t that old, and many of us who do it today have done it since it was just as new and untested as VR is today. So this is no time to moan and groan about our tools not being fit for the job. It is time to embrace the challenges, overcome them, and create new experiences which will hopefully be as inspiring to others as the ones that inspired us to become VFX artists.

<- Go back to part one

As we near the end of 2016, current generation Virtual Reality remains a promising new medium that is far from reaching its potential. Whether it succeeds or fails in fulfilling its promise, many people in the visual effects community are racing to cater to the demands of 360-video virtual reality production.

For me personally, virtual reality is much more than a business opportunity. It excites the explorer in me by providing new uncharted territories and many unsolved challenges. So when Outpost|VFX was approached by YouTube to do VFX for several stereo-360 videos, I was immediately intrigued and eager to rise to the challenge.

Short turn-around time and a tight budget made it even more challenging, but didn’t deter me from signing up for the task. Though we ultimately succeeded and delivered all the visual effects to the client’s satisfaction, the road was full of obstacles that we didn’t anticipate. Over time these obstacles will inevitably be removed, but for now I hope this post will help newcomers by sharing some of the lessons learned during the process.

Before we begin, a brief note on the terms “360 video” and “virtual reality”:

As demonstrated here, 360-videos use only the viewer’s head orientation to display whichever part of the surroundings the viewer is facing. The “surroundings” are essentially a flat image warped into a sphere. This allows you to look in any direction while being locked to a fixed point in space.

Virtual Reality experiences use both the viewer’s head orientation and position to place the viewer in a video-game-like virtual world. This allows more freedom of movement than 360 video, but content is currently limited to real-time CGI due to limitations of video-capture technology.

This post will discuss visual effects for 360-videos only. However, much like the rest of the entertainment industry, I’ll use the term “VR” as well when referring to 360-videos.

 

WHY IS VR SO CHALLENGING

In many ways 360 video is nothing more than a wide format. A 360-degree image is essentially two 180-degree fish-eye lens images stitched together. So it’s easy to assume that the main challenge for VFX in this medium is matching lens distortions and figuring out how to review and give notes on effects in 360 degrees.

In reality, the differences between 360-degree and traditional video are so vast that they affect every step of the process in more ways than one. Entering this new realm often requires leaving behind our most trusted tools and fail-safe devices. Our ability to rely on past experience is greatly reduced, and even the simplest VFX task can suddenly turn into a nightmare. It might sound like an exaggeration. Is it really that different? Well, obviously it depends on the specific needs of your project, but here are a few things to keep in mind:

YOU’VE GOT NOWHERE TO “HIDE”

If you’ve been in the game long enough you know that VFX supervisors rely on a variety of “tricks” that save time and money. Many of them don’t apply to 360 videos:

Editing is usually VFX’s best friend. As this GIF demonstrates, efficient use of cut-aways can hide certain actions that are hard to create or imagine (like a monster grabbing the kid by the head and lifting him off the ground) while adding more value to the scene. Cuts in VR can cause disorientation if they’re not carefully pre-planned, which means they can’t be relied on as an escape pod late in the game. Instead, VR 360 videos tend to incorporate long uninterrupted takes that call for long uninterrupted VFX. Keep that in mind when calculating render-times.

 


Framing is another trusted support beam for VFX artists. Traditionally the director decides how a scene will be framed, and can change framing to leave out certain challenging effects, or at least trim parts that are unnecessarily complicated. Tarantino famously framed out the cutting of an ear in Reservoir Dogs, forcing the audience to imagine it instead – and demonstrated how powerful these decisions can be. By now we are so used to designing VFX for a 16:9 frame that we automatically label effects as easy or complicated assuming they’ll be framed a certain way. There is no framing in VR. Well, technically there is, but it’s driven by the orientation of the viewer, who is free to choose whether to look away or stare directly at an effect. As a result, any VFX is potentially fully exposed from birth to death – consider that when calculating simulation times for your next fluid-dynamic shot in VR.

 

Camera movements are often perceived as something VFX supervisors prefer to avoid, as they raise the need for camera tracking and other processes. In some situations, though, moving the camera (whether physically or in post-production) can help sell an effect. A good example is a camera shake following an explosion: it makes the explosion feel bigger by suggesting there’s a shockwave, and at the same time reduces the visibility of the VFX and allows certain demanding simulations to be avoided.

In 360 video camera movements can cause motion sickness-like symptoms also known as “VR-sickness” and therefore aren’t used very often. Even when they are used, camera movements in VR can direct the viewer’s attention away from an effect, but not always take it off-screen completely.

 

Optical Effects such as lens flares, depth of field, motion blur, chromatic aberration, film grain and others are commonly added on top of visual effects to make them blend better and feel photorealistic while masking areas of low detail. These optical effects are so entrenched in our VFX workflow that it’s easy to forget how “naked” or “empty” our effects feel without them.

The absence of a single lens, frame and variable focal-length makes 360 videos extremely crisp and sharp. Lens flares are possible but behave strangely if they aren’t dynamically changing based on the viewing angle, which requires a layer of interactivity that’s not standardized yet. Depth of field and motion blur are largely avoided because they impose optical limitations and disrupt the immersiveness. Image degradation effects like film grain and chromatic aberrations are rarely used as they create a “dirty glass” effect, which is more distracting in VR than in traditional film.

 

Eye Tracking is more of an optical analysis that drives aesthetic decisions than a technical tool. I’m talking about being able to anticipate what viewers are likely to look at during a scene, and focusing more effort on those places. This ability is greatly reduced in VR because viewers have a much wider area they can explore visually, and are more prone to missing visual cues designed to grab their attention. This means you are forced to treat every part of the effect as a potential point of focused attention.

 

 

 

 

 

ACTUALLY, YOU’VE GOT PLACES TO HIDE

While this list may seem intimidating, many of these restrictions are easy to predict and address when breaking down the script of a VR film. By being involved in pre-production stages of the project I was supervising, I was able to identify problematic scenes and offer suggestions that simplified and reduced the workload of certain effects.

For instance, the script had a character carry a magical amulet that was supposed to glow with light. Making a practical glow wasn’t possible so it became a VFX task. Had this been a traditional film I’d consider this a simple task that requires brief design work, some tracking and maybe a bit of rotoscoping. I would expect the amulet to make brief appearances in one or two close-up shots, a medium-shot and potentially a couple of long-shots. I’d be able to suggest framing and blocking adjustments for each shot individually to simplify tasks and save costs.

But this being a 360-video, I suspected the scene might be filmed in a continuous take, capturing the entire spherical environment – meaning the action wouldn’t be broken down into manageable chunks, and we wouldn’t have editing to rely on as a safeguard. This effect would potentially be visible for the entire duration of the scene.
Anticipating this in advance gave us a great advantage. Given our time and budget constraints I recommended limiting the visibility of the glowing amulet either by making it glow intermittently or covering it while not in use. The director decided to have it stashed in a pocket for the majority of the scene – limiting the effect to when it was truly needed.

This saved us a great deal of work that wasn’t crucial for the story. It also served as an important thought-exercise that made the entire crew more mindful of similar pitfalls and on the look-out for similar opportunities, which brings up another challenge of working in this medium:

WHY IS EVERYONE SO INEXPERIENCED

Creating compelling Virtual Reality experiences has been attempted several times in the last few decades, but it wasn’t until now that the entertainment industry started taking it more seriously. Even established filmmakers are making their first footprints in virtual reality, and there aren’t many notable VR films for inspiration. This makes every VR creator somewhat of an experimental filmmaker, working with a high risk of failure or undesired outcomes.
Whether the mistakes are your own fault or made against your advice, fixing VR-related issues could become your responsibility, adding new challenges to your work.

Here are some examples of mishaps that may originate from a lack of experience in VR storytelling:

Failing to direct viewers’ focus. 360-video VR lets viewers look around freely. This freedom introduces the risk of missing out on key information. Guiding viewers’ focus through a 360-degree panoramic view is much harder than through a traditional screen. With tools like framing and editing either gone or severely limited, directors who have never faced this challenge might fail to recognize the importance of preparation and pre-visualization. A proper workflow demands thorough planning of actors’ positions and movements, careful timing, visual composition that dissuades viewers from looking away, and various other approaches. Giving in to the temptation of positioning parallel action around the 360 camera can cause confusion and frustration for viewers, as they can only look in one direction at a time.
Because live previewing is not yet widely available, such mistakes might remain unnoticed until after the shoot is over, when the director gets to review footage in a VR headset. By then it might be too late to re-shoot the footage, turning the problem over to post-production. Certain things can be fixed in post-production, by manipulating the timing of certain events, adding CG elements to grab viewers’ attention and aim them in the right direction, etc. But such fixes shouldn’t be taken lightly, as they are equally affected by the heightened complexity the VR medium introduces. Finally, in the case of an unfixable mistake, you could end up creating an effect knowing that a large chunk of viewers might miss it by looking elsewhere.

Remaining too stationary. I mentioned earlier that VR tends to incorporate long uninterrupted shots, and that camera movements tend to be avoided. This is mainly because early attempts to move cameras in VR caused nausea, and jump-cuts caused disorientation.
With the advancement of VR headsets and a growing understanding of the causes of VR-sickness and of orientation cues, these side effects can now be avoided without much compromise. This requires a bit of testing and prototyping, but the benefits are great compared to the limitations of a stationary camera and the inability to cut. Needless to say, reintroducing cuts can help shrink the length of VFX shots, and moving the camera can further optimize viewing angles and save costs.

Inefficient visual vocabulary. Even experienced directors could find themselves out of their element when working in VR for the first time, and struggle to visualize the project they’re creating. Pre-visualization tools are extremely helpful. If they’re not used for any reason, the director’s ability to envision their creation, let alone communicate it outward, may be hindered.
This could not only disrupt your ability to plan and execute effects efficiently, but also force you to backtrack and pre-visualize using final assets, while constrained to footage that’s far from optimal.

Relying on outdated concepts of VFX workflows. Generally speaking, directors who are familiar with VFX workflows are great to work with. In some cases they do your job for you by anticipating and dodging certain VFX traps in the early planning stages of certain scenes. Entering the VR realm alongside such a director can be a great experience of joint discovery. But over-confidence could lead a director to prep and shoot without a VFX supervisor, assuming the same rules that apply to traditional video apply to VR as well. Even if they follow every rule in the rulebook, without having researched and tested various tools and calculated workloads and bottlenecks, they are shooting in the dark. While it’s always best to err on the side of caution when accepting VFX tasks you didn’t supervise on set, that’s especially true in VR.

Being overly ambitious. For all the reasons mentioned, and especially the ones that I’ll be getting to shortly, creating a VR experience is already an ambitious undertaking that can be surprisingly challenging and mentally taxing. But that’s not going to stop dreamers from dreaming, and you may end up working for a director who believes anything is possible. It’s therefore important to communicate from the very beginning the compounded complexities involved in creating VFX in VR, and to prepare your collaborators for the unexpected limitations that will be encountered.

EXPERIENCE WILL BE THRUST UPON YOU

Everyone is fairly new to VR and many of the restrictions and pitfalls are being discovered for the first time. The good news is that lessons are being learned and documented and a growing number of filmmakers are gaining experience. Old perceptions will inevitably be replaced by new ones, and our jobs as filmmakers and VFX creators will become easier as a result.

Beyond conceptual challenges, VR reintroduces a healthy amount of technical challenges as well. Even a fairly simple effect like adding glowing light-rays to an amulet can be quite time-consuming when you’re working in stereoscopic 6K resolution.

Part 2 of this post lists many of the technical challenges facing VFX creators working on VR productions.

Proceed to part two ->

 

Creating visual effects is a collaborative process. Visual effects companies, skilled as they may be, depend greatly on their clients’ ability to relay notes effectively – and all too often clients are lacking in that respect.

As a visual effects supervisor I know that helping clients articulate their notes can often take as long as implementing them. Having sat in the client’s chair as well, I’ve experienced first-hand the frustration of being misinterpreted, and the absence of information on what tools or techniques I could use to articulate my notes more effectively (I’ve yet to see similar how-to’s online).

Since directors rarely see other directors at work there is little cross-pollination of ideas, workflows and experiences among them. If it’s hard to articulate subtle notes on a moving visual effect, it’s even harder without having seen others do it successfully first. Therefore, I consider myself lucky to get to see other directors/clients do it, and learn from their successes and failures alike.

With this post I hope to share some of my insights from working with numerous clients and point out several ways to give notes on visual effects. Even if none of this is new to you, going through this might remind you of a tool you’ve been ignoring! Either way, I intend to keep updating this over time so feel free to suggest additions. Hopefully this can help clients and vendors save a lot of frustration!

Here are ways to give notes on visual effects more effectively:

Listen to your vendor

Sounds basic, but some clients forget that their vendor probably has more experience communicating with clients than they do communicating with vendors. This doesn’t mean that vendors are always right, or that they know better than you what’s good for your project – not at all. But when it comes purely to communication, your vendor has likely seen other clients struggle the same way you do, and may have valuable suggestions and best practices for communicating notes.

Only one point-person gives notes

Here’s an example of a simple way to give a note:

It’s common for a client to consult their peers when giving notes, whether it’s a producer, studio exec, production designer, DOP etc. When doing that, however, it’s usually better to keep those conversations internal, and present the vendor with final decisions in a definitive way, through the pre-assigned point person (i.e. the director). For vendors, receiving notes from multiple entities simultaneously can be incredibly confusing and frustrating:

You can see how receiving different notes from different sources about the same task might send your vendor on a wild goose chase for a clear direction, causing delay and inefficiency. If a single point person is not established, and notes arrive from multiple people, the vendor can’t trust any specific feedback, and might hold work until a firm direction is given.

Distinguish discussions from directions

It can be useful to have the vendor weigh-in on creative discussions, whether in a creative meeting in person, or an electronic discussion. That said, it’s extremely important to distinguish creative discussions from client notes.

Otherwise, things quickly become murky and confusing. Imagine a scenario where instead of giving your vendor decisive notes, you invite them to a shared Google presentation, where multiple people on your team are each writing their thoughts:

As you can see, it’s really hard for a vendor to deduce such a document into a task list:

  • Notes contradict each other.
  • A note is pending a team member’s response.
  • A new reference was added, but is it approved?
  • A reference is unclear.
  • It’s not clear who should respond.

Generally, if you’ve had an in-depth creative meeting with your vendor early on, their attendance at another “creative brainstorming” meeting isn’t crucial, and it’s best to present them with decisions, or have the point person consult with them separately (unless you decide to scrap everything and start from scratch).

When in doubt, ask for clarification

During the process of creating visual effects, a vendor might ask for your opinion on something without explaining what it is or how far along it is in the process. As a client, you are probably eager to see results, and are likely to assume you’re being shown a completed shot – even if that’s not the case. By doing that, though, you might be giving notes on things that the vendor hasn’t even touched yet, instead of focusing on what is relevant at that stage.

It’s really your vendor’s responsibility to indicate what’s being sent (“Preview shot: colors not final”, “Final shot before polishes”, etc.), but since you’re both in it together, when in doubt – ask for clarification!

E-mail is your friend (if you use it properly)

Writing notes and follow-ups in one continuous e-mail thread (Gmail does that automatically) offers an efficient way to keep track of and review the process. Make sure the “Subject” field properly describes what most of the notes are for:

Strategize your e-mail threads: don’t cram too much into a single thread (try to keep each one task-specific), but at the same time avoid having too many threads active simultaneously:

Keeping e-mail threads subject-specific is a collaborative effort, as even an innocent mishap can derail a conversation or split the thread and cause a mess. Extra care and a conscious effort to keep things neat and organized will go a long way in maintaining a productive dialog and keeping your vendor focused and happy.

E-mail isn’t your only friend

While e-mail is most clients’ default (and favorite) communication tool, don’t forget you can also call or meet in person. Direct interactions help remove perceptual gaps and create a common vocabulary; they save time and money, and they can be fun!

Just make sure you keep the meeting quick and efficient.

When meeting in person isn’t physically possible, consider video-conferencing via Skype, which allows screen sharing – a very powerful tool for discussing moving visuals.

It’s worth looking into additional tools such as “cineSync” and “frankie”, which offer great review and annotation features with high visual fidelity and synchronized playback.

Consider using other, less obvious tools that can be good for communication and collaboration. For instance, I sometimes share Google Presentations with clients – letting them comment on images, add arrows, circles and notes, and throw in references and links, all in one shared space that’s accessible from anywhere.

Show, don’t tell

Since e-mail is the most common communication tool between clients and vendors, clients tend to automatically type their feedback in textual form. This may be sufficient in certain situations, but it’s good practice to explore other ways of articulating a note more precisely – you’ll often find that it requires showing rather than telling.

vs:

In this example, the client wants the vendor to close in on a certain part of the ship. Articulating that desire in text can only achieve a limited level of precision, while drawing a frame on top of a recent render provides a guide that can’t be misinterpreted.

The same applies to movement:

vs:

I used an Animated GIF as reference, but you can record yourself performing as well if you can’t find the right reference.

Again, words lack the specificity that a video reference provides. Nowadays, recording a video on your phone and attaching it to an e-mail takes 20 seconds at most and can save 2-3 days of iterations – but more importantly, it guarantees you get exactly what you need. Furthermore, your vendor will appreciate you for taking the time to provide articulate notes; it shows that you care about the project and about their ability to deliver quality material.

A cool tool that one of my clients used, and that I’ve since adopted myself, is video screen capture. Apple users can use QuickTime Player to record their screen while playing back shots, pausing, pointing at key areas and relaying their notes. PC users can use a tool called OBS (Open Broadcaster Software), which does the same thing.

Create a vocabulary

Experienced clients should use any form of artistic expression available to communicate their vision – paintings, poems, dance numbers, songs, films, etc. The bigger your collection of audio-visual references, the easier it is to articulate a specific style for your product. That said, make sure your vendor doesn’t drown under piles of references: keep the collection organized and task-specific.

To make this collection an even more powerful tool, label these references in such a way that you can more easily refer back to them.

You can use proper terminology or make up entirely new terms – all that matters is that everyone is on the same page, and that when you use a term like “Edged Swirl”, you and your vendor both have the same visual in mind.

In conclusion

The more effective we are at directing our vendors, the faster and more satisfying the process should be for everyone. Furthermore, with the world of entertainment evolving as rapidly as it is (virtual reality and augmented reality are entering the market as we speak), our ability to direct highly technical vendors efficiently may be crucial to our future in the field.

Please feel free to add your thoughts and ideas in the comments below, or contact me directly through this website.

As any film professional knows by now, visual effects have gradually become cheaper to create. This is due to many factors: increasing demand from across the film-production spectrum has led to a rapid rise in the number of VFX companies around the globe, including in countries with low labor costs; many territories offer tax incentives to lure productions in; and technological breakthroughs allow VFX to be created faster and by fewer artists.

Alas, while visual effects have definitely gotten cheaper, producers are quite often presented with bids that far exceed their available budget, and are forced to revise the film or make painful compromises in quality. Meanwhile, more and more VFX vendors are forced to work with shrinking budgets, short deadlines and frustrated clients hit in the face by reality.

The obvious reason for this is poor planning and budgeting. And still, time and time again, producers rush into production, convincing themselves that a VFX company will ultimately save the day. A widespread perception of the VFX process as an impenetrable “black box” only strengthens this conviction, as do illusory tales of VFX-heavy productions that were saved at the last minute, on the cheap!

The fact is, while the process of making VFX can be complicated and convoluted at times, it is not as difficult to comprehend as some professionals lead their clients to believe. One might find some twisted logic in keeping producers misinformed and wary of the VFX process. But in reality, knowledgeable and confident producers are more likely to deliver well-constructed shot elements, making the VFX process more efficient and the final result better AND cheaper.

With all that said, the best advice for a producer hoping to keep VFX costs low and enjoy a fruitful collaboration is to lock in a VFX company as early in pre-production as possible, and plan ahead together. Having a VFX supervisor on location scouts, looking out for invisible pitfalls and suggesting efficient workarounds, is invaluable. Planning sequences, drawing storyboards and pre-visualizing together are all ways to ensure you get the most bang for your buck – but more importantly, they empower the filmmakers to go bolder and more ambitious than they might have otherwise.

Producers often hold off on contracting a VFX company until after principal photography, because they’d rather have fewer moving parts to deal with during production. That’s backwards thinking: there’s nothing more frustrating than learning you could have saved thousands of dollars by turning the camera 15 degrees to the left, or that an element you spent two days and thousands of dollars capturing on camera could have been added in post in two hours. You’d be surprised by how often mistakes like this happen – and they can almost always be avoided by consulting a VFX supervisor ahead of time, and having one present while shooting.

But there’s an even better way to save on VFX costs: hiring a director with hands-on experience in VFX production, who can actually DO some of the VFX themselves. The advantages here are manifold. As motivated and passionate as a VFX vendor can get, a director will usually be far more willing to go above and beyond for their own project. A “VFX director” will usually require less time to communicate their vision to a VFX crew – and no time at all if they are creating the effects themselves. Of course, having VFX experience should never be a director’s only qualification, but good directors with VFX experience are becoming easier to find, thanks to the increased accessibility of VFX tools and technology.

Finally, if I had to put my finger on the number one reason productions spend more than they should on VFX, it would be a lack of communication on the VFX company’s part, and a resistance to new knowledge on the producers’ part. I know producing is tedious and stressful, but it also offers constant stimulation, ever-evolving challenges and never-ending opportunities to learn and grow. The same applies to VFX creation. Ironically, the more fun everyone has with it, the cheaper and more efficient it becomes.