In part one of this post I explored workflow challenges resulting from how conceptually different 360-video VR is from everything I’ve worked on before in my career as a VFX supervisor. In this part I’ll be diving into technical challenges resulting from requirements and limitations unique to 360-video VR, which I’ve experienced while working on a 360-video web-series for YouTube.
(Part one includes a clarification on 360-video versus VR, and how this article focuses on 360-video production).
While trying to keep this short and digestible (and admittedly a bit provocative), I may have over-generalized and omitted reservations while sounding somewhat authoritative. This article represents my opinion and my limited personal experience and knowledge – if you find any inaccuracies, please mention them in the comment section of this blog.
WHO NEEDS 8K RESOLUTION
360 video is a strange beast when it comes to resolution. It’s a common argument that 4K video doesn’t offer a drastically improved viewing experience over HD video, but a 360 video presents the viewer with only a small portion of the entire captured frame at any moment. So when a 360 video is shot in 4K, the viewer effectively sees something closer to 480p – roughly a quarter of an HD image. To experience HD-level sharpness when viewing a 360 video, the resolution of the entire surrounding video must be around 8K or higher.
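To put rough numbers on this, here’s a back-of-the-envelope sketch. The 90-degree field of view is an assumption based on typical consumer headsets, and real lens distortion and stereoscopic layouts (which halve the per-eye pixels again) will shift the results:

```python
def visible_resolution(equirect_w, equirect_h, fov_h_deg=90, fov_v_deg=90):
    """Estimate how many source pixels cover the headset's field of view
    in an equirectangular 360 frame (2:1 layout assumed)."""
    visible_w = equirect_w * fov_h_deg / 360   # horizontal span covers 360 degrees
    visible_h = equirect_h * fov_v_deg / 180   # vertical span covers 180 degrees
    return int(visible_w), int(visible_h)

# A "4K" 2:1 equirectangular frame (3840x1920):
print(visible_resolution(3840, 1920))   # -> (960, 960)

# An "8K" frame (7680x3840) finally lands in HD territory per view:
print(visible_resolution(7680, 3840))   # -> (1920, 1920)
```

With a stereoscopic top-bottom layout each eye gets only half of those vertical pixels, which is how a “4K” file ends up delivering something closer to SD.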
In our case, YouTube wanted to use our videos to test their new 8K streaming capabilities, so we had to work at a resolution so high that even our high-end workstations couldn’t play it back in real time. Below you can see a final frame in its entirety – shrunk to a fraction of its original size.
Here’s a portion of the shot in 1/1 scale:
While the huge-resolution demands of 360 videos are a major barrier to be reckoned with (I’ll get to them soon), there were other technical challenges we were facing straight off the bat.
WHERE ARE ALL THE PLUGINS
Most VFX supervisors who work in small to medium-sized VFX studios rely solely on off-the-shelf software and plug-ins. These tools save time by offering shortcuts and automations, and they provide ready-made solutions to known problems.
The growing abundance of visual effects tools has made us somewhat spoiled and led us to expect to find a tool for every need. Since VR is so new, publicly available tools that cater to its specific needs are still scarce. The few that have come out, like Skybox Studio, suffer from first-generation limitations and bugs.
As a result there are many key components missing from a proper VR workflow:
“Immersive” preview in VR. The ability to see what we’re working on is fundamental, just like a painter must see the canvas in order to paint on it. Similarly, every VFX application has a monitor window which displays the final output. Even if we can’t always adjust the content in real time, we generally work on the “output canvas” – meaning we can preview how the audience will see our work. In 360 VR the final output is viewed via headset, yet the production workflow still happens on a traditional computer screen.
The closest thing I’ve seen to “editing VR in VR” is the following demonstration by Unreal Engine (but this is only for game-engine VR, not for 360 video).
Until that changes, we are forced to use computer monitors, and run intermediate previews in our VR headsets. This will likely be solved soon enough, and some plug-ins like Skybox Studio let us place a virtual camera inside the 360 sphere and see its point-of-view, which is pretty good for the time being.
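For the curious, the core of such a virtual-camera preview is just a projection out of the equirectangular frame. Here’s a minimal, unoptimized NumPy sketch using nearest-neighbour sampling – my own naming and not how Skybox Studio actually implements it:

```python
import numpy as np

def preview_view(equirect, yaw_deg=0.0, pitch_deg=0.0, fov_deg=90.0,
                 out_w=640, out_h=480):
    """Sample a perspective 'virtual camera' view from an equirectangular frame.
    equirect: H x W (x C) array. Nearest-neighbour sampling for brevity."""
    H, W = equirect.shape[:2]
    f = (out_w / 2) / np.tan(np.radians(fov_deg) / 2)   # focal length in pixels
    xs = np.arange(out_w) - out_w / 2
    ys = np.arange(out_h) - out_h / 2
    x, y = np.meshgrid(xs, ys)
    # Camera-space ray directions (z forward, y down), normalized
    dirs = np.stack([x, y, np.full_like(x, f)], axis=-1)
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)
    # Rotate rays by the camera's yaw (around vertical) and pitch
    yaw, pitch = np.radians(yaw_deg), np.radians(pitch_deg)
    Ry = np.array([[np.cos(yaw), 0, np.sin(yaw)],
                   [0, 1, 0],
                   [-np.sin(yaw), 0, np.cos(yaw)]])
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(pitch), -np.sin(pitch)],
                   [0, np.sin(pitch), np.cos(pitch)]])
    d = dirs @ (Ry @ Rx).T
    # Convert ray directions to longitude/latitude, then to pixel lookups
    lon = np.arctan2(d[..., 0], d[..., 2])                # -pi..pi
    lat = np.arcsin(np.clip(d[..., 1], -1, 1))            # -pi/2..pi/2
    u = ((lon / (2 * np.pi) + 0.5) * W).astype(int) % W
    v = ((lat / np.pi + 0.5) * H).astype(int).clip(0, H - 1)
    return equirect[v, u]
```

A production tool would add bilinear filtering and GPU acceleration, but the mapping itself is this small.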
Client review in VR. In a previous post I offered some tips on giving notes on VFX effectively, using a variety of existing tools like Cinesync, Shotgun Review and others. Even with these tools available, many clients stick to writing notes in email, which often leads to confusion and misinterpretation and can cause unnecessary revisions.
If vague notes like “The tree on the left should be bigger” are frustrating, imagine similar notes given to a VR scene, where there is no consistent frame to rely on and “left” is not contextualized. Furthermore, when sending a VR sequence to review, you can’t ensure the client even sees all important elements unless you lock the viewer’s orientation, preventing the client from looking elsewhere.
Implementing a new vocabulary for screen-directions in VR can help, but it would be even better to have a client review tool inside a VR headset – letting clients review 360-videos and insert notes directly using motion controllers.
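As an illustration of what such a vocabulary could look like, here is a hypothetical helper that turns a yaw/pitch direction into a clock-face phrase, relative to an agreed “front” of the scene (the function name and thresholds are my own invention, not an industry convention):

```python
def describe_direction(yaw_deg, pitch_deg):
    """Turn a viewing direction into an unambiguous note phrase.
    Yaw 0 is the scene's agreed 'front' (12 o'clock), positive yaw clockwise."""
    hour = round((yaw_deg % 360) / 30) % 12 or 12   # 30 degrees per clock hour
    if pitch_deg > 15:
        elevation = "high"
    elif pitch_deg < -15:
        elevation = "low"
    else:
        elevation = "eye level"
    return f"{hour} o'clock, {elevation}"

print(describe_direction(90, 0))    # -> "3 o'clock, eye level"
print(describe_direction(180, -30)) # -> "6 o'clock, low"
```

A note like “the tree at 10 o’clock, eye level” survives the missing frame in a way “the tree on the left” cannot.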
The closest thing I’ve seen that demonstrates this capability is this Fabric Engine demo (also limited to game-engine or “desktop” VR, but it shows the functionality I’m talking about).
While there are several online 360-video players (like YouTube) that allow clients to review your work, I haven’t yet seen a review app that lets clients insert notes directly into a 360 video while wearing a headset. Until then, the best solution is probably a combination of a 360-video web viewer and a 2D video review tool like Shotgun’s.
Stereoscopic editing in VR. Many VFX studios have garnered experience working on stereoscopic effects, but since stereoscopic video hasn’t managed to dominate the home-entertainment market, most VFX houses who don’t work on feature films haven’t upgraded their workflow to support stereoscopic VFX.
By reintroducing stereoscopic imagery, VR is blessing us with yet another layer of complexity and dependence on tools that haven’t fully matured yet.
Interactivity Authoring for VR. Because of how personalized the viewing experience is in VR, new opportunities open up for storytellers to explore, namely the ability to modify the experience in real-time based on the user’s actions. For instance, delaying certain events until the viewer turns to view them. This is especially effective in horror-themed VR experiences, where you want things to “jump around a corner” just as viewers turn to that corner.
Even though this has more to do with programming and editing than visual effects, it can still be requested of you, and you may choose to accept the responsibility. Such interactivity is relatively common in computer-generated real-time VR experiences, but rarely exists in 360 video. This is because the few authoring tools that do exist, such as the “Unity 3D” game engine, are geared towards real-time VR applications rather than high-resolution 360 video. While very capable, they currently only offer ground-level functionality that requires further development.
Even if the task of creating interactivity ends up being someone else’s responsibility, creating VFX for an interactive experience still offers some challenges. Events triggered by the viewer often call for previous events to loop indefinitely, creating the need for “loop-able” effects – something relatively uncommon in traditional VFX.
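A common way to make an effect loop-able is to crossfade the tail of the rendered sequence back into its head, so the last frame flows seamlessly into the first. A minimal sketch of that idea, assuming frames are NumPy arrays (real effects may need longer overlaps or simulation-level tricks rather than a plain dissolve):

```python
import numpy as np

def make_loopable(frames, overlap):
    """Turn a frame sequence into a seamless loop by crossfading the last
    `overlap` frames into the first `overlap` frames.
    Returns len(frames) - overlap frames that repeat without a visible seam."""
    frames = np.asarray(frames, dtype=float)
    n = len(frames)
    out = frames[:n - overlap].copy()
    for i in range(overlap):
        w = (i + 1) / (overlap + 1)          # head weight ramps up over the overlap
        # Start mostly on the tail (continuity with the loop's end),
        # end mostly on the head (continuity with the untouched frames).
        out[i] = frames[i] * w + frames[n - overlap + i] * (1 - w)
    return out
```

The same pattern works per-channel on full images, since the blend is element-wise.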
WHY DO OUR SYSTEMS KEEP CRASHING
Specialized tools are always nice to have, but our most basic tools are so ingrained in our process that we often forget they exist – until they break.
Most of our trusted tools were initially developed for standard-definition resolution, then upgraded to HD, and later to 4K and 8K. While most modern applications do support high resolutions, plenty of issues and bugs still surface when we load 8K footage into an HD-minded workflow.
Video Codecs break. In our project, we used a 360-video camera rig called GoPro Odyssey, which uses 16 GoPro cameras to capture a stereoscopic panoramic video at up to 8K resolution. This video then gets compressed into an MPEG-4 file with a bitrate of 600 Mbit/s. At some point black patches started appearing at the bottom of random frames when we played these in Adobe Premiere or Adobe After Effects. These artifacts were inconsistent and would sometimes disappear from some frames and appear on others. We didn’t have time to deal with this problem and ended up manually re-rendering problematic frames. I assume it was a decoding issue related to reaching a buffer limit before the entire frame finished processing, but it’s only a guess. The bottom line is, after years of relying on a codec to the point of forgetting it even existed, it suddenly broke – and caused the need for some manual clean-up we had no way of anticipating.
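If you hit similar decode artifacts, a cheap triage script can at least locate the broken frames instead of someone eyeballing thousands of them. A hypothetical detector (my own sketch, assuming the bottom of your shot is never legitimately near-black – a nadir patch or night scene would defeat it):

```python
import numpy as np

def has_black_bottom(frame, band=64, threshold=2.0):
    """Flag a frame whose bottom `band` rows decoded as (near-)black.
    frame: H x W (grayscale) or H x W x C array. Returns True if suspect."""
    bottom = np.asarray(frame, dtype=float)[-band:]
    return bool(bottom.mean() < threshold)

# Hypothetical triage loop over a decoded image sequence:
# broken = [i for i, f in enumerate(frames) if has_black_bottom(f)]
```

The list of flagged frame numbers then becomes the re-render queue.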
Nonlinear 6K editing causes application crashes. Layering videos on top of each other is an essential part of video editing, and especially of video compositing. It’s so integral to the VFX workflow that we expect to be able to layer any media we work on. But at huge resolutions, adding a second 6K video on top of the first can cause the software to freeze up or even crash. Editors often work on proxy files to get real-time playback, but VFX artists are often forced to work “online”, at the final resolution. One way to speed up the workflow and avoid crashes is to work in “patches” and then combine them into the full frame once all the heavy lifting is finished. But when a visual effect wraps around the entire sphere this solution might not be possible, in which case using proxy files or committing to lower resolutions may be inevitable.
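The “patches” approach can be sketched in a few lines: split the frame into tiles, process each tile independently, and write it back, so only a fraction of the image is in working memory per step. The caveat noted above applies – any effect whose footprint crosses tile borders (blurs, sphere-wrapping effects) needs overlapping padding or won’t survive this treatment:

```python
import numpy as np

def process_in_patches(frame, patch_h, patch_w, fn):
    """Apply fn to a huge frame one tile at a time.
    fn must be purely local (no dependence on pixels outside its tile)."""
    out = np.empty_like(frame)
    H, W = frame.shape[:2]
    for y in range(0, H, patch_h):
        for x in range(0, W, patch_w):
            out[y:y + patch_h, x:x + patch_w] = fn(frame[y:y + patch_h, x:x + patch_w])
    return out
```

In practice the tiles would be separate render jobs rather than a loop in one process, but the decomposition is the same.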
Multitasking can freeze the system. Remember a time when you could only run one 3D application at a time, and launching another would crash the system? Well, those days are back. Even software designed for a parallel workflow, like Adobe’s “dynamic link”, seemed to lose its stability when working on 6K files. We ended up sticking to one application at a time throughout the production to avoid system crashes.
New “render tasks” emerge. Obviously, the higher the resolution, the longer renders become, but certain processes are normally so brief we wouldn’t factor them in as “render tasks” at all. Things like combining image sequences into compressed video files or generating half-resolution previews are usually fast enough to run locally while the artist waits. At 6K resolution or more, even those quick local renders may slow down to the point of becoming full-fledged “render tasks”, adding yet another step to a workflow that is already much slower than we’re used to.
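To see why even a “quick” preview render balloons, consider that a trivial half-resolution proxy still touches every pixel – and a 6K frame holds roughly nine times the pixels of an HD frame (6144×3072 ≈ 18.9M vs 1920×1080 ≈ 2.1M). A minimal box-filter sketch of such a proxy pass:

```python
import numpy as np

def half_res_proxy(frame):
    """Generate a half-resolution preview frame by 2x2 box averaging.
    Even this trivial pass reads every source pixel once."""
    f = np.asarray(frame, dtype=float)
    H, W = f.shape[:2]
    H2, W2 = H - H % 2, W - W % 2          # drop an odd edge row/column if present
    f = f[:H2, :W2]
    return (f[0::2, 0::2] + f[1::2, 0::2] + f[0::2, 1::2] + f[1::2, 1::2]) / 4
```

Multiply that per-frame cost by thousands of frames and the “instant background step” becomes farm work.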
Big files might choke file servers. We had several editors and VFX artists working directly off a SAN network storage, which at certain points suffered from atypical slowdowns. This reduced our systems’ response rates and cost us entire days of work.
Big files take long to transfer. Whether via internet or a shuttle drive, copying multiple video files weighing over 100GB each takes significantly longer than on regular projects. Compressing and extracting files may also take unusually long, and any compression or file-transfer error can cause further delays.
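The transfer math is worth doing up front when scheduling deliveries. A quick estimator (decimal gigabytes, a sustained rate, and no protocol overhead assumed – real transfers will be slower):

```python
def transfer_hours(size_gb, rate_mbit_s):
    """Rough wall-clock hours to move a file at a sustained rate."""
    bits = size_gb * 8e9                    # decimal GB -> bits
    return bits / (rate_mbit_s * 1e6) / 3600

# A single 100 GB shot over a 100 Mbit/s uplink:
print(round(transfer_hours(100, 100), 1))   # -> 2.2 (hours)
```

Ten such files over the same link is a full day of nothing but copying, before any retries.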
Files aren’t playable in real time. As briefly mentioned before, no matter how heavily we compressed it, we weren’t able to play any 6K footage smoothly on our systems. Until the hardware catches up, I suppose this limitation will remain a hurdle.
Reliable tools become unreliable. Even heavily relied-on tools for platform-agnostic processes like fluid simulation, fur dynamics and particle behaviour, and various post-processes like radial blur, can hit processing thresholds you never knew existed once they meet the high-resolution demands of VR production.
Beyond these few examples, many other breaking points await us on future projects. Until both hardware and software go through extensive stress-tests and stronger systems arrive on the market, everyone working in post production is underpowered when it comes to VR.
WHY IS THE ENTIRE PIPELINE UNDERPOWERED
VFX supervisors know that defining clear delivery specs (both to and from other post production peers) helps streamline work and save time. A proper workflow allocates time to test media conversion and transfer methods, to ensure they fit the bill before production runs into a deadline and a large number of deliverables must be handed over.
Given the technical stress that VR production puts on the entire pipeline (on the VFX workflow as well as the editing and color workflows) your peers will likely be struggling with similar technical hurdles and slow-downs, and might not be able to conform to your technical needs in a timely fashion. Things you ask for might arrive late and in the wrong formats, and asking an overwhelmed editor for new exports might be futile.
Similarly, as you are rendering and transferring your final shots back to the editor, time restrictions could prevent you from making necessary adjustments on time, causing frustration on the receiving end as well.
WHY DOES OUR WORK SUDDENLY SUCK
When so many things slow down simultaneously, the workflow gets disrupted as certain processes must be truncated to compensate for time lost. You may realize that for a task that normally requires at least three adjustments to get “right”, you only have time to do one. In our case this meant fewer revision cycles, which forced us to stick to simple designs and rely on basic tools we would normally use only as last-minute escape routes.
CONCLUSION: WE ARE PIONEERS
This post might read as a warning sign, but I see it more as a reality-check for anyone expecting working for VR to be a walk in the park. In a way I wrote this post for my past-self, the one who had underestimated the challenges of VR, suffered the consequences and lived to tell the tale. But truth be told, I remember a time when every VFX job was this hard and complicated. A time when tools would break left and right, and technical troubleshooting often took longer than producing art. The art of computer-generated imagery isn’t that old, and many of us who do it today have done it since it was just as new and untested as VR is now. So this is no time to moan and groan about our tools not being fit for the job. It is time to embrace the challenges, overcome them, and create new experiences that will hopefully be as inspiring to others as the ones that inspired us to become VFX artists.