In part one of this post I explored the workflow challenges that stem from how conceptually different 360-video VR is from everything I’ve worked on before in my career as a VFX supervisor. In this part I’ll dive into the technical challenges created by requirements and limitations unique to 360-video VR, which I experienced while working on a 360-video web series for YouTube.

(Part one includes a clarification on 360-video versus VR, and explains why this article focuses on 360-video production.)

While trying to keep this short and digestible (and admittedly a bit provocative) I may have generalized, omitted reservations, and sounded somewhat authoritative. This article represents my opinion and my limited personal experience and knowledge – if you find any inaccuracies, please mention them in the comment section of this blog.

WHO NEEDS 8K RESOLUTION

360 video is a strange beast when it comes to resolution. While it’s a common argument that 4K video doesn’t offer a drastically improved viewing experience over HD video, a 360 video presents the viewer with only a small portion of the entire captured frame at any given moment. So when a 360 video is shot in 4K, the viewer only sees roughly 480p resolution – about a quarter of an HD image. To experience HD-level sharpness when viewing a 360 video, the resolution of the entire surrounding video must be around 8K or even more.
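To put rough numbers on this, here is a minimal back-of-envelope sketch in Python. The 2:1 equirectangular layout and the roughly 90-degree headset field of view are illustrative assumptions; actual projections and headsets vary.

```python
# Back-of-envelope estimate of how much of a 360 frame a viewer actually sees.
# Assumes a 2:1 equirectangular frame and a ~90-degree field of view in the
# headset -- both are rough, illustrative numbers.

def visible_pixels(frame_width, frame_height, h_fov_deg=90.0, v_fov_deg=90.0):
    """Approximate (width, height) of the source pixels inside the viewport."""
    visible_w = frame_width * (h_fov_deg / 360.0)   # fraction of the full 360 degrees
    visible_h = frame_height * (v_fov_deg / 180.0)  # fraction of the full 180 degrees
    return int(visible_w), int(visible_h)

# A "4K" equirectangular frame (3840 x 1920): the viewport covers roughly
# 960 x 960 source pixels -- and over/under stereo packing halves the vertical
# figure again.
print(visible_pixels(3840, 1920))   # -> (960, 960)

# An ~8K frame (7680 x 3840) is needed before the viewport approaches HD sharpness.
print(visible_pixels(7680, 3840))   # -> (1920, 1920)
```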

In our case, YouTube wanted to use our videos to test their new 8K streaming capabilities, so we had to work at a resolution so high that even our high-end workstations couldn’t play it back in real time. Below you can see a final frame in its entirety – shrunk to a fraction of its original size.

Here’s a portion of the shot in 1/1 scale:

While the huge resolution demands of 360 videos are a major barrier to be reckoned with (I’ll get to them soon), there were other technical challenges we faced right off the bat.

WHERE ARE ALL THE PLUGINS

Most VFX supervisors who work in small to medium-sized VFX studios rely solely on off-the-shelf software and plug-ins. They save time by offering shortcuts and automations, and provide solutions to known problems.

The growing abundance of visual effects tools has made us somewhat spoiled and led us to expect to find a tool for every need. Since VR is so new, publicly available tools that cater to the specific needs of VR are still scarce. The few that have come out, like Skybox Studio, suffer from first-generation limitations and bugs.

As a result there are many key components missing from a proper VR workflow:

“Immersive” preview in VR. The ability to see what we’re working on is fundamental, just as a painter must see the canvas in order to paint on it. Similarly, every VFX application has a monitor window that displays the final output. Even if we can’t always adjust the content in real time, we do generally work on the “output canvas” – meaning we can preview how the audience will see our work. In 360 VR the final output is viewed through a headset, yet the production workflow still happens on a traditional computer screen.

The closest thing I’ve seen to “editing VR in VR” is the following demonstration by Unreal Engine (but this is only for game-engine VR, not for 360 video).

Until that changes, we are forced to work on computer monitors and run intermediate previews in our VR headsets. This will likely be solved soon enough, and some plug-ins like Skybox Studio already let us place a virtual camera inside the 360 sphere and see its point of view, which is a decent stopgap for the time being.
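To make the idea concrete, here is a rough sketch (in Python with numpy) of what such a virtual camera does: it samples a flat, perspective view out of an equirectangular frame for a given yaw and pitch. It is purely illustrative – nearest-neighbour sampling, mono only, no roll – and plug-ins like Skybox Studio do this far more robustly.

```python
# Minimal "virtual camera inside the 360 sphere": sample a perspective view
# from an equirectangular frame for a given yaw/pitch. Illustrative only.
import numpy as np

def extract_view(equirect, yaw_deg, pitch_deg, fov_deg=90.0, out_size=960):
    h, w = equirect.shape[:2]
    f = (out_size / 2.0) / np.tan(np.radians(fov_deg) / 2.0)  # pinhole focal length

    # Build a ray direction for every output pixel (camera looks down +z).
    xs, ys = np.meshgrid(np.arange(out_size) - out_size / 2.0,
                         np.arange(out_size) - out_size / 2.0)
    dirs = np.stack([xs, ys, np.full_like(xs, f)], axis=-1)
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)

    # Rotate the rays by the requested pitch (around x) and yaw (around y).
    yaw, pitch = np.radians(yaw_deg), np.radians(pitch_deg)
    rot_x = np.array([[1, 0, 0],
                      [0, np.cos(pitch), -np.sin(pitch)],
                      [0, np.sin(pitch),  np.cos(pitch)]])
    rot_y = np.array([[ np.cos(yaw), 0, np.sin(yaw)],
                      [0, 1, 0],
                      [-np.sin(yaw), 0, np.cos(yaw)]])
    dirs = dirs @ rot_x.T @ rot_y.T

    # Convert directions to longitude/latitude, then to equirectangular pixels.
    lon = np.arctan2(dirs[..., 0], dirs[..., 2])        # -pi .. pi
    lat = np.arcsin(np.clip(dirs[..., 1], -1.0, 1.0))   # -pi/2 .. pi/2
    u = ((lon / (2 * np.pi) + 0.5) * (w - 1)).astype(int)
    v = ((lat / np.pi + 0.5) * (h - 1)).astype(int)
    return equirect[v, u]

# e.g. preview what a viewer facing 45 degrees to the right would see:
# view = extract_view(frame, yaw_deg=45, pitch_deg=0)
```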

Client review in VR. In a previous post I offered some tips on giving notes on VFX effectively, using a variety of existing tools like Cinesync, Shotgun Review and others. Even with these tools available, many clients stick to writing notes in email, which often leads to confusion and misinterpretation and can cause unnecessary revisions.
If vague notes like “The tree on the left should be bigger” are frustrating, imagine similar notes given on a VR scene, where there is no consistent frame to rely on and “left” has no context. Furthermore, when sending a VR sequence for review, you can’t ensure the client even sees all the important elements unless you lock the viewer’s orientation, preventing the client from looking elsewhere.

Implementing a new vocabulary for screen-directions in VR can help, but it would be even better to have a client review tool inside a VR headset – letting clients review 360-videos and insert notes directly using motion controllers.

The closest thing I’ve seen that demonstrates this capability is this Fabric Engine demo (also limited to game-engine or “desktop” VR, but it shows the functionality I’m talking about).

While there are several online 360-video players (like YouTube) that allow clients to review your work, I haven’t yet seen a review app that lets clients insert notes directly into a 360 video while wearing a headset. Until then, the best solution is probably a combination of a 360-video web viewer and a 2D review tool like Shotgun’s.

Stereoscopic editing in VR. Many VFX studios have gained experience working on stereoscopic effects, but since stereoscopic video hasn’t managed to dominate the home-entertainment market, most VFX houses that don’t work on feature films haven’t upgraded their workflows to support stereoscopic VFX.

By reintroducing stereoscopic imagery, VR is blessing us with yet another layer of complexity and dependence on tools that haven’t fully matured yet.

Interactivity Authoring for VR. Because of how personalized the viewing experience is in VR, new opportunities open up for storytellers to explore – namely the ability to modify the experience in real time based on the viewer’s actions, for instance delaying certain events until the viewer turns to view them. This is especially effective in horror-themed VR experiences, where you want things to jump out from around a corner just as the viewer turns toward that corner.

Even though this has more to do with programming and editing than with visual effects, it can still be requested of you, and you may choose to accept the responsibility. Such interactivity is relatively common in computer-generated real-time VR experiences, but rarely exists in 360-video. This is because the few authoring tools that do exist, such as the Unity 3D game engine, are geared towards real-time VR applications rather than high-resolution 360 video. While very capable, they currently only offer ground-level functionality that requires further development.
Even if the task of creating interactivity ends up being someone else’s responsibility, creating VFX for an interactive experience still poses some challenges. Events triggered by the viewer often call for the preceding action to loop indefinitely, creating the need for “loop-able” effects – something relatively uncommon in traditional VFX.
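To illustrate the kind of logic involved, here is a minimal sketch in Python of a gaze-triggered event: the event stays dormant (and the pre-event “idle” loop keeps playing) until the viewer’s yaw comes within an angular threshold of the event’s direction. The names and the 60-degree threshold are made up for the example; in practice this logic would live inside the playback app or game engine.

```python
# Minimal gaze-trigger sketch: fire an event once the viewer is roughly facing
# it; until then, keep playing the loop-able pre-event effect. All names and
# thresholds are hypothetical.

def angular_difference(a_deg, b_deg):
    """Smallest absolute difference between two headings, in degrees."""
    d = abs(a_deg - b_deg) % 360.0
    return min(d, 360.0 - d)

def update(viewer_yaw_deg, event_yaw_deg, event_triggered, trigger_fov_deg=60.0):
    """Return True once the viewer has turned far enough toward the event."""
    if not event_triggered and \
            angular_difference(viewer_yaw_deg, event_yaw_deg) < trigger_fov_deg / 2.0:
        event_triggered = True
    return event_triggered

# An event staged 120 degrees to the viewer's right stays dormant...
print(update(viewer_yaw_deg=0.0, event_yaw_deg=120.0, event_triggered=False))    # False
# ...and fires once the viewer turns far enough toward it.
print(update(viewer_yaw_deg=100.0, event_yaw_deg=120.0, event_triggered=False))  # True
```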

WHY DO OUR SYSTEMS KEEP CRASHING

Specialized tools are always nice to have, but the most basic tools are so ingrained in our process that we often forget they exist – until they break.

Most of our trusted tools were initially developed for standard definition, then upgraded to HD, and later to 4K and 8K. While most modern applications do support high resolutions, plenty of issues and bugs still appear when we load 8K footage into an HD-minded workflow.

Video Codecs break. In our project we used a 360-video camera rig called the GoPro Odyssey, which uses 16 GoPro cameras to capture stereoscopic panoramic video at up to 8K resolution. This video then gets compressed into an MPEG-4 file with a bitrate of 600 Mbit/s. At some point, black patches started appearing at the bottom of random frames when we played these files in Adobe Premiere or Adobe After Effects. These artifacts were inconsistent and would sometimes disappear from some frames and appear on others. We didn’t have time to chase the root cause and ended up manually re-rendering problematic frames. I assume it was a decoding issue related to hitting a buffer limit before the entire frame finished processing, but that’s only a guess. The bottom line is that after years of relying on a codec to the point of forgetting it even existed, it suddenly broke – and created a need for manual clean-up we had no way of anticipating.

Nonlinear 6K editing causes application crashes. Layering videos on top of each other is an essential part of video editing, and especially of video compositing. It’s so integral to the VFX workflow that we expect to be able to layer any media we work on. But at huge resolutions, adding a second 6K video on top of the first can cause software to freeze up or even crash. Editors often work with proxy files to get real-time playback, but VFX artists are usually forced to work “online”, at the final resolution. One way to speed up the workflow and avoid crashes is to work in “patches” and then combine them into the full frame once all the heavy lifting is finished. But when a visual effect wraps around the entire sphere this solution might not be possible, in which case using proxy files or committing to lower resolutions may be inevitable.
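As a rough illustration of that “patches” approach, here is a minimal Python/numpy sketch that splits a large frame into tiles, processes each tile independently, and reassembles the full frame at the end. The process_tile callback is a hypothetical stand-in for whatever heavy compositing step a real shot would need, and the sketch ignores effects that must wrap seamlessly across the seam of the sphere.

```python
# Minimal patch-based workflow: split a huge frame into tiles, process each
# tile on its own, then reassemble the full frame. Illustrative only.
import numpy as np

def process_in_patches(frame, tiles_x=4, tiles_y=2, process_tile=lambda t: t):
    h, w = frame.shape[:2]
    out = np.empty_like(frame)
    tile_h, tile_w = h // tiles_y, w // tiles_x
    for ty in range(tiles_y):
        for tx in range(tiles_x):
            y0, x0 = ty * tile_h, tx * tile_w
            # Let the last row/column of tiles absorb any remainder pixels.
            y1 = h if ty == tiles_y - 1 else y0 + tile_h
            x1 = w if tx == tiles_x - 1 else x0 + tile_w
            out[y0:y1, x0:x1] = process_tile(frame[y0:y1, x0:x1])
    return out

# e.g. an eight-tile split of a 6K-class frame, with a hypothetical comp step:
# result = process_in_patches(big_frame, tiles_x=4, tiles_y=2, process_tile=my_comp_step)
```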

Multitasking can freeze the system. Remember a time when you could only run one 3D application at a time, and launching another would crash the system? Well, those times are back. Even features designed for a parallel workflow, like Adobe’s Dynamic Link, seemed to lose stability when working with 6K files. We ended up sticking to one application at a time throughout the production to avoid system crashes.

New “render tasks” emerge. Obviously, the higher the resolution, the longer renders become, but certain processes are normally so brief that we wouldn’t factor them in as “render tasks” at all. Things like combining image sequences into compressed video files or generating half-resolution previews are usually fast enough for users to run on their local system and wait for them to finish before resuming work. At 6K resolution or more, even those quick local renders can slow down to the point of becoming “render tasks”, adding another step to a workflow that is already much slower than we’re used to.

Big files might choke file servers. We had several editors and VFX artists working directly off a SAN network storage, which at certain points suffered from atypical slowdowns. This reduced our systems’ response rates and cost us entire days of work.

Big files take long to transfer. Whether via the internet or a shuttle drive, copying multiple video files weighing over 100GB each takes significantly longer than on regular projects. Compressing and extracting files can also take unusually long, and any compression or file-transfer errors cause further delays.
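For a sense of scale, here is some rough transfer-time arithmetic in Python based on the file sizes mentioned above; the link speed is an illustrative assumption, not a measurement from our project.

```python
# Rough transfer-time arithmetic for 100 GB-class deliverables.

def transfer_hours(size_gb, speed_mbit_per_s):
    return (size_gb * 8 * 1000) / speed_mbit_per_s / 3600.0

# A single 100 GB master over an assumed 100 Mbit/s uplink: roughly 2.2 hours...
print(round(transfer_hours(100, 100), 1))    # ~2.2
# ...and a ten-file delivery at that size approaches a full day of copying,
# before any re-transfers caused by compression or transfer errors.
print(round(transfer_hours(1000, 100), 1))   # ~22.2
```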

Files aren’t playable in real time. As briefly mentioned before, no matter how heavily compressed, we weren’t able to play any 6K footage smoothly on our systems. Until the hardware catches up, I suppose this limitation will remain a hurdle.

Reliable tools become unreliable. Even heavily relied-on tools for platform-agnostic processes like fluid simulation, fur dynamics and particle behaviour, as well as post-processes like radial blur, might hit processing thresholds you never knew existed once they’re exposed to the high-resolution demands of VR production.

Beyond these few examples, many other breaking points await on future projects. Until both hardware and software go through extensive stress-testing and stronger systems arrive on the market, everyone working in post-production is underpowered when working in VR.

WHY IS THE ENTIRE PIPELINE UNDERPOWERED

VFX supervisors know that defining clear delivery specs (both to and from other post production peers) helps streamline work and save time. A proper workflow allocates time to test media conversion and transfer methods, to ensure they fit the bill before production runs into a deadline and a large number of deliverables must be handed over.

Given the technical stress that VR production puts on the entire pipeline (on the VFX workflow as well as the editing and color workflows), your peers will likely be struggling with similar technical hurdles and slowdowns, and might not be able to conform to your technical needs in a timely fashion. Things you ask for might arrive late and in the wrong formats, and asking an overwhelmed editor for new exports might be futile.

Similarly, as you are rendering and transferring your final shots back to the editor, time restrictions could prevent you from making necessary adjustments on time, causing frustration on the receiving end as well.

WHY DOES OUR WORK SUDDENLY SUCK

When so many things slow down simultaneously, the workflow gets disrupted as certain processes must be truncated to compensate for time lost. You may realize that for a task that normally requires at least three adjustments to get “right”, you only have time to do one. In our case this meant fewer revision cycles, which forced us to stick to simple designs and rely on basic tools we would normally use only as last-minute escape routes.

CONCLUSION: WE ARE PIONEERS

This post might read as a warning sign, but I see it more as a reality-check for anyone expecting work in VR to be a walk in the park. In a way I wrote this post for my past self, the one who underestimated the challenges of VR, suffered the consequences and lived to tell the tale. But truth be told, I remember a time when every VFX job was this hard and complicated. A time when tools would break left and right, and technical troubleshooting often took longer than producing art. The art of computer-generated imagery isn’t that old, and many of us who do it today have done it since it was just as new and untested as VR is now. So this is no time to moan and groan about our tools not being fit for the job. It is time to embrace the challenges, overcome them, and create new experiences that will hopefully be as inspiring to others as the ones that inspired us to become VFX artists.

<- Go back to part one

As we near the end of 2016, current-generation virtual reality remains a promising new medium that is far from reaching its potential. Whether it succeeds or fails in fulfilling its promise, many people in the visual effects community are racing to cater to the demands of 360-video virtual reality production.

For me personally, virtual reality is much more than a business opportunity. It excites the explorer in me by providing new uncharted territories and many unsolved challenges. So when Outpost|VFX was approached by YouTube to do VFX for several stereo-360 videos, I was immediately intrigued and eager to rise to the challenge.

Short turn-around time and a tight budget made it even more challenging, but didn’t deter me from signing up for the task. Though we ultimately succeeded and delivered all the visual effects to the client’s satisfaction, the road was full of obstacles that we didn’t anticipate. Over time these obstacles will inevitably be removed, but for now I hope this post will help newcomers by sharing some of the lessons learned during the process.

Before we begin, a brief note on the terms “360 video” and “virtual reality”:

As demonstrated here, 360-videos use the viewer’s head orientation only to display whichever part of the surroundings the viewer is facing. The “surroundings” are essentially a flat image warped into a sphere. This allows you to look in any direction while being locked to a fixed point in space.

Virtual Reality experiences use both the viewer’s head orientation and position to place the viewer in a video-game-like virtual world. This allows more freedom of movement than 360 video, but content is currently limited to real-time CGI due to limitations of video-capture technology.

This post will discuss visual effects for 360-videos only. However, much like the rest of the entertainment industry, I’ll use the term “VR” as well when referring to 360-videos.

 

WHY IS VR SO CHALLENGING

In many ways 360 video is nothing more than a wide format. A 360-degree image is essentially two 180-degree fish-eye images stitched together. So it’s easy to assume that the main challenges for VFX in this medium are matching lens distortions and figuring out how to review and give notes on effects in 360 degrees.

In reality, the differences between 360-degree and traditional video are so vast that they affect every step of the process in more ways than one. Entering this new realm often requires leaving behind our most trusted tools and fail-safes. Our ability to rely on past experience is greatly reduced, and even the simplest VFX task can suddenly turn into a nightmare. It might sound like an exaggeration – is it really that different? Obviously it depends on the specific needs of your project, but here are a few things to keep in mind:

YOU’VE GOT NOWHERE TO “HIDE”

If you’ve been in the game long enough you know that VFX supervisors rely on a variety of “tricks” that save time and money. Many of them don’t apply to 360 videos:

Editing is usually VFX’s best friend. As this GIF demonstrates, efficient use of cut-aways can hide certain actions that are hard to create or imagine (like a monster grabbing the kid by the head and lifting him off the ground) while adding more value to the scene. Cuts in VR can cause disorientation if they’re not carefully pre-planned, which means they can’t be relied on as an escape pod late in the game. Instead, VR 360 videos tend to incorporate long uninterrupted takes that call for long uninterrupted VFX. Keep that in mind when calculating render-times.

 

Framing is another trusted support beam for VFX artists. Traditionally the director decides how a scene will be framed, and can change the framing to leave out certain challenging effects, or at least trim parts that are unnecessarily complicated. In the famous Reservoir Dogs torture scene, Tarantino framed out the cutting of an ear, forcing the audience to imagine it instead – and demonstrated how powerful these decisions can be. By now we are so used to designing VFX for a 16:9 frame that we automatically label them as easy or complicated assuming they’ll be framed a certain way. There is no framing in VR. Well, technically there is, but it’s driven by the orientation of the viewer, who is free to choose whether to look away or stare directly at an effect. As a result, any VFX is potentially fully exposed from birth to death – consider that when calculating simulation times for your next fluid-dynamics shot in VR.

 

Camera movements are often perceived as something VFX supervisors prefer to avoid, as they raise the need for camera tracking and other processes. In some situations, though, moving the camera (whether physically or in post-production) can help sell an effect. A good example is camera shake following an explosion: it makes the explosion feel bigger by suggesting a shock-wave, and at the same time reduces the visibility of the VFX, allowing certain demanding simulations to be avoided.

In 360 video, camera movements can cause motion-sickness-like symptoms, also known as “VR sickness”, and therefore aren’t used very often. Even when they are used, camera movements in VR can direct the viewer’s attention away from an effect, but they can’t always take it off-screen completely.

 

Optical Effects such as lens flares, depth of field, motion blur, chromatic aberration, film grain and others are commonly added on top of visual effects to make them blend better and feel photorealistic while masking areas of low detail. These optical effects are so entrenched in our VFX workflow that it’s easy to forget how “naked” or “empty” our effects feel without them.

The absence of a single lens, frame and variable focal length makes 360 videos extremely crisp and sharp. Lens flares are possible, but they behave strangely unless they change dynamically based on the viewing angle, which requires a layer of interactivity that isn’t standardized yet. Depth of field and motion blur are largely avoided because they impose optical limitations and disrupt the immersion. Image-degradation effects like film grain and chromatic aberration are rarely used, as they create a “dirty glass” effect that is more distracting in VR than in traditional film.

 

Eye Tracking is more of an optical analysis that drives aesthetic decisions than a technical tool. I’m talking about being able to anticipate what viewers are likely to look at during a scene, and focusing more effort in those places. This ability is greatly reduced in VR because viewers have a much wider area they can explore visually, and are more prone to missing visual cues designed to grab their attention. This means you are forced to treat every part of the effect as a potential point of focused attention.

ACTUALLY, YOU’VE GOT PLACES TO HIDE

While this list may seem intimidating, many of these restrictions are easy to predict and address when breaking down the script of a VR film. By being involved in the pre-production stages of the project I was supervising, I was able to identify problematic scenes and offer suggestions that simplified certain effects and reduced their workload.

For instance, the script had a character carry a magical amulet that was supposed to glow with light. Making a practical glow wasn’t possible so it became a VFX task. Had this been a traditional film I’d consider this a simple task that requires brief design work, some tracking and maybe a bit of rotoscoping. I would expect the amulet to make brief appearances in one or two close-up shots, a medium-shot and potentially a couple of long-shots. I’d be able to suggest framing and blocking adjustments for each shot individually to simplify tasks and save costs.

But this being a 360 video, I suspected the scene might be filmed in a continuous take capturing the entire spherical environment – meaning the action wouldn’t be broken into manageable chunks, and we wouldn’t have editing to rely on as a safeguard. The effect would potentially be visible for the entire duration of the scene.
Anticipating this in advance gave us a great advantage. Given our time and budget constraints, I recommended limiting the visibility of the glowing amulet, either by making it glow intermittently or by covering it while not in use. The director decided to have it stashed in a pocket for the majority of the scene – limiting the effect to the moments when it was truly needed.

This saved us a great deal of work that wasn’t crucial for the story. It also served as an important thought-exercise that made the entire crew more mindful of similar pitfalls and on the look-out for similar opportunities, which brings up another challenge of working in this medium:

WHY IS EVERYONE SO INEXPERIENCED

Creating compelling virtual reality experiences has been attempted several times over the last few decades, but it wasn’t until now that the entertainment industry started taking it seriously. Even established filmmakers are leaving their first footprints in virtual reality, and there aren’t many notable VR films to draw inspiration from. This makes every VR creator somewhat of an experimental filmmaker, working with a high risk of failure or undesired outcomes.
Whether the mistakes are your own fault or were made against your advice, fixing VR-related issues could become your responsibility, adding new challenges to your work.

Here are some examples of mishaps that may originate from a lack of experience in VR storytelling:

Failing to direct viewers’ focus. 360-video VR lets viewers look around freely. This freedom introduces the risk of missing key information. Guiding viewers’ focus through a 360-degree panoramic view is much harder than through a traditional screen. With tools like framing and editing either gone or severely limited, directors who have never faced this challenge might fail to recognize the importance of preparation and pre-visualization. A proper workflow demands thorough planning of actors’ positions and movements, careful timing, visual composition that dissuades viewers from looking away, and various other approaches. Giving in to the temptation of staging parallel action all around the 360 camera can cause confusion and frustration for viewers, as they can only look in one direction at a time.
Because live previewing is not yet widely available, such mistakes might remain unnoticed until after the shoot is over, when the director gets to review the footage in a VR headset. By then it might be too late to re-shoot, turning the problem over to post-production. Certain things can be fixed in post, by manipulating the timing of certain events, adding CG elements to grab viewers’ attention and aim them in the right direction, and so on. But such fixes shouldn’t be taken lightly, as they are equally affected by the heightened complexity the VR medium introduces. Finally, in the case of an unfixable mistake, you could end up creating an effect knowing that a large chunk of viewers might miss it by looking elsewhere.

Remaining too stationary. I mentioned earlier that VR tends to incorporate long uninterrupted shots, and that camera movements tend to be avoided. This is mainly because early attempts to move cameras in VR caused nausea, and jump-cuts caused disorientation.
With the advancement of VR headsets and a growing understanding of the causes of VR sickness and of orientation cues, these side-effects can now be avoided without much compromise. This requires a bit of testing and prototyping, but the benefits are great compared to the limitations of a stationary camera and an inability to cut. Needless to say, reintroducing cuts can help shrink the length of VFX shots, and moving the camera can further optimize viewing angles and save costs.

Inefficient visual vocabulary. Even experienced directors can find themselves out of their element when working in VR for the first time, and struggle to visualize the project they’re creating. Pre-visualization tools are extremely helpful here; if they aren’t used for whatever reason, the director’s ability to envision their creation, let alone communicate it to others, may be hindered.
This could not only disrupt your ability to plan and execute effects efficiently, but also force you to backtrack and pre-visualize using final assets, while constrained to footage that’s far from optimal.

Relying on outdated concepts of VFX workflows. Generally speaking, directors who are familiar with VFX workflows are great to work with. In some cases they do your job for you by anticipating and dodging VFX traps in the early planning stages of a scene. Entering the VR realm alongside such a director can be a great experience of joint discovery. But over-confidence could lead a director to prep and shoot without a VFX supervisor, assuming the same rules that apply to traditional video apply to VR as well. Even if they follow every rule in the rulebook, without having researched and tested the various tools and calculated workloads and bottlenecks, they are shooting in the dark. While it’s always best to err on the side of caution when accepting VFX tasks you didn’t supervise on set, this is especially true in VR.

Being overly ambitious. For all the reasons mentioned, and especially the ones I’ll be getting to shortly, creating a VR experience is already an ambitious undertaking that can be surprisingly challenging and mentally taxing. But that’s not going to stop dreamers from dreaming, and you may end up working for a director who believes anything is possible. It’s therefore important to communicate the compounded complexities of creating VFX in VR from the very beginning, and to prepare your collaborators for the unexpected limitations they are going to encounter.

EXPERIENCE WILL BE THRUST UPON YOU

Everyone is fairly new to VR and many of the restrictions and pitfalls are being discovered for the first time. The good news is that lessons are being learned and documented and a growing number of filmmakers are gaining experience. Old perceptions will inevitably be replaced by new ones, and our jobs as filmmakers and VFX creators will become easier as a result.

Beyond conceptual challenges, VR reintroduces a healthy amount of technical challenges as well. Even a fairly simple effect like adding glowing light-rays to an amulet can be quite time-consuming when you’re working in stereoscopic 6K resolution.

Part 2 of this post lists many of the technical challenges facing VFX creators working on VR productions.

Proceed to part two ->