VFX for 360 VR, and why you are not prepared for it (Part 2)

In part one of this post I explored workflow challenges resulting from how conceptually different 360-video VR is from everything I’ve worked on before in my career as a VFX supervisor. In this part I’ll be diving into technical challenges resulting from requirements and limitations unique to 360-video VR, which I’ve experienced while working on a 360-video web-series for YouTube.

(Part one includes a clarification on 360-video versus VR, and how this article focuses on 360-video production).

While trying to keep this short and digestible (and admittedly a bit provocative) I may have generalized, omitted reservations, and sounded more authoritative than I should. This article represents my opinion and my limited personal experience and knowledge – if you find any inaccuracies please mention them in the comment section of this blog.

WHO NEEDS 8K RESOLUTION

360 video is a strange beast when it comes to resolution. While it’s commonly argued that 4K video doesn’t offer a drastically improved viewing experience over HD, a 360 video only ever presents the viewer with a small portion of the full captured sphere. So when a 360 video is shot in 4K, the headset’s viewport shows roughly 480p-level detail – about a quarter of an HD image. To experience HD-level sharpness when viewing a 360 video, the resolution of the entire surrounding video must be around 8K or even more.
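
To make the numbers concrete, here’s a back-of-the-envelope calculation (a rough sketch that ignores lens distortion and vertical field of view, and assumes a ~90-degree horizontal FOV headset):

```python
def visible_width_px(pano_width_px, fov_deg=90):
    """Approximate horizontal pixels a headset viewport actually shows
    out of a full equirectangular panorama (crude estimate)."""
    return pano_width_px * fov_deg / 360.0

print(visible_width_px(3840))   # 4K panorama -> ~960 px, roughly SD
print(visible_width_px(7680))   # 8K panorama -> ~1920 px, roughly HD
```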

In our case, YouTube wanted to use our videos to test their new 8K streaming capabilities, so we had to work at a resolution so high that even our high-end workstations couldn’t play it back in real time. Below you can see a final frame in its entirety – shrunk to a fraction of its original size.

Here’s a portion of the shot at 1:1 scale:

While the huge resolution demands of 360 video are a major barrier to be reckoned with (I’ll get to them soon), there were other technical challenges we faced right off the bat.

WHERE ARE ALL THE PLUGINS

Most VFX supervisors in small to medium-sized VFX studios rely solely on off-the-shelf software and plug-ins. These save time by offering shortcuts and automations, and provide ready-made solutions to known problems.

The growing abundance of visual effects tools has made us somewhat spoiled and led us to expect to find a tool for every need. Since VR is so new, publicly available tools that cater to its specific needs are still scarce. The few that have come out, like Skybox Studio, suffer from first-generation limitations and bugs.

As a result there are many key components missing from a proper VR workflow:

“Immersive” preview in VR. The ability to see what we’re working on is fundamental, just like a painter must see the canvas in order to paint on it. Similarly, every VFX application has a monitor window which displays the final output. Even if we can’t always adjust the content in real time, we do generally work on the “output canvas” – meaning we can preview how the audience will see our work. In 360 VR the final output is viewed via headset, yet the production workflow still happens on a traditional computer screen.

The closest thing I’ve seen to “editing VR in VR” is the following demonstration by Unreal Engine (but this is only for game-engine VR, not for 360-video).

Until that changes, we are forced to work on computer monitors and run intermediate previews in our VR headsets. This will likely be solved soon enough; in the meantime, some plug-ins like Skybox Studio let us place a virtual camera inside the 360 sphere and see its point of view, which is pretty good for the time being.
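
That virtual-camera preview is essentially a reprojection: pick a view direction and sample a rectilinear viewport out of the equirectangular sphere. As a rough illustration of what such a plug-in does under the hood (my own simplified, nearest-neighbour math – not Skybox Studio’s actual implementation):

```python
import numpy as np

def equirect_viewport(pano, yaw_deg=0.0, pitch_deg=0.0, fov_deg=90.0,
                      out_w=640, out_h=360):
    """Sample a rectilinear viewport from an equirectangular panorama.
    pano: HxWx3 image array. Positive pitch tilts the view up."""
    h, w = pano.shape[:2]
    f = (out_w / 2) / np.tan(np.radians(fov_deg) / 2)     # focal length (px)
    xs, ys = np.meshgrid(np.arange(out_w) - out_w / 2,
                         np.arange(out_h) - out_h / 2)
    # Ray directions in camera space (z forward, y down), normalized
    dirs = np.stack([xs, ys, np.full_like(xs, f)], axis=-1)
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)
    p, y = np.radians(pitch_deg), np.radians(yaw_deg)
    rot_x = np.array([[1, 0, 0],
                      [0, np.cos(p), -np.sin(p)],
                      [0, np.sin(p),  np.cos(p)]])
    rot_y = np.array([[ np.cos(y), 0, np.sin(y)],
                      [ 0,         1, 0        ],
                      [-np.sin(y), 0, np.cos(y)]])
    dirs = dirs @ (rot_y @ rot_x).T
    lon = np.arctan2(dirs[..., 0], dirs[..., 2])          # [-pi, pi]
    lat = np.arcsin(np.clip(dirs[..., 1], -1, 1))         # [-pi/2, pi/2]
    u = ((lon / (2 * np.pi) + 0.5) * w).astype(int) % w
    v = ((lat / np.pi + 0.5) * h).clip(0, h - 1).astype(int)
    return pano[v, u]
```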

 

 

Client review in VR. In a previous post I offered some tips on giving notes on VFX effectively, using a variety of existing tools like Cinesync, Shotgun Review and others. Even with these tools available, many clients stick to writing notes in e-mail, which often leads to confusion and misinterpretation and can cause unnecessary revisions.
If vague notes like “The tree on the left should be bigger” are frustrating, imagine similar notes given on a VR scene, where there is no consistent frame to rely on and “left” has no context. Furthermore, when sending a VR sequence for review, you can’t even ensure the client sees all the important elements, unless you lock the viewer’s orientation and prevent the client from looking elsewhere.

Implementing a new vocabulary for screen-directions in VR can help, but it would be even better to have a client review tool inside a VR headset – letting clients review 360-videos and insert notes directly using motion controllers.

The closest thing I’ve seen that demonstrates this capability is this Fabric-Engine demo (also limited to game-engine or “desktop” VR, but it shows the functionality I’m talking about).

While there are several online 360-video players (like YouTube) that allow clients to review your work, I haven’t yet seen a review app that lets clients insert notes directly into a 360 video while wearing a headset. Until then, the best solution is probably a combination of a 360-video web-viewer and a 2D video review tool like Shotgun’s.
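
Until such a tool exists, even a simple shared convention helps: anchor every note to a timecode plus a direction on the sphere, instead of words like “left” or “behind you”. Here’s a hypothetical note format along those lines (the field names and conventions are my own invention, not an existing tool’s):

```python
from dataclasses import dataclass

@dataclass
class SphericalNote:
    """A review note anchored to a direction on the 360 sphere."""
    timecode: str      # e.g. "00:01:23:12"
    yaw_deg: float     # 0 = front of the stitch, positive = viewer's right
    pitch_deg: float   # 0 = horizon, positive = up
    text: str

note = SphericalNote("00:01:23:12", yaw_deg=-135.0, pitch_deg=10.0,
                     text="The tree back here should be bigger")
```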

 

Stereoscopic editing in VR. Many VFX studios have garnered experience working on stereoscopic effects, but since stereoscopic video never managed to dominate the home-entertainment market, most VFX houses that don’t work on feature films haven’t upgraded their workflows to support stereoscopic VFX.

By reintroducing stereoscopic imagery, VR is blessing us with yet another layer of complexity and dependence on tools that haven’t fully matured yet.

 

Interactivity Authoring for VR. Because of how personalized the viewing experience is in VR, new opportunities open up for storytellers to explore, namely the ability to modify the experience in real-time based on the user’s actions. For instance, delaying certain events until the viewer turns to view them. This is especially effective in horror-themed VR experiences, where you want things to “jump around a corner” just as viewers turn to that corner.

Even though interactivity has more to do with programming and editing than with visual effects, it can still be requested of you, and you may choose to accept the responsibility. Such interactivity is relatively common in computer-generated real-time VR experiences, but rarely exists in 360-video. This is because the few authoring tools that do exist, such as the Unity 3D game engine, are geared towards real-time VR applications rather than high-resolution 360 video. While very capable, they currently offer only ground-level functionality that requires further development.
Even if the task of creating interactivity ends up being someone else’s responsibility, creating VFX for an interactive experience still poses challenges. Events triggered by the viewer often call for preceding events to loop indefinitely, creating the need for “loop-able” effects – something relatively uncommon in traditional VFX.
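
One common trick for making a rendered element loop is to cross-fade the tail of the sequence into its head. A minimal NumPy sketch of the idea (my own illustration, assuming frames are float image arrays; real simulations often need more care than a linear blend):

```python
import numpy as np

def make_loopable(frames, overlap):
    """Cross-fade the last `overlap` frames into the first ones so the
    sequence repeats seamlessly. Returns len(frames) - overlap frames."""
    frames = np.asarray(frames, dtype=float)
    m = len(frames) - overlap
    out = frames[:m].copy()
    for i in range(overlap):
        t = i / max(overlap - 1, 1)              # blend weight ramps 0 -> 1
        out[i] = frames[m + i] * (1 - t) + frames[i] * t
    return out
```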

WHY DO OUR SYSTEMS KEEP CRASHING

Specialized tools are always nice to have, but the most basic tools are so ingrained in our process that we often forget they exist – until they break.

Most of our trusted tools were initially developed for standard definition, then upgraded to HD, and later to 4K and 8K. While most modern applications do support high resolutions, plenty of issues and bugs still surface when we load 8K footage into an HD-minded workflow.

Video codecs break. In our project, we used a 360-video camera rig called GoPro Odyssey, which uses 16 GoPro cameras to capture stereoscopic panoramic video at up to 8K resolution. This video then gets compressed into an MPEG-4 file with a bitrate of 600 Mbit/s. At some point, black patches started appearing at the bottom of random frames when we played these files in Adobe Premiere or Adobe After Effects. The artifacts were inconsistent and would sometimes disappear from some frames and appear on others. We didn’t have time to chase the root cause and ended up manually re-rendering problematic frames. I assume it was a decoding issue related to hitting a buffer limit before the entire frame finished processing, but that’s only a guess. The bottom line is, after years of relying on a codec to the point of forgetting it even existed, it suddenly broke – and caused manual clean-up we had no way of anticipating.
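
Had we had more time, flagging the broken frames could have been automated with a simple scan. A rough sketch (the thresholds are guesses to tune per project, and imageio is just my example reader, not what we used):

```python
import numpy as np
import imageio.v3 as iio

def find_black_bottoms(frame_paths, band_frac=0.1, threshold=4.0):
    """Flag frames whose bottom band is nearly pure black – a crude way
    to catch decode glitches like the ones described above."""
    bad = []
    for path in frame_paths:
        img = iio.imread(path).astype(float)
        band = img[int(img.shape[0] * (1 - band_frac)):]   # bottom 10%
        if band.mean() < threshold:
            bad.append(path)
    return bad
```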

 

Nonlinear 6K editing causes application crashes. Layering videos on top of each other is an essential part of video editing, and especially of video compositing. It’s so integral to the VFX workflow that we expect to be able to layer any media we work with. But at huge resolutions, adding a second 6K video on top of the first can cause the software to freeze up or even crash. Editors often work on proxy files to get real-time playback, but VFX artists are usually forced to work “online”, at the final resolution. One way to speed up the workflow and avoid crashes is to work in “patches” and combine them into the full frame once all the heavy lifting is finished, as sketched below. But when a visual effect wraps around the entire sphere this solution might not be possible, in which case using proxy files or committing to lower resolutions may be inevitable for now.
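
The “patch” workflow is essentially tiling: cut the frame into manageable pieces, composite each, then reassemble. A minimal sketch of the bookkeeping (my own illustration; a real pipeline would add overlap padding for filters that sample neighbouring pixels):

```python
import numpy as np

def split_tiles(frame, rows=2, cols=4):
    """Cut a frame into rows*cols tiles for piecewise compositing."""
    h, w = frame.shape[:2]
    return [frame[r * h // rows:(r + 1) * h // rows,
                  c * w // cols:(c + 1) * w // cols]
            for r in range(rows) for c in range(cols)]

def join_tiles(tiles, rows=2, cols=4):
    """Reassemble the tiles into the full frame."""
    return np.vstack([np.hstack(tiles[r * cols:(r + 1) * cols])
                      for r in range(rows)])
```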

 

Multitasking can freeze the system. Remember a time when you could only work in one 3D application at a time, and loading another would crash the system? Well, those times are back. Even software designed for a parallel workflow, like Adobe’s “dynamic link”, seemed to lose its stability when working on 6K files. We ended up having to stick to one application at a time throughout production to avoid system crashes.

 

New “render tasks” emerge. Obviously, the higher the resolution, the longer renders become, but certain processes are normally so brief we wouldn’t factor them in as “render tasks”. Things like combining image sequences into compressed video files or generating half-resolution previews are fast enough for users to run on their local system and wait until they’re done to resume work. At 6K resolution or more, even those quick local renders may slow down to the point of becoming “render tasks”, adding another step to a workflow that is already much slower than we’re used to.
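
The slowdown is easy to underestimate because pixel counts grow quadratically with resolution. A quick sanity check (using an illustrative 6144x6144 stereo over/under frame – actual layouts vary by rig and delivery spec):

```python
hd_pixels = 1920 * 1080
vr_pixels = 6144 * 6144           # 6K stereo over/under, illustrative
print(vr_pixels / hd_pixels)      # ~18x the pixels of a single HD frame
```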

 

Big files might choke file servers. We had several editors and VFX artists working directly off SAN network storage, which at certain points suffered from atypical slowdowns. This reduced our systems’ response rates and cost us entire days of work.

Big files take long to transfer. Whether via the internet or a shuttle drive, copying multiple video files weighing over 100GB each takes significantly longer than on regular projects. Compressing and extracting files may also take unusually long, and any compression or file-transfer errors can cause further delays.
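
Here too the arithmetic is sobering. A rough estimate for a single file (assuming an illustrative 200 Mbit/s uplink and ignoring overhead and retries):

```python
size_gb = 100
link_mbps = 200                            # assumed uplink speed
hours = size_gb * 8000 / link_mbps / 3600  # GB -> Mbit, then seconds -> hours
print(f"{hours:.1f} h")                    # ~1.1 hours per file, best case
```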

 

 

Files aren’t playable in real time. As briefly mentioned before, no matter how heavy the compression, we weren’t able to play any 6K footage smoothly on our systems. Until the hardware catches up, I suppose this limitation will remain a hurdle.

Reliable tools become unreliable. Even heavily relied-on tools for platform-agnostic processes like fluid simulation, fur dynamics and particle behaviour, and post-processes like radial blur, might hit processing thresholds you never knew existed once exposed to the high-resolution demands of VR production.

Beyond these few examples, many other breaking points await on future projects. Until both hardware and software go through extensive stress-testing and stronger systems arrive on the market, everyone working in post production is underpowered when it comes to VR.

WHY IS THE ENTIRE PIPELINE UNDERPOWERED

VFX supervisors know that defining clear delivery specs (both to and from other post production peers) helps streamline work and save time. A proper workflow allocates time to test media conversion and transfer methods, to ensure they fit the bill before production runs into a deadline and a large number of deliverables must be handed over.

Given the technical stress that VR production puts on the entire pipeline (on the editing and color workflows as well as VFX), your peers will likely be struggling with similar technical hurdles and slowdowns, and might not be able to conform to your technical needs in a timely fashion. Things you ask for might arrive late or in the wrong formats, and asking an overwhelmed editor for new exports might be futile.

Similarly, as you are rendering and transferring your final shots back to the editor, time restrictions could prevent you from making necessary adjustments on time, causing frustration on the receiving end as well.

WHY DOES OUR WORK SUDDENLY SUCK

When so many things slow down simultaneously, the workflow gets disrupted as certain processes must be truncated to compensate for time lost. You may realize that for a task that normally requires at least three adjustments to get “right”, you only have time to do one. In our case this meant fewer revision cycles, which forced us to stick to simple designs and rely on basic tools we would normally use only as last-minute escape routes.

 

CONCLUSION: WE ARE PIONEERS

This post might read as a warning sign, but I see it more as a reality check for anyone expecting work in VR to be a walk in the park. In a way I wrote this post for my past self, the one who underestimated the challenges of VR, suffered the consequences and lived to tell the tale. But truth be told, I remember a time when every VFX job was this hard and complicated. A time when tools would break left and right, and technical troubleshooting often took longer than producing art. The art of computer-generated imagery isn’t that old, and many of us who do it today have done it since it was just as new and untested as VR is now. So this is no time to moan and groan about our tools not being fit for the job. It is time to embrace the challenges and overcome them – and to create new experiences that will hopefully inspire others the way we were inspired to become VFX artists.

<- Go back to part one

VFX for 360 VR, and why you are not prepared for it (Part 1)

As we near the end of 2016, current generation Virtual Reality remains a promising new medium that is far from reaching its potential. Whether it succeeds or fails in fulfilling its promise, many people in the visual effects community are racing to cater to the demands of 360-video virtual reality production.

For me personally, virtual reality is much more than a business opportunity. It excites the explorer in me by providing new uncharted territories and many unsolved challenges. So when Outpost|VFX was approached by YouTube to do VFX for several stereo-360 videos, I was immediately intrigued and eager to rise to the challenge.

Short turn-around time and a tight budget made it even more challenging, but didn’t deter me from signing up for the task. Though we ultimately succeeded and delivered all the visual effects to the client’s satisfaction, the road was full of obstacles that we didn’t anticipate. Over time these obstacles will inevitably be removed, but for now I hope this post will help newcomers by sharing some of the lessons learned during the process.

Before we begin, a brief note on the terms “360 video” and “virtual reality”:

As demonstrated here, 360-videos use the viewer’s head orientation only, displaying whichever part of the surroundings the viewer is facing. The “surroundings” are essentially a flat image warped into a sphere. This allows you to look in any direction while being locked to a fixed point in space.

Virtual Reality experiences use both the viewer’s head orientation and position to place the viewer in a video-game-like virtual world. This allows more freedom of movement than 360 video, but content is currently limited to real-time CGI due to limitations of video-capture technology.

This post will discuss visual effects for 360-videos only. However, much like the rest of the entertainment industry, I’ll use the term “VR” as well when referring to 360-videos.

 

WHY IS VR SO CHALLENGING

In many ways 360 video is nothing more than a wide format. A 360-degree image is essentially two 180-degree fish-eye images stitched together. So it’s easy to assume that the main challenges for VFX in this medium are matching lens distortions and figuring out how to review and give notes on effects in 360 degrees.

In reality, the differences between 360-degree and traditional video are so vast that they affect every step of the process in more ways than one. Entering this new realm often requires leaving behind our most trusted tools and fail-safes. Our ability to rely on past experience is greatly reduced, and even the simplest VFX task can suddenly turn into a nightmare. It might sound like an exaggeration. Is it really that different? Well, obviously it depends on the specific needs of your project, but here are a few things to keep in mind:

YOU’VE GOT NOWHERE TO “HIDE”

If you’ve been in the game long enough you know that VFX supervisors rely on a variety of “tricks” that save time and money. Many of them don’t apply to 360 videos:

Editing is usually VFX’s best friend. As this GIF demonstrates, efficient use of cut-aways can hide certain actions that are hard to create or imagine (like a monster grabbing the kid by the head and lifting him off the ground) while adding more value to the scene. Cuts in VR can cause disorientation if they’re not carefully pre-planned, which means they can’t be relied on as an escape pod late in the game. Instead, VR 360 videos tend to incorporate long uninterrupted takes that call for long uninterrupted VFX. Keep that in mind when calculating render-times.

 


Framing is another trusted support beam for VFX artists. Traditionally the director decides how a scene will be framed, and can change the framing to leave out certain challenging effects, or at least trim parts that are unnecessarily complicated. In Reservoir Dogs, Tarantino famously framed out the cutting of an ear, forcing the audience to imagine it instead – and demonstrated how powerful these decisions can be. By now we are so used to designing VFX for a 16:9 frame that we automatically label effects as easy or complicated assuming they’ll be framed a certain way. There is no framing in VR. Well, technically there is, but it’s driven by the orientation of the viewer, who is free to choose whether to look away or stare directly at an effect. As a result, any VFX is potentially fully exposed from birth to death – consider that when calculating simulation times for your next fluid-dynamics shot in VR.

 

Camera movements are often perceived as something VFX supervisors prefer to avoid, as they raise the need for camera-tracking and other processes. In some situations, though, moving the camera (whether physically or in post-production) can help sell an effect. A good example is a camera shake following an explosion: it makes the explosion feel bigger by suggesting a shock-wave, while at the same time reducing the visibility of the VFX and allowing certain demanding simulations to be avoided.

In 360 video, camera movements can cause motion-sickness-like symptoms, also known as “VR-sickness”, and therefore aren’t used very often. Even when they are used, camera movements in VR can direct the viewer’s attention away from an effect, but can’t always take it off-screen completely.

 

Optical Effects such as lens flares, depth of field, motion blur, chromatic aberration, film grain and others are commonly added on top of visual effects to make them blend better and feel photorealistic while masking areas of low detail. These optical effects are so entrenched in our VFX workflow that it’s easy to forget how “naked” or “empty” our effects feel without them.

The absence of a single lens, frame and variable focal length makes 360 video extremely crisp and sharp. Lens flares are possible but behave strangely if they don’t change dynamically based on the viewing angle, which requires a layer of interactivity that isn’t standardized yet. Depth of field and motion blur are largely avoided because they impose optical limitations and disrupt immersion. Image-degradation effects like film grain and chromatic aberration are rarely used, as they create a “dirty glass” effect that is more distracting in VR than in traditional film.
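
To illustrate the lens-flare problem: the flare’s intensity would have to be driven by the angle between the viewer’s gaze and the light source – something a fixed render can’t know in advance. A hypothetical gain curve might look like this (entirely my own sketch, not any existing tool’s behaviour):

```python
import numpy as np

def flare_gain(view_dir, light_dir, falloff=8.0):
    """Flare strength peaks when the viewer looks straight at the light
    and fades quickly as the gaze turns away."""
    v = np.asarray(view_dir, dtype=float)
    l = np.asarray(light_dir, dtype=float)
    cos_angle = np.dot(v, l) / (np.linalg.norm(v) * np.linalg.norm(l))
    return max(cos_angle, 0.0) ** falloff

print(flare_gain([0, 0, 1], [0, 0, 1]))   # looking at the light -> 1.0
print(flare_gain([0, 0, 1], [1, 0, 1]))   # 45 degrees off -> ~0.06
```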

 

Eye Tracking is more of a predictive analysis that drives aesthetic decisions than a technical tool. I’m talking about being able to anticipate what viewers are likely to look at during a scene, and focusing more effort in those places. This ability is greatly reduced in VR because viewers have a much wider area they can explore visually, and are more prone to missing visual cues designed to grab their attention. This means you are forced to treat every part of the effect as a potential point of focused attention.

ACTUALLY, YOU’VE GOT PLACES TO HIDE

While this list may seem intimidating, many of these restrictions are easy to predict and address when breaking down the script of a VR film. By being involved in pre-production stages of the project I was supervising, I was able to identify problematic scenes and offer suggestions that simplified and reduced the workload of certain effects.

For instance, the script had a character carry a magical amulet that was supposed to glow with light. Making a practical glow wasn’t possible so it became a VFX task. Had this been a traditional film I’d consider this a simple task that requires brief design work, some tracking and maybe a bit of rotoscoping. I would expect the amulet to make brief appearances in one or two close-up shots, a medium-shot and potentially a couple of long-shots. I’d be able to suggest framing and blocking adjustments for each shot individually to simplify tasks and save costs.

But this being a 360-video, I suspected the scene might be filmed in a continuous take, capturing the entire spherical environment – meaning the action wouldn’t be broken down into sizeable chunks, and we wouldn’t have editing to rely on as a safeguard. This effect would potentially be visible for the entire duration of the scene.
Anticipating this in advance gave us a great advantage. Given our time and budget constraints I recommended limiting the visibility of the glowing amulet either by making it glow intermittently or covering it while not in use. The director decided to have it stashed in a pocket for the majority of the scene – limiting the effect to when it was truly needed.

This saved us a great deal of work that wasn’t crucial for the story. It also served as an important thought-exercise that made the entire crew more mindful of similar pitfalls and on the look-out for similar opportunities, which brings up another challenge of working in this medium:

WHY IS EVERYONE SO INEXPERIENCED

Creating compelling virtual reality experiences has been attempted several times over the last few decades, but it wasn’t until now that the entertainment industry started taking it more seriously. Even established filmmakers are making their first footprints in virtual reality, and there aren’t many notable VR films to draw inspiration from. This makes every VR creator somewhat of an experimental filmmaker, working with a high risk of failure or undesired outcomes.
Whether mistakes are your fault or happen against your advice, fixing VR-related issues could become your responsibility, adding new challenges to your work.

Here are a few examples of mishaps that may originate from a lack of experience in VR storytelling:

Failing to direct viewers’ focus. 360-video VR lets viewers look around freely. This freedom introduces the risk of missing out on key information. Guiding viewers’ focus through a 360-degree panoramic view is much harder than through a traditional screen. With tools like framing and editing either gone or severely limited, directors who have never faced this challenge might fail to recognize the importance of preparation and pre-visualization. A proper workflow demands thorough planning of actors’ positions and movements, careful timing, visual composition that dissuades viewers from looking away, and various other approaches. Giving in to the temptation of positioning parallel action all around the 360 camera can cause confusion and frustration for viewers, as they can only look in one direction at a time.
Because live-previewing is not yet widely available, such mistakes might go unnoticed until after the shoot is over, when the director gets to review footage in a VR headset. By then it might be too late to re-shoot, turning the problem over to post-production. Certain things can be fixed in post, by manipulating the timing of certain events, adding CG elements to grab viewers’ attention and aim it in the right direction, etc. But such fixes shouldn’t be taken lightly, as they are equally affected by the heightened complexity the VR medium introduces. Finally, in the case of an unfixable mistake, you could end up creating an effect knowing that a large chunk of viewers might miss it by looking elsewhere.

Remaining too stationary. I mentioned earlier that VR tends to incorporate long uninterrupted shots, and that camera movements tend to be avoided. This is mainly because early attempts to move cameras in VR caused nausea, and jump-cuts caused disorientation.
With the advancement of VR headsets and a growing understanding of the causes of VR-sickness and of orientation cues, these side-effects can now be avoided without much compromise. This requires a bit of testing and prototyping, but the benefits are great compared to the limitations of a stationary camera and the inability to cut. Needless to say, reintroducing cuts can help shrink the length of VFX shots, and moving the camera can further optimize viewing angles and save costs.

Inefficient visual vocabulary. Even experienced directors can find themselves out of their element when working in VR for the first time, and struggle to visualize the project they’re creating. Pre-visualization tools are extremely helpful here; if they’re not used, for whatever reason, the director’s ability to envision their creation, let alone communicate it outwards, may be hindered.
This can not only disrupt your ability to plan and execute effects efficiently, but also force you to backtrack and pre-visualize using final assets, while constrained to footage that’s far from optimal.

Relying on outdated concepts of VFX workflows. Generally speaking, directors who are familiar with VFX workflows are great to work with. In some cases they do your job for you by anticipating and dodging certain VFX traps in the early planning stages of certain scenes. Entering the VR realm alongside such a director can be a great experience of joint discovery. But over-confidence could lead a director to prep and shoot without a VFX supervisor, assuming the same rules that apply to traditional video apply to VR as well. Even if they follow every rule in the rulebook, without having researched and tested various tools and calculated workloads and bottlenecks, they are shooting in the dark. While it’s always best to err on the side of caution when accepting VFX tasks you didn’t supervise on set, that’s especially true in VR.

Being overly ambitious. For all the reasons mentioned, and especially the ones I’ll be getting to shortly, creating a VR experience is already an ambitious undertaking that can be surprisingly challenging and mentally taxing. But that’s not going to stop dreamers from dreaming, and you may end up working for a director who believes anything is possible. Therefore it’s important to communicate the compounded complexities of creating VFX in VR from the very beginning, and to prepare your collaborators for the unexpected limitations they are going to encounter.

EXPERIENCE WILL BE THRUST UPON YOU

Everyone is fairly new to VR and many of the restrictions and pitfalls are being discovered for the first time. The good news is that lessons are being learned and documented and a growing number of filmmakers are gaining experience. Old perceptions will inevitably be replaced by new ones, and our jobs as filmmakers and VFX creators will become easier as a result.

Beyond conceptual challenges, VR reintroduces a healthy amount of technical challenges as well. Even a fairly simple effect like adding glowing light-rays to an amulet can be quite time-consuming when you’re working in stereoscopic 6K resolution.

Part 2 of this post lists many of the technical challenges facing VFX creators working on VR productions.

Proceed to part two ->

How to give notes on visual effects, effectively?

 

Creating visual effects is a collaborative process. Visual effects companies, skilled as they may be, depend greatly on their clients’ ability to relay notes effectively – and all too often clients are lacking in that respect.

As a visual effects supervisor I know that helping clients articulate their notes can often take as long as implementing them. Having sat in the client’s chair as well, I’ve experienced first-hand the frustration of being misinterpreted, and the absence of information on what tools or techniques I could use to articulate my notes more effectively (I’ve yet to see similar how-to’s online).

Since directors rarely see other directors at work there is little cross-pollination of ideas, workflows and experiences among them. If it’s hard to articulate subtle notes on a moving visual effect, it’s even harder without having seen others do it successfully first. Therefore, I consider myself lucky to get to see other directors/clients do it, and learn from their successes and failures alike.

With this post I hope to share some of my insights from working with numerous clients and point out several ways to give notes on visual effects. Even if none of this is new to you, going through this might remind you of a tool you’ve been ignoring! Either way, I intend to keep updating this over time so feel free to suggest additions. Hopefully this can help clients and vendors save a lot of frustration!

Here are ways to give notes on visual effects more effectively:

Listen to your vendor

Sounds basic, but some clients forget that their vendor probably has more experience communicating with clients than they do communicating with vendors. This doesn’t mean that vendors are always right, or that they know better than you what’s good for your project – not at all. But when it comes purely to communication, your vendor has likely seen other clients struggle the same way you do, and may have valuable suggestions and best practices for communicating notes.

Only one point-person gives notes

It’s common for a client to consult their peers when giving notes, whether it’s a producer, studio exec, production designer, DOP, etc. When doing that, however, it’s usually better to keep those conversations internal and present the vendor with final decisions in a definitive way, through the pre-assigned point person (i.e. the director). For vendors, receiving notes from multiple people simultaneously can be incredibly confusing and frustrating.

You can see how receiving different notes from different sources about the same task might send your vendor on a wild-goose chase for a clear direction, causing delay and inefficiency. If a single point person is not established and notes arrive from multiple people, the vendor can’t trust any specific piece of feedback, and might hold off on work until a firm direction is given.

Distinguish discussions from directions

It can be useful to have the vendor weigh-in on creative discussions, whether in a creative meeting in person, or an electronic discussion. That said, it’s extremely important to distinguish creative discussions from client notes.

Otherwise, things quickly become murky and confusing. Imagine a scenario where, instead of giving your vendor decisive notes, you invite them to a shared Google presentation in which multiple people on your team each write their own thoughts.

As you can see, it’s really hard for a vendor to distill such a document into a task list:

  • Notes contradict each other.
  • A note is pending a team member’s response.
  • A new reference was added – but is it approved?
  • A reference is unclear.
  • It’s not clear who should respond.

Generally, if you’ve had an in-depth creative meeting with your vendor early on, their attendance at another “creative brainstorming” meeting isn’t crucial, and it’s best to present them with decisions, or have the point person consult with them separately (unless you decide to scrap everything and start from scratch).

When in doubt, ask for clarification

During the process of creating visual effects, a vendor might ask for your opinion on something without explaining what it is or how far along in the process it is. As a client, you are probably eager to see results, and are likely to assume you’re being shown a completed shot – even when that’s not the case. You might then end up giving notes on things the vendor hasn’t even touched yet, instead of focusing on what’s relevant at that stage.

It’s really your vendor’s responsibility to indicate what’s being sent – “Preview shot: colors not final”, “Final shot before polishes”, etc. – but since you’re both in it together, when in doubt, ask for clarification!

E-mail is your friend (If you use it properly)

Writing notes and follow-ups in one continuous e-mail thread (Gmail does this automatically) offers an efficient way to keep track of and review the process. Make sure the “Subject” field properly describes what most of the notes are for.

Strategize your e-mail threads: don’t cram too much into a single thread (try to keep it task-specific), but at the same time avoid having too many threads active simultaneously.

Keeping e-mail threads subject-specific is a collaborative effort, as even an innocent mishap can derail a conversation off point or split the thread and cause a mess. Extra care and a conscious effort of keeping things neat and organized will go a long way in maintaining productive dialog and keeping your vendor focused and happy.

E-mail isn’t your only friend

While e-mail is most clients’ default/favorite communication tool, don’t forget you can also call or meet in person. Direct interactions help remove perceptual gaps and create a common vocabulary, they save time and money, and can be fun!

Just make sure you keep the meeting quick and efficient.

When meeting in person isn’t physically possible, consider video-conferencing using Skype, which allows screen-sharing – a very powerful tool for discussing moving visuals.

It’s worth looking into additional tools such as “cineSync” and “frankie”, which offer great review and annotation tools, with perfect visual fidelity and sync.

Consider using other, less obvious tools, that can be good for communication and collaboration. For instance, I sometimes share Google Presentations with clients – allowing them to comment on images, add arrows, circles and notes, and throw in references and links, all on one shared space that is accessible from anywhere.

Show, don’t tell

Since e-mail is the most common communication tool with various vendors, clients tend to automatically type their feedback in textual form. This may be sufficient in certain situations, but it’s good practice to explore other ways of articulating a note more precisely – you’ll often find that it requires showing, rather than telling.

In this example, the client wants the vendor to close in on a certain part of the ship. Articulating that desire in text can only achieve a limited level of precision, while drawing a frame on top of a recent render provides a guide that can’t be misinterpreted.

The same applies to movement:

I used an Animated GIF as reference, but you can record yourself performing as well if you can’t find the right reference.

Again, words lack the specificity that a video reference provides. Nowadays, recording a video on your phone and attaching it to an e-mail takes 20 seconds at most and can save 2-3 days of iterations – but more importantly, it guarantees you get exactly what you need. Furthermore, your vendor will appreciate you taking the time to provide articulate notes, showing that you care about the project and about their ability to deliver quality material.

A cool tool that one of my clients used, and I’ve since adopted myself, is video screen-capture. Apple users can use QuickTime Player to record their screen while playing back shots, pausing, pointing at key areas and relaying their notes. PC users can use a tool called OBS (Open Broadcaster Software), which does the same thing.

Create a vocabulary

Experienced clients should use any form of artistic expression available to communicate their vision: paintings, poems, dance numbers, songs, films, etc. The bigger your collection of audio-visual references, the easier it is to articulate a specific style for your product. That said, make sure your vendor doesn’t drown under piles of references – keep them organized and task-specific.

To make this collection an even more powerful tool, label these references in such a way that you can more easily refer back to them.

You can use proper definitions, or make up an entirely new terminology – all that matters is that everyone is on the same page and when you use a term like “Edged Swirl”, you and your vendor both have the same visual in mind.

In conclusion

The more effective we are at directing our vendors, the faster and more satisfying the process should be for everyone. Furthermore, with the world of entertainment evolving as rapidly as it is (virtual reality and augmented reality are entering the markets as we speak) our ability to direct highly-technical vendors efficiently may be crucial to our future in the field.

Please feel free to add your thoughts and ideas in the comments below, or contact me directly through this website.

How to save on VFX costs?

As any film professional knows by now, visual effects have gradually become cheaper to create. This is due to many factors. Increasing demand from everywhere on the film-production spectrum has led to a rapid rise in the number of VFX companies around the globe, including in countries with low labor costs. Many territories offer tax incentives to lure productions in, and technological breakthroughs allow VFX to be created faster and by fewer artists.

Alas, while visual effects have definitely gotten cheaper, producers are quite often presented with bids that far exceed their available budget, and are forced to revise the film or make painful compromises in quality. Meanwhile, more and more VFX vendors are forced to work with shrinking budgets, short deadlines and frustrated clients hit in the face by reality.

The obvious reason for this is poor planning and budgeting. But still, time and time again, producers are thrust hastily into production with a forced self-conviction that a VFX company will ultimately save the day. A widespread perception of the VFX process as an impenetrable “Black Box” only strengthens this conviction, as do illusory tales of VFX-heavy productions that were saved at the last minute, on the cheap!

The fact is, while the process of making VFX can be complicated and convoluted at times, it is not as difficult to comprehend as some professionals lead their clients to believe. One might find some twisted logic in keeping producers misinformed and wary of the VFX process. But in reality, knowledgeable and confident producers are more likely to deliver well-constructed shot elements, making the VFX process more efficient and the final result better AND cheaper.

That said, the best advice for a producer, hoping to keep VFX costs low and enjoy a fruitful collaboration, is to lock-in a VFX company as early in pre-production as possible, and plan ahead together. Having a VFX supervisor on location scouts, looking out for invisible pitfalls and suggesting efficient workarounds is invaluable. Planning sequences, drawing storyboards, pre-visualizing together, are all ways to ensure getting the most bang for a buck – but more importantly, to empower the filmmakers to go bolder and more ambitious than they might have otherwise.

Producers often hold-off on contracting a VFX company until after principal photography, because they’d rather have fewer moving parts to deal with in production. That’s backwards thinking: there’s nothing more frustrating than learning you could have saved thousands of dollars by turning the camera 15 degrees left, or learning that an element you spent two days and thousands of dollars capturing on camera, could have been added in post in two hours. You’d be surprised by how often mistakes like this happen – and they almost always can be avoided by consulting a VFX supervisor ahead of time, and having one present while shooting.

But there’s an even better way to save on VFX cost: hiring a director with hands-on experience in VFX production, who can actually DO some of the VFX him/herself. The advantages here are manifold. As motivated and passionate as the VFX vendor can get, a director will usually be much more likely to go above and beyond for his own project. A “VFX director” will usually require less time to communicate his vision to a VFX crew, and no time, if he is creating the effects himself. Of course, having VFX experience should never be the only qualification in a director, but good directors with VFX experience are becoming easier to find – due to the increased accessibility of VFX tools and technology.

Finally, if I had to put my finger on the number one reason productions spend more than they should on VFX – it would be lack of communication on the VFX company’s part, and resistance to new knowledge on the producers’ part. I know producing is tedious and stressful, but it also offers constant stimulation, ever-evolving challenges and never-ending opportunities to learn and grow. The same applies to VFX creation. Ironically, the more fun everyone has with it, the cheaper and more efficient it becomes.