May 12, 2023

Picture North: Leading VFX with NeRF Tech

Loring Weisenberger
About the author
Loring Weisenberger

Loring is a Los Angeles-based writer, director, and creative producer. His work has been commissioned by a diverse range of clients, from Havas Worldwide to Wisecrack, Inc., and has been screened around the world. Through a background that blends project development with physical production across multiple formats, Loring has developed a uniquely eclectic skill set as a visual storyteller.

Disclaimer

At Wrapbook, we pride ourselves on providing outstanding free resources to producers and their crews, but this post is for informational purposes only as of the date above. The content on our website is not intended to provide and should not be relied on for legal, accounting, or tax advice. You should consult with your own legal, accounting, or tax advisors to determine how this general information may apply to your specific circumstances.

Neural Radiance Fields (NeRFs for short) are fully immersive 3D environments generated from a relatively small set of photos in a matter of minutes. They’re the latest leap forward in visual effects technology, and they have the potential to dramatically alter how we handle post-production.

To keep you ahead of the curve, this post will take you on a crash course through all things NeRF. We’ll show you what Neural Radiance Fields are, why producers should pay attention to them, and how Wrapbook client Picture North recently used NeRFs to expand the frontier of commercial VFX.

What is a Neural Radiance Field?

The term “Neural Radiance Field” sounds like science fiction, but this groundbreaking technology is rooted in science fact. While we’ll mostly steer clear of technical details in this post, it’s a good idea to kick things off with a scientific definition.

Here’s how scientists from UC Berkeley and Google Research introduced NeRFs in their original research presentation:

“[A] method that achieves state-of-the-art results for synthesizing novel views of complex scenes by optimizing an underlying continuous volumetric scene function using a sparse set of input views.”

Translation? Neural Radiance Fields enable you to generate new viewing angles on a scene by running a limited set of images from that scene through some hardcore mathematics. 

In other words, you take pictures of an environment and feed them into a computer program. The program generates a NeRF model. You can then manipulate that NeRF to view the environment from any angle you want, including angles that do not appear in your original set of pictures.

Let me repeat that. With Neural Radiance Fields, you can look at scanned objects or environments in ways that you have literally never looked at them before. 

You can see a NeRF in action with this quick demo video from NVIDIA:

Now, if you have any background in VFX, I know what you’re probably thinking.

Isn’t that basically a 3D photoscan?

The short answer is no. While producing a NeRF looks a lot like producing a typical photoscan, the difference between the two is exactly what makes Neural Radiance Fields so impressive.

3D photoscans are created through a process called photogrammetry. Photogrammetry uses a set of photographs taken of one object from multiple angles to reconstruct a 3D model of that object. 

If you were to take a few pictures of your favorite coffee mug and run them through photogrammetry software, you would end up with a 3D model (aka photoscan) of the mug. 

Sounds familiar, right?

Except photogrammetry is an incredibly direct and, therefore, limited process. It essentially takes what you see in each photo, finds commonalities between different photos, then stitches the photos together in three-dimensional space to create a model.
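To make “finds commonalities” a little more concrete, here’s a minimal sketch of the feature-matching step at the heart of photogrammetry, written with the open-source OpenCV library. The file names are placeholders, and this illustrates the general technique rather than the code inside any particular photoscan tool:

```python
import cv2

# The file names are placeholders for two photos of the same object
# taken from different angles.
img_a = cv2.imread("mug_angle_1.jpg", cv2.IMREAD_GRAYSCALE)
img_b = cv2.imread("mug_angle_2.jpg", cv2.IMREAD_GRAYSCALE)

# Detect distinctive keypoints in each photo and describe them.
orb = cv2.ORB_create()
kp_a, desc_a = orb.detectAndCompute(img_a, None)
kp_b, desc_b = orb.detectAndCompute(img_b, None)

# The "commonalities": keypoints that appear in both photos.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(desc_a, desc_b), key=lambda m: m.distance)

print(f"Found {len(matches)} shared points between the two photos")
# A photogrammetry tool triangulates points like these across many
# photos to stitch together a 3D model.
```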

The process can produce amazing results, but it also comes with significant weaknesses.

What weaknesses? Physical obstructions and certain characteristics of light, like reflections or deep shadows. They confuse photogrammetry software by seeming to modify the physical structure of the photographed object from angle to angle. 

For example, if you were to photograph a mirror, the image reflected in the mirror would change every time you moved, no matter how small the movement.

We take this phenomenon for granted in day-to-day reality, but it’s mind-blowing for photogrammetry software. As a result, the software produces an incomprehensible (and generally unusable) 3D model.  

Neural Radiance Fields, by contrast, have no problem with reflections, shadows, or even obstructions. 

Why? The reason is simple. 

NeRFs use NeRF AI to build an image, not copy it

Neural Radiance Fields do not reconstruct a 3D model from smaller pieces of 2D images. Instead, they use 2D images to learn and infer what a 3D space looks like. They leverage artificial intelligence to build a new space based on what the AI has learned from a dataset of 2D photos.

To illustrate, let’s return to the mirror example. 

When a NeRF encounters a mirror, it doesn’t try to scan the mirror as an object. Rather, it captures radiance values, which essentially record the color and intensity of the light arriving from each direction in each image. The NeRF AI then processes those captured radiance values into a radiance field.
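For the technically curious, a radiance field is typically stored in a small neural network: you hand it a 3D position and a viewing direction, and it hands back a color and a density. Below is a deliberately bare-bones sketch in PyTorch; a real NeRF adds ingredients like positional encoding and volume rendering that we’re skipping here:

```python
import torch
import torch.nn as nn

class TinyNeRF(nn.Module):
    """Bare-bones radiance field: (3D position, view direction) -> (color, density)."""

    def __init__(self, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(6, hidden), nn.ReLU(),   # inputs: x, y, z + view direction
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),              # outputs: r, g, b + density
        )

    def forward(self, position, direction):
        out = self.net(torch.cat([position, direction], dim=-1))
        rgb = torch.sigmoid(out[..., :3])   # color seen from this direction
        density = torch.relu(out[..., 3:])  # how "solid" this point in space is
        return rgb, density

model = TinyNeRF()
rgb, density = model(torch.rand(1, 3), torch.rand(1, 3))
```

Because the viewing direction is one of the inputs, the color the network predicts can change as you move, which is exactly how a trained NeRF reproduces the shifting reflections in that mirror.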

If you were to move around the mirror within the 3D space of the radiance field, you would see accurate reflections in the mirror’s surface from any angle, exactly as the NeRF AI inferred them.

This result is impossible with traditional photogrammetry. 

The same goes for images with refracted light, deep shadows, visual obstructions, and many other challenging scenarios. The machine learning behind NeRFs makes them more flexible than photoscans and far better at producing immersive results quickly.

When you move within a NeRF, lighting and reflections accurately shift because the NeRF AI has learned and predicted what they would look like from your new perspective. 

Check out this video from Corridor Digital for a visual demonstration:

NeRF technology is undoubtedly cool, but “cool” only gets you so far. Let’s talk about whether or not NeRFs are actually useful. 

Why should producers care about NeRFs?

Neural Radiance Fields are a disruptive technology. The tools are still brand new, and we’ll likely see them evolve a great deal before they become commonplace. 

Their potential, however, has already struck a major chord in the industry. Writing purely from a producer’s point of view, we can sum up the primary appeal in a single sentence.

NeRFs could save you serious time and money

As NeRF technology improves, it could significantly reduce the amount of labor required during post-production. In theory, NeRFs simplify or eliminate several time-consuming tasks associated with 3D VFX. Modeling, texturing, optimization, lighting, and more could potentially be done faster and easier. 

You also have to consider the other potential advantages of machine learning. For example, NeRF AI could one day enable filmmakers to remove an object from a scene with only a text description. With NeRF models, you might be able to change the time of day or type of weather in a shot without having to relight or texture anything.

The possibilities are virtually endless. It’s not difficult to see why Neural Radiance Fields could streamline VFX workflows on projects large and small. 

You could shave days or weeks from your post-production schedule. Alternatively, you could exchange that saved time to get more bang for your VFX budget. 

Plus, the evolution of NeRF technology promises a flood of new creative tools for artists and technicians. NeRFs offer a fresh method for interacting with images, which will in turn create new ways for filmmakers to tell stories. 

Just check out this video rundown of only a small selection of emerging NeRF research tools:

Again, these are early days for Neural Radiance Field technology. However, its potential is so compelling that we’re already seeing NeRFs put to real-world use. In fact, NeRFs were the centerpiece of a recent commercial for a major brand.

And it was made by a Wrapbook client. 

Case study: How Picture North used NeRFs for McDonald’s Lunar New Year 2023

Picture North is a Chicago-based production company that specializes in pushing boundaries and telling human-centric stories. They partnered with McDonald’s and content creator Karen X. Cheng to celebrate the Lunar New Year.

In the process, they created the first major TV commercial to use a Neural Radiance Field.

The commercial begins with the camera rushing through a McDonald’s window into a static scene of a family eating inside the restaurant. That scene is, you guessed it, a Neural Radiance Field.

The rest of the spot stitches together several NeRF models with cleverly disguised cuts, until we zoom through a Happy Meal box to join Karen in a studio space. 

The NeRF models inside the restaurant were created by filming the scene with a Steadicam-mounted camera for approximately 60 seconds. During that time, the family seated at the table had to remain completely still. Any significant movement would have resulted in a blurry image of their faces or bodies.

The individual frames recorded during the minute-long takes were fed into Luma AI, a NeRF-generating app available on the web and iOS devices. The production team was then able to manipulate the resulting NeRF models in post, using virtual production tools to add custom camera paths at will. 
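If you’re curious what that ingestion step looks like in practice, here’s a simple sketch that pulls still frames out of a video file using OpenCV. The file name and sampling rate are our own assumptions for illustration, not details from the Picture North shoot:

```python
import os
import cv2

# "restaurant_take.mp4" is a placeholder name; keeping every 15th
# frame (roughly 2 per second of 30 fps footage) is an arbitrary choice.
os.makedirs("dataset", exist_ok=True)
video = cv2.VideoCapture("restaurant_take.mp4")
frame_index = saved = 0

while True:
    ok, frame = video.read()
    if not ok:
        break
    if frame_index % 15 == 0:
        cv2.imwrite(f"dataset/frame_{saved:04d}.jpg", frame)
        saved += 1
    frame_index += 1

video.release()
print(f"Saved {saved} frames for the NeRF dataset")
```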

You can see more behind-the-scenes material in this Twitter thread.

The production is striking because it’s both technologically advanced and surprisingly straightforward. The filmmakers leveraged high-tech tools to create a hands-on creative experience.

As a result, the final commercial is innovative and unique. It possesses an unusual aesthetic that pairs perfectly with the simultaneously nostalgic and futuristic spirit of McDonald’s Lunar New Year campaign. 

If NeRF models continue to evolve, this 30-second McDonald’s commercial could turn out to be a VFX landmark. Before we enter the age of real-life holodecks, matrixes, and danger rooms, let’s take a look at how filmmakers in the here and now can start experimenting with NeRFs. 

How to make a Neural Radiance Field

From a technical point of view, creating a NeRF is complicated and code-heavy. Fortunately for our purposes, the basic process is (relatively) straightforward.

We can boil it down to three basic steps:

1. Acquire data (by taking pictures)

To create a NeRF model, you must first create the dataset on which your NeRF will be based. In other words, you’ll need to take a large volume of photos from as many angles as possible.

Ideally, the subjects in your NeRF should remain as still as possible during the capture process. This will prevent them from confusing the AI and becoming blurry or broken into tiny pieces. 

Guidance on the exact number of photos necessary to create a NeRF is still evolving, but the current consensus is that you should include at least 50 pictures in your dataset. As a general rule, the more images you take, the more accurate your NeRF will be.
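If you want a quick sanity check before feeding your dataset to a NeRF tool, a few lines of Python can count your photos and flag blurry ones using a common sharpness heuristic (the variance of the Laplacian). The folder name and blur threshold below are assumptions you’d tune for your own material:

```python
import glob
import cv2

# "dataset/*.jpg" is a placeholder path; the threshold of 100 is a
# rough starting point, not an industry standard.
paths = glob.glob("dataset/*.jpg")
blurry = []

for path in paths:
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if img is None:
        continue  # skip unreadable files
    # Variance of the Laplacian: lower values suggest a blurrier photo.
    if cv2.Laplacian(img, cv2.CV_64F).var() < 100:
        blurry.append(path)

print(f"{len(paths)} photos found (aim for at least 50)")
print(f"{len(blurry)} look blurry and may confuse the NeRF")
```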

2. Create the NeRF

After you take enough photos, you need to let your computer turn them into a NeRF. For now, this is the hard part. You might ask, “What are the limitations of NeRFs?” The current answer is that they’re complicated to create.

Because Neural Radiance Fields are relatively new, methods of creating them often require you to manually compile code and train an AI on your own. To that end, Nerfstudio’s API is popular and well-documented. Alternatively, NVIDIA offers an excellent guide for getting started with NVIDIA Instant NeRFs.
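To give you a taste of the DIY route, here’s roughly what a Nerfstudio workflow looks like when driven from Python. The commands mirror Nerfstudio’s own quickstart at the time of writing, but double-check the current documentation before relying on the exact flags:

```python
import subprocess

# Step 1: turn raw photos into a Nerfstudio-ready dataset, which
# includes estimating camera poses. "photos/" and "processed/" are
# placeholder paths.
subprocess.run(
    ["ns-process-data", "images", "--data", "photos/", "--output-dir", "processed/"],
    check=True,
)

# Step 2: train a NeRF on the processed dataset with the "nerfacto" method.
subprocess.run(["ns-train", "nerfacto", "--data", "processed/"], check=True)
```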

However, the most accessible option is currently Luma AI, mentioned above for its use in the McDonald’s spot.

Luma AI is available as a web and iOS mobile app. You can sign up and start creating 3D scenes for free. New pricing models may be announced at a later date, but Luma AI is currently ready to go out of the box, requiring no payment or knowledge of coding. All you have to do is input your images and let Luma do its thing. 

3. Process shots from the NeRF

For filmmakers, a 3D object or environment is not the end goal. We need to be able to create 2D shots within those environments for use in our cinematic projects. To do this with a NeRF, there are several options. 

Depending on the system through which you created your NeRF, you can export it in various file types (meshes or point clouds, for example) that other programs with 3D camera tools can read.

In other words, you could import your NeRF into software like Blender or Unreal Engine, then create whatever shots you want within that software. 
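For example, once your NeRF export is loaded into Blender, creating a shot boils down to keyframing a camera. Here’s a minimal sketch using Blender’s built-in Python API (bpy), which only runs inside Blender itself; the camera moves are arbitrary placeholder values:

```python
import bpy  # Blender's built-in Python API; run inside Blender

# Create a camera and make it the active scene camera.
cam_data = bpy.data.cameras.new(name="NerfCam")
cam = bpy.data.objects.new("NerfCam", cam_data)
bpy.context.scene.collection.objects.link(cam)
bpy.context.scene.camera = cam

# Keyframe a simple push-in: start wide at frame 1, end close at frame 48.
cam.location = (0.0, -10.0, 2.0)
cam.keyframe_insert(data_path="location", frame=1)
cam.location = (0.0, -2.0, 1.0)
cam.keyframe_insert(data_path="location", frame=48)
```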

Alternatively, Luma AI allows you to create shots directly within the app’s interface. You can use keyframes to animate a 3D camera within your NeRF. The resulting shot can be exported for use on its own or as part of a larger project within an editing program like Adobe Premiere.

And that’s all it takes. You officially have a working NeRF model. Everything after is up to you and your imagination as a filmmaker. 

Wrapping up

Neural Radiance Fields are just beginning to find their place in the professional filmmaker’s toolkit. As these cutting-edge tools become more accessible, it’ll be up to filmmakers to push their creative boundaries. 

Here at Wrapbook, we can’t wait to see our award-winning clients like Picture North find new ways to succeed as the future of filmmaking continues to emerge.
