Neural Radiance Fields forever

A few weeks ago, Google introduced Immersive View for Maps, a high-quality 3D flythrough of popular spots that shows day/night lighting and real-time weather conditions.

It’s pretty cool. But even cooler is the technology behind it.

NeRF (Neural Radiance Fields) is a new approach to photogrammetry.

Here’s an example of a NeRF scene:

Traditional photogrammetry builds a 3D model with photo textures. NeRF instead trains a neural network on a bunch of input photos (e.g. satellite imagery, Street View, user photos) to learn what’s essentially a ‘light model’: the colour and density of light at every point in the scene. With that light field constructed, you can ‘rebuild’ the scene and explore it from multiple angles.

(Apologies to anyone who works in the field for that mangled description.)
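To make the ‘light model’ idea a bit more concrete, here’s a minimal sketch of the core loop behind NeRF rendering. The `radiance_field` function is a hypothetical stand-in for the trained neural network (a real NeRF learns it from photos; here it just fakes a glowing sphere), and the rendering step is the classic volume-rendering accumulation the technique uses:

```python
import numpy as np

# Hypothetical stand-in for NeRF's trained network: given a 3D point
# (and a viewing direction), return an RGB colour and a density (sigma).
def radiance_field(point, direction):
    # A real NeRF learns this from photos; we fake a solid orange sphere.
    dist = np.linalg.norm(point)
    sigma = 5.0 if dist < 1.0 else 0.0      # dense inside the sphere, empty outside
    rgb = np.array([1.0, 0.5, 0.2])         # constant orange colour
    return rgb, sigma

def render_ray(origin, direction, n_samples=64, near=0.0, far=4.0):
    """Volume rendering: march along the ray, accumulating colour."""
    ts = np.linspace(near, far, n_samples)
    delta = (far - near) / n_samples
    color = np.zeros(3)
    transmittance = 1.0                     # how much light still gets through
    for t in ts:
        rgb, sigma = radiance_field(origin + t * direction, direction)
        alpha = 1.0 - np.exp(-sigma * delta)   # opacity of this ray segment
        color += transmittance * alpha * rgb
        transmittance *= 1.0 - alpha
    return color

# A ray aimed straight at the sphere accumulates its orange colour:
pixel = render_ray(np.array([0.0, 0.0, -3.0]), np.array([0.0, 0.0, 1.0]))
```

Rendering a full image just means firing one such ray per pixel from the virtual camera; because the field can be queried at any point from any direction, you get views (and reflections) the input photos never captured.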

In practice, that means you can create a 3D scene from relatively few photos, and it preserves/recreates reflections, shadows, and depth, so you can see objects through gaps in other objects, see objects from angles not captured, and so on.

Just a few photos is a NeRF (sorry).

Check out the ‘real-time’ reflections in this scene:

And then, taken to the next level:

You can scale this up to large areas by combining a load of NeRFs (it helps if you own a self-driving car fleet):

In essence, the first steps in turning the whole world into a GTA map.

Originally tweeted by Peter Gasston (@stopsatgreen) on 9 June, 2022.

Also published on Medium.