Believability and Persistence in AR

There are two important concepts arriving in smartphone AR technology at the moment: believability, and persistence.

Believability comes from digital and physical objects appearing to naturally occupy a space together: for example, if you move a physical object in front of a digital object, the digital one should appear to be partly obscured. In AR parlance this is occlusion, or blending.

You can see the advantage of blending in the two photos at the top of this post: in the photo on the left I’ve disabled blending, so the image of the goat appears in front of the table, flattening the depth of the picture; in the photo on the right, blending is enabled, so the goat appears occluded by the table, as you’d naturally expect it to be; it’s believable.

This is generally done by building a 3D ‘depth map’ of the local environment with a camera; objects near the camera are distinguished from objects further away, in three dimensions. It can be done with a single camera sensor, or with higher fidelity using dedicated hardware like the LIDAR in the latest iPad Pro.
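
In renderer terms the test is simple: a virtual pixel is only drawn if the digital object at that pixel is closer to the camera than the physical surface recorded in the depth map. Here’s a minimal sketch of that per-pixel comparison (the function name and metre units are illustrative, not from any particular SDK):

```swift
import Foundation

/// Decide whether one pixel of a virtual object should be visible,
/// given the depth map's reading for the physical scene at that pixel.
/// Both distances are measured from the camera, in metres.
func isVirtualPixelVisible(environmentDepth: Float, virtualDepth: Float) -> Bool {
    // If a physical surface sits closer to the camera than the digital
    // object, the physical surface occludes it and the pixel is hidden.
    return virtualDepth <= environmentDepth
}

// The goat stands 2.0 m away, but the table edge at this pixel is only
// 1.2 m away, so the goat is occluded here and the pixel isn't drawn.
print(isVirtualPixelVisible(environmentDepth: 1.2, virtualDepth: 2.0)) // false
```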

Google’s ARCore Depth API, Apple’s ARKit Depth API, Niantic’s Reality Blending, and Microsoft’s HoloLens Spatial Mapping are all software implementations of depth mapping.
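
For instance, with ARKit and RealityKit on a LiDAR-equipped device, switching this on might look roughly like the following; it’s a sketch, and it assumes an ARView called arView is already on screen:

```swift
import ARKit
import RealityKit

func enableBlending(on arView: ARView) {
    let configuration = ARWorldTrackingConfiguration()

    // Ask ARKit for a per-frame depth map of the environment (needs LiDAR).
    if ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) {
        configuration.frameSemantics.insert(.sceneDepth)
    }

    // Reconstruct a mesh of nearby surfaces where the hardware supports it.
    if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
        configuration.sceneReconstruction = .mesh
    }

    // Tell RealityKit to use that understanding of the scene to hide
    // virtual content behind real-world surfaces.
    arView.environment.sceneUnderstanding.options.insert(.occlusion)

    arView.session.run(configuration)
}
```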

Persistence is, as it sounds, about making things stay in place, especially between user sessions: if I place a dancing hotdog on the corner of my road and then close my phone, the hotdog should still be dancing there for me to see when I come back to the same spot later.

Fairly loose persistence can be done using geolocation, but that means only that I’ll see the hotdog again in roughly the same place, not exactly the same place. More precise persistence uses a visual positioning system: a system that knows where you were and what you were looking at. The idea is that as you build the depth map with the camera and add a digital object into it, you anchor that object to the map; the depth map and the anchor can then be shared, so any other camera can show the same digital object in the same physical space.
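
As a rough sketch of what single-device persistence looks like in practice, ARKit lets you capture the mapped space plus its anchors as a world map, write it to disk, and reload it in a later session so anchored content reappears in the same spot. The file location and function names below are illustrative; cross-device sharing is what services like Cloud Anchors or Azure Spatial Anchors layer on top:

```swift
import ARKit

let mapURL = FileManager.default.temporaryDirectory.appendingPathComponent("worldMap")

// Capture the current world map (including any anchors) and save it to disk.
func saveWorldMap(from session: ARSession) {
    session.getCurrentWorldMap { worldMap, _ in
        guard let worldMap = worldMap,
              let data = try? NSKeyedArchiver.archivedData(withRootObject: worldMap,
                                                           requiringSecureCoding: true)
        else { return }
        try? data.write(to: mapURL)
    }
}

// In a later session, load the saved map and relocalise against it, so
// previously anchored objects come back in the same physical place.
func restoreWorldMap(into session: ARSession) {
    guard let data = try? Data(contentsOf: mapURL),
          let worldMap = try? NSKeyedUnarchiver.unarchivedObject(ofClass: ARWorldMap.self,
                                                                 from: data)
    else { return }
    let configuration = ARWorldTrackingConfiguration()
    configuration.initialWorldMap = worldMap
    session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}
```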

Google’s ARCore Cloud Anchors, Apple’s ARKit Location Anchors, Snapchat’s Local Lenses, and Microsoft’s Azure Spatial Anchors are all software implementations of persistence.

Snapchat’s Local Lenses have persistence

AR that’s believable and persistent is a keystone of the effort to build a software layer over the physical world: the metaverse, “a future state of, if not quasi-successor to, the Internet”, which would:

Revolutionise not just the infrastructure layer of the digital world, but also much of the physical one, as well as all the services and platforms atop them, how they work, and what they sell.

The Metaverse: What It Is, Where to Find it, Who Will Build It, and Fortnite
