Twitter is awash with impressive demos of augmented reality built with Apple’s ARKit or Google’s ARCore. It’s cool that there’s a palpable sense of excitement around AR—I’m pretty excited about it myself—but I think there’s a little too much early hype, and that what the demos don’t show is perhaps more suggestive of the genuinely exciting future of AR.
Below is an example of the kind of demo I’m talking about — a mockup of an AR menu that shows each dish as a rendered 3D model, digitally placed into the environment (to be clear, I’m genuinely not picking on this one, just using it as an illustration):
— Made With ARKit (@madewithARKit) June 30, 2017
This raises a few questions, not least around delivery. As a customer of this restaurant, how do I access these models? Do I have to download the restaurant’s app? Is it a WebAR experience that I open by following a URL?
There’s so much still to be defined about future AR platforms. Ben Evans’ post, The First Decade of Augmented Reality, grapples with a lot of the issues of how AR content will be delivered and accessed:
Do I stand outside a restaurant and say ‘Hey Foursquare, is this any good?’ or does the device’s OS do that automatically? How is this brokered – by the OS, the services that you’ve added or by a single ‘Google Brain’ in the cloud?
The demo also raises important questions about utility; for example, why is seeing a 3D model of your food on a table better than seeing a 3D model in the web page you visit, or the app you download? Why is it better than even a regular photo, or just reading the description on the menu? Do you get more information from seeing a model in AR than from any other medium?
Matt Miesnieks’ essay, the product design challenges of AR on smartphones, details what’s necessary to make AR truly useful, and it starts from a very fundamental question:
The simple question “Why do this in AR, wouldn’t a regular app be better for the user?” is often enough to cause a rethink of the entire premise.
And a series of tweets by Steven Johnson nails the issue with a lot of the demos we’re seeing:
Apple's AR demos look amazing, but the "reality" part of AR is an afterthought in most of them, not something core to the experience. (1)
— Steven Johnson (@stevenbjohnson) September 13, 2017
Again, I’m not setting out to criticise the demos; I think experimentation is critical to the development of a new technology—even if, as Miesnieks points out in a separate essay, a lot of this experimentation has already happened before…
I’m seeing lots of ARKit demos that I saw 4 years ago built on Vuforia and 4 years before that on Layar. Developers are re-learning the same lessons, but at much greater scale.
But placing 3D objects into physical scenes is just one narrow facet of the greater potential of AR. When we can extract spatial data and information from an image, and also manipulate that image digitally, augmented reality becomes something much more interesting.
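To make concrete just how thin that first facet is, here’s a minimal sketch of the standard placement pattern behind most of these demos, using ARKit’s SceneKit integration: detect a horizontal plane, hit-test a tap against it, and drop a node at the resulting position. (The class name and the placeholder box geometry are my own assumptions for illustration, not taken from any particular demo.)

```swift
import UIKit
import SceneKit
import ARKit

// Minimal sketch of the "place an object on a table" pattern (circa iOS 11).
// The class name and the placeholder box are illustrative assumptions.
class PlacementViewController: UIViewController {
    let sceneView = ARSCNView()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        view.addSubview(sceneView)

        // Track the world and look for horizontal surfaces (tabletops, floors).
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = .horizontal
        sceneView.session.run(configuration)

        let tap = UITapGestureRecognizer(target: self, action: #selector(placeModel(_:)))
        sceneView.addGestureRecognizer(tap)
    }

    @objc func placeModel(_ gesture: UITapGestureRecognizer) {
        // Hit-test the tap against detected planes to find a real-world point.
        let point = gesture.location(in: sceneView)
        guard let hit = sceneView.hitTest(point, types: .existingPlaneUsingExtent).first else {
            return
        }

        // A real demo would load a textured 3D model of a dish, a couch, etc.;
        // a 10 cm box stands in for it here.
        let node = SCNNode(geometry: SCNBox(width: 0.1, height: 0.1,
                                            length: 0.1, chamferRadius: 0))
        let transform = hit.worldTransform
        node.position = SCNVector3(transform.columns.3.x,
                                   transform.columns.3.y,
                                   transform.columns.3.z)
        sceneView.scene.rootNode.addChildNode(node)
    }
}
```

Note how little of this has anything to do with the content: the model could be a dish, a couch, or a virtual bird and the code wouldn’t change, which is part of why so many of these demos feel interchangeable.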
In his review of the new iPhones, Matthew Panzarino talks about the Portrait Lighting feature—which uses machine learning smarts to create studio-style photography—as augmented reality. And it is.
AR isn’t just putting a virtual bird on it or dropping an Ikea couch into your living room. It’s altering the fabric of reality to enhance, remove or augment it.
The AR demos we’re seeing now are fun and sometimes impressive, but my intuition is that they’re not really representative of what AR will eventually become, and there will be a few interesting years before we start to see what that is.