On the iPhone X’s notch and being distinctive

I’ve been thinking about the ‘notch’ in the iPhone X. In case you’ve no idea what I’m talking about, the X has an ‘all-screen’ design; the home button is gone, and the front of the device no longer has bezels above and below the screen, just a curved indent at the top which holds the camera and the image sensors necessary for the new facial authentication feature.

It seems like a design compromise; the sensors are of course necessary, but it feels like there could have been a narrow full-width bezel at the top of the device rather than the slightly odd notch that requires special design consideration.

But my thought was: if they had chosen a full-width bezel, what would make the iPhone distinctive? Put one face-up on a table next to, say, a new LG or Samsung Galaxy phone: how could you tell, at a glance, which was the iPhone?

Two rows of icons for smartphone functions, each using an outline that resembles an iPhone (icons from The Noun Project).

The iPhone’s single-button design is so distinctive that it’s become the de facto icon for smartphones. Without it, the phone looks like every other modern smartphone (until you pick it up or unlock it). The notch gives the X a unique look that keeps it unmistakably an Apple product, even with the near all-screen front. It makes the design distinctive enough to be iconic, and to protect legally (given Apple’s litigious history, not a small consideration).

Of course it requires more work from app designers and developers to make their products look good, but Apple is one of the few (perhaps the only) companies with enough clout, and a devoted enough following, to demand that extra work; you can’t imagine LG being able to convince Android app makers to put in an extra shift in that way. So perhaps it’s still something of a design kludge, but it’s a kludge with purpose.
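To give a concrete sense of that extra work: since iOS 11, UIKit exposes a ‘safe area’ describing the portion of the screen not covered by the sensor housing or the home indicator, and apps pin their layouts to it rather than to the raw screen edges. A minimal sketch (the view controller is illustrative, not from any real app):

```swift
import UIKit

// Minimal sketch: pinning content to the safe area so nothing is
// hidden behind the notch (or the home indicator) on the iPhone X.
final class ExampleViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()

        let contentView = UIView()
        contentView.translatesAutoresizingMaskIntoConstraints = false
        view.addSubview(contentView)

        // Since iOS 11, safeAreaLayoutGuide describes the region of the
        // view not covered by the sensor housing, status bar, and so on.
        let guide = view.safeAreaLayoutGuide
        NSLayoutConstraint.activate([
            contentView.topAnchor.constraint(equalTo: guide.topAnchor),
            contentView.leadingAnchor.constraint(equalTo: guide.leadingAnchor),
            contentView.trailingAnchor.constraint(equalTo: guide.trailingAnchor),
            contentView.bottomAnchor.constraint(equalTo: guide.bottomAnchor),
        ])
    }
}
```

For simple layouts that single guide does most of the work; the real cost of the notch falls on custom interfaces that used to assume a rectangular screen.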

Augmented reality demos hint at the future of immersion

Twitter is awash with impressive demos of augmented reality using Apple’s ARKit or Google’s ARCore. It’s cool that there’s a palpable sense of excitement around AR (I’m pretty excited about it myself), but there’s perhaps a little too much early hype, and what the demos don’t show is more suggestive of the genuinely exciting future of AR.

Below is an example of the kind of demo I’m talking about: a mockup of an AR menu that shows each dish as a rendered 3D model, digitally placed into the environment (and I want to make clear I’m genuinely not picking on this, just using it as an illustration):

This raises a few questions, not least around delivery. As a customer of this restaurant, how do I access these models? Do I have to download an app for the restaurant? Is it a WebAR experience that I see by following a URL?
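For what it’s worth, the placement half of such a demo is a small amount of ARKit code: detect a horizontal plane, hit-test a tap against it, and drop a model at the intersection. A rough sketch using the iOS 11-era ARKit and SceneKit APIs (the class name and the dish.scn asset are hypothetical):

```swift
import UIKit
import ARKit
import SceneKit

// Rough sketch of the placement half of an AR menu demo: detect a
// horizontal surface, hit-test a tap against it, and anchor a model
// there. "dish.scn" is a hypothetical asset bundled with the app.
final class DishPlacementViewController: UIViewController {
    let sceneView = ARSCNView()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        view.addSubview(sceneView)

        // Track the device in space and look for horizontal planes (tables).
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = .horizontal
        sceneView.session.run(configuration)

        let tap = UITapGestureRecognizer(target: self, action: #selector(placeDish(_:)))
        sceneView.addGestureRecognizer(tap)
    }

    @objc private func placeDish(_ gesture: UITapGestureRecognizer) {
        // Cast a ray from the tapped screen point onto any detected plane.
        let point = gesture.location(in: sceneView)
        guard let hit = sceneView.hitTest(point, types: .existingPlaneUsingExtent).first,
              let dishScene = SCNScene(named: "dish.scn"),
              let dishNode = dishScene.rootNode.childNodes.first else { return }

        // Place the model at the point where the ray met the table.
        let t = hit.worldTransform
        dishNode.position = SCNVector3(x: t.columns.3.x, y: t.columns.3.y, z: t.columns.3.z)
        sceneView.scene.rootNode.addChildNode(dishNode)
    }
}
```

What the sketch glosses over is exactly the delivery question above: it assumes the customer has already installed an app with the model bundled inside it.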

There’s so much still to be defined about future AR platforms. Ben Evans’ post, The First Decade of Augmented Reality, grapples with a lot of the issues of how AR content will be delivered and accessed:

Do I stand outside a restaurant and say ‘Hey Foursquare, is this any good?’ or does the device’s OS do that automatically? How is this brokered – by the OS, the services that you’ve added or by a single ‘Google Brain’ in the cloud?

The demo also raises important questions about utility; for example, why is seeing a 3D model of your food on a table better than seeing a 3D model in the web page you visit, or the app you download? Or, why is it better even than seeing a regular photo, or just reading the description on the menu? Do you get more information from seeing a model in AR than from any other medium?

Matt Miesnieks’ essay, the product design challenges of AR on smartphones, details what’s necessary to make AR truly useful, and it proceeds from a very fundamental question:

The simple question “Why do this in AR, wouldn’t a regular app be better for the user?” is often enough to cause a rethink of the entire premise.

And a series of tweets by Steven Johnson nails the issue with a lot of the demos we’re seeing:

Again, I’m not setting out to criticise the demos; I think experimentation is critical to the development of a new technology, even if, as Miesnieks points out in a separate essay, a lot of this experimentation has already happened before:

I’m seeing lots of ARKit demos that I saw 4 years ago built on Vuforia and 4 years before that on Layar. Developers are re-learning the same lessons, but at much greater scale.

But placing 3D objects into physical scenes is just one narrow facet of the greater potential of AR. When we can extract spatial data and information from an image, and also manipulate that image digitally, augmented reality becomes something much more interesting.

In Matthew Panzarino’s review of the new iPhones he talks about the Portrait Lighting feature—which uses machine learning smarts to create studio-style photography—as augmented reality. And it is.

AR isn’t just putting a virtual bird on it or dropping an Ikea couch into your living room. It’s altering the fabric of reality to enhance, remove or augment it.

The AR demos we’re seeing now are fun and sometimes impressive, but my intuition is that they’re not really representative of what AR will eventually be, and there are going to be a few interesting years until we start to see that revealed.