On the iPhone X’s notch and being distinctive

I’ve been thinking about the ‘notch’ in the iPhone X. In case you’ve no idea what I’m talking about, the X has an ‘all-screen’ design; the home button is gone, and the front of the device no longer has bezels above and below the screen, except for a curving indent at the top which holds image sensors necessary for the camera and the new facial authentication feature.

It seems somehow like a design compromise; the sensors are of course necessary, but it feels like there could have been a full-width narrow bezel at the top of the device rather than the slightly odd notch that requires special design consideration.

But my thought was: if they’d chosen a full-width bezel, what would make the iPhone distinctive? Put one face-up on the table next to, say, a new LG or Samsung Galaxy phone; how could you tell, at a glance, which was the iPhone?

Two rows of icons for smartphone functions, each using an outline that looks like an iPhone (icons from the Noun Project)

The iPhone’s single-button design is so distinctive that it’s become the de facto icon for smartphones. Without it, the phone looks like every other modern smartphone (until you pick it up or unlock it). The notch gives the X a unique look that continues to make it unmistakably an Apple product, even with the full-device screen. It makes it distinctive enough to be iconic, and to protect legally—given Apple’s litigious history, not a small consideration.

Of course it requires more work from app designers and developers to make their products look good, but Apple is one of the few (perhaps the only) companies with enough clout, and a devoted enough following, to demand that extra work—you can’t imagine LG being able to convince Android app makers to put in an extra shift like that. So perhaps it’s still somewhat of a design kludge, but it’s a kludge with purpose.

Augmented reality demos hint at the future of immersion

Twitter is awash with impressive demos of augmented reality using Apple’s ARKit or Google’s ARCore. I think it’s cool that there’s a palpable sense of excitement around AR—I’m pretty excited about it myself—but I think that there’s perhaps a little too much early hype, and that what the demos don’t show is perhaps more suggestive of the genuinely exciting future of AR.

Below is an example of the demos I’m talking about — a mockup of an AR menu that shows each of the individual dishes as a rendered 3D model, digitally placed into the environment (and I want to make clear I’m genuinely not picking on this, just using it as an illustration):

This raises a few questions, not least around delivery. As a customer of this restaurant, how do I access these models? Do I have to download an app for the restaurant? Is it a WebAR experience that I see by following a URL?
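For what it’s worth, the URL route already has a plausible shape in the browser. Here’s a minimal sketch, assuming the experimental WebXR Device API is available; the function name and fallback behaviour are mine, not from any shipping product:

```typescript
// A hedged sketch of a URL-delivered WebAR entry point, assuming a
// browser that exposes the experimental WebXR Device API.
async function enterMenuAR(): Promise<void> {
  const xr = (navigator as any).xr; // not yet in the standard DOM typings
  if (!xr || !(await xr.isSessionSupported("immersive-ar"))) {
    console.log("No WebAR support; fall back to an in-page 3D viewer.");
    return;
  }
  // AR sessions must be requested from a user gesture, so in practice
  // this runs inside a "View dish in AR" button's click handler.
  const session = await xr.requestSession("immersive-ar", {
    requiredFeatures: ["hit-test"], // needed to anchor models to the table
  });
  // A renderer such as three.js would then draw the dish model into each
  // frame of the session; that part is omitted here.
  session.addEventListener("end", () => console.log("AR session ended"));
}
```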

There’s so much still to be defined about future AR platforms. Ben Evans’ post, The First Decade of Augmented Reality, grapples with a lot of the issues of how AR content will be delivered and accessed:

Do I stand outside a restaurant and say ‘Hey Foursquare, is this any good?’ or does the device’s OS do that automatically? How is this brokered — by the OS, the services that you’ve added or by a single ‘Google Brain’ in the cloud?

The demo also raises important questions about utility; for example, why is seeing a 3D model of your food on a table better than seeing a 3D model in the web page you visit, or the app you download? Or, why is it better even than seeing a regular photo, or just reading the description on the menu? Do you get more information from seeing a model in AR than from any other medium?

Matt Miesnieks’ essay, the product design challenges of AR on smartphones, details what’s necessary to make AR truly useful, and it proceeds from a very fundamental basis:

The simple question “Why do this in AR, wouldn’t a regular app be better for the user?” is often enough to cause a rethink of the entire premise.

And a series of tweets by Steven Johnson nails the issue with a lot of the demos we’re seeing:

Again, I’m not setting out to criticise the demos; I think experimentation is critical to the development of a new technology—even if, as Miesnieks points out in a separate essay, a lot of this experimentation has already happened before:

I’m seeing lots of ARKit demos that I saw 4 years ago built on Vuforia and 4 years before that on Layar. Developers are re-learning the same lessons, but at much greater scale.

But placing 3D objects into physical scenes is just one narrow facet of the greater potential of AR. When we can extract spatial data and information from an image, and also manipulate that image digitally, augmented reality becomes something much more interesting.

In Matthew Panzarino’s review of the new iPhones he talks about the Portrait Lighting feature—which uses machine learning smarts to create studio-style photography—as augmented reality. And it is.

AR isn’t just putting a virtual bird on it or dropping an Ikea couch into your living room. It’s altering the fabric of reality to enhance, remove or augment it.

The AR demos we’re seeing now are fun and sometimes impressive, but my intuition is that they’re not really representative of what AR will eventually be, and there are going to be a few interesting years until we start to see that revealed.

A note on Twitter’s latest feature

Twitter made a change to their algorithmic timeline recently, and have started showing tweets from strangers that have been liked by the people you follow. I don’t know why, or what benefit they offer, or even what criteria are used; I presumed at first that they were showing tweets with a good number of replies, retweets, or likes, in an effort to surface quality conversations.

Good morning everyone. Grape soda is an abomination

But many of them are replies to specific tweets, telling me nothing about the conversation or context they came from. (Note that I’m not criticising the tweets themselves, just questioning why Twitter thinks they’re valuable enough to show me.)

Some are so wildly out of context as to appear nonsensical, kind of like lines from a Dadaist poem.

Some are quite revealing of the tweeter’s psyche.

A few seem so personal that, although they’ve been posted on a public channel, the tweeter may not have thought they’d be seen by a wider audience.

But what it seems to massively over-index for is people liking tweets that have thanked them or praised them.

To be fair, they’re not all totally without some amusement value; every now and then you get something that’s funny because of the context in which it appears.

But mostly, they’re of little to no worth. Occasionally — once, maybe twice, a week — an interesting or useful tweet gets surfaced, but those are heavily in the minority. I can see what Twitter are trying to do with this feature, but at the moment it’s just unwelcome noise in my timeline.

Anyway, I don’t like to simply criticise without being constructive, so I’d like to offer a fix. Here’s a mockup of a simple toggle to let people choose whether or not they want to see these tweets:

You’re welcome, Twitter.

Trends in digital media for 2017

Alright, stand back everyone: I’m about to have some opinions about technology in 2017. Because obviously there’s been a shortage of those.

As part of my Technologist role at +rehabstudio I put together internal briefings about digital media, consumer technology, where the digital marketing industry could go in the near future, and what we should be communicating to our clients. Not trying to make predictions, but to follow trends.

This article is based on my latest briefing. It’s somewhat informed, purposely skimpy on detail, and very incomplete: I have some thoughts on advertising and publishing that I can’t quite distil yet, and machine learning is a vast surface that I can barely scratch.


If for nothing more than press coverage, 2016 was the year of messaging, and the explosion of the messaging bot. The biggest player in the game, Facebook’s Messenger, launched their bot platform in April, and by November some 33,000 bots had been released. Recent tools added to the platform include embedded webviews, HTML5 games, and in-app payments.

The first six months of bots were largely the ‘fart app’ stage, but there are signs that brands and services are finally starting to see the real opportunities in messaging: removing friction from their users’ interactions with them. Friction in app management and UI complexity, for example.

The same removal of friction is also a key driver behind the growth of home assistants and voice interaction, like Alexa. Removing the UI abstraction between users and tasks is a clear trend. As an illustration, compare two user flows for watching Stranger Things on Netflix on your TV; first using a smartphone:

  1. Unlock phone.
  2. Find and open Netflix app.
  3. Press the ‘cast’ button.
  4. Find ‘Stranger Things’.
  5. Play.

Now using Google Home:

  1. “OK Google, play Stranger Things from Netflix on My TV.”

Home assistants make the smart home easier to manage. No more separate apps for Wemo, Hue, Nest, etc.; a single voice interface (perhaps glued together with a cloud service like IFTTT) controls all the different devices in your home.
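That glue layer is simpler than it sounds. IFTTT’s Webhooks (formerly Maker) service exposes a plain trigger URL, so a single event can fan out to whichever device applets you’ve wired up. A minimal sketch; the event name and key below are placeholders:

```typescript
// Hedged sketch of using IFTTT's Webhooks service as smart-home glue.
// "goodnight" and IFTTT_KEY are placeholders for your own event and key.
const IFTTT_KEY = "your-webhooks-key"; // hypothetical credential

async function triggerScene(event: string): Promise<void> {
  // Requesting this URL fires any IFTTT applets listening for the event,
  // which in turn can talk to Hue, Wemo, Nest, and so on.
  const url = `https://maker.ifttt.com/trigger/${event}/with/key/${IFTTT_KEY}`;
  const response = await fetch(url, { method: "POST" });
  console.log(await response.text()); // IFTTT replies with a confirmation
}

triggerScene("goodnight"); // e.g. lights off, thermostat down, doors locked
```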

Messaging and voice are visible aspects of the trend towards the interface on demand:

The app only appears in a particular context when necessary and in the format which is most convenient for the user.

While native mobile apps are still a growth area, it’s becoming much harder to get users to download and engage with apps outside of a small popular core. This is especially true for retail, where consumers are more omnivorous and like to browse widely.

Improvements in the capabilities of web apps (especially on Chrome for Android) suggest an alternative to native apps in some cases. This has been demonstrated by the success of new web apps from major retail brands like Flipkart and Alibaba in developing economies, where an official app store may not be available or network costs may make app downloads undesirable.

Web apps require no installation, avoiding the app store problem. They’re starting to get important features like push notifications and payment APIs. And messaging platforms, with their large installed user base, provide the web with a social and distribution layer that the browser never did:

Messaging apps and social networks [are] wrappers for the mobile web. They’re actually browsers… [and] give us the social context and connections we crave, something traditional browsers do not.
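Those ‘important features’, push and payments, are concrete web APIs today, if unevenly supported across browsers. A minimal sketch, assuming an HTTPS page with a service worker already registered; the server key and the amount are placeholders:

```typescript
// Hedged sketches of the Push API and the Payment Request API.
// Assumes an HTTPS page with a registered service worker.
async function enablePush(): Promise<void> {
  const registration = await navigator.serviceWorker.ready;
  const subscription = await registration.pushManager.subscribe({
    userVisibleOnly: true,
    applicationServerKey: "BPlaceholderVapidPublicKey", // placeholder key
  });
  // The subscription would be sent to your server, which uses it to push.
  console.log(JSON.stringify(subscription));
}

async function checkout(): Promise<void> {
  const request = new PaymentRequest(
    [{ supportedMethods: "basic-card" }], // method support varies by browser
    { total: { label: "Order total", amount: { currency: "GBP", value: "19.99" } } }
  );
  const response = await request.show(); // browser-native payment sheet
  await response.complete("success");
}
```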

So it may be that for some brands, a website optimised for performance, engagement, and sharing, along with a decent messaging and social strategy, will offer a better investment than native apps and app store marketing. Patagonia already closed their native app. Gartner predict that some 20% of brands will follow by 2019:

Many brands are finding that their mobile apps are not paying off.

The most important app on your phone could be the camera, which will be increasingly important this year. First, by revealing the ‘dark matter’ of the internet: images, video and sound. So much of this data is uploaded every day, but without the semantic value of text, its meaning is lost to non-humans — like search engines, for example. But machine learning is becoming very good at understanding the content of this opaque data, meaning the role of the camera changes:

It’s not really a camera, taking pictures; it’s an eye, that can see.

It can see faces, landmarks, logos, objects; hear background chat and music. That means understanding context, location, purchase history, and behaviour, without being explicitly told anything. This is why Facebook, through Messenger and Instagram, are furiously copying Snapchat’s best features: they want their young audience and the data they bring.
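Seeing faces, landmarks and logos is already a commodity API call. As a rough illustration, here’s what asking a cloud vision service to label an image looks like; the request shape follows Google’s Cloud Vision REST API, and the key and image are placeholders:

```typescript
// Hedged sketch: labelling an image with Google's Cloud Vision REST API.
// The apiKey and base64Image arguments are placeholders.
async function describeImage(base64Image: string, apiKey: string): Promise<void> {
  const response = await fetch(
    `https://vision.googleapis.com/v1/images:annotate?key=${apiKey}`,
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        requests: [{
          image: { content: base64Image },
          features: [
            { type: "LABEL_DETECTION" },    // objects and scenes
            { type: "LANDMARK_DETECTION" }, // recognisable places
            { type: "LOGO_DETECTION" },     // brands
          ],
        }],
      }),
    }
  );
  const { responses } = await response.json();
  // Each annotation pairs a description with a confidence score,
  // e.g. { description: "coffee cup", score: 0.92 }.
  console.log(responses[0]);
}
```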

Will it be intrusive? Yes. Will it happen? Yes. I’ve tried to avoid making hard predictions in this piece, but I am as confident as I can be that our image and video history will be used for marketing data.

Cameras will also be important in altering the images that are shown to users. Augmented reality is an exciting technology, although good-enough dedicated hardware is still a while away. But there’s a definite market drift in that direction, and Snapchat is leading it: they’re stealthily introducing AR by modifying the base layer of reality—first, by altering faces using their lenses. This isn’t frivolous; it’s expanding the range of digital communication, like emoji do for text.

If people are talking in pictures, they need those pictures to be capable of expressing the whole range of human emotion.

Recent Snapchat lenses have started altering voices, and your environment: they’ve bought a company that specialises in adding 3D objects into real environments. With Spectacles they’re not only removing friction from the process of taking a photo, they’re prototyping hardware at scale. This is the road to AR. Snap Inc. want to be the camera company — not in the way that Nikon was, but in the way that Facebook is the social company.

The companion to an augmented reality is a virtual one, but I don’t believe we’ll see VR going mainstream in 2017—and I say that as a proponent. It’s static, isolating, and it requires people to form a new behaviour. It’s interesting to see creators experiment with the form, and I’ve no doubt that we’ll see some very interesting experiences launched this year. But domestic sales aren’t huge; high-end units are too expensive, and low-end ones aren’t quite up to scratch yet. Still think it will be big for gamers, though.


I have more. A lot more. But I think it will all be better explained in a series of subsequent blog posts, so I’ll aim to do that. In the meantime, I’d love to hear your thoughts, arguments, objections, and conclusions.