Google might be taking another tilt at messaging

I have a theory. Yes, another one. This time it’s about Google, and how I think they’re taking another bite at the messaging apple. And if I’m right, I think they have a better chance of success than previous efforts.

tl;dr: I think Google are going to use some of their biggest existing properties to launch their third wave of messaging.


On the iPhone X’s notch and being distinctive

I’ve been thinking about the ‘notch’ in the iPhone X. In case you’ve no idea what I’m talking about, the X has an ‘all-screen’ design; the home button is gone, and the front of the device no longer has bezels above and below the screen, except for a curving indent at the top which holds the image sensors necessary for the camera and the new facial authentication feature.

It seems somehow like a design compromise; the sensors are of course necessary, but it feels like there could have been a full-width narrow bezel at the top of the device rather than the slightly odd notch that requires special design consideration.

But my thought was: if they chose a full-width bezel, what would make the iPhone distinctive? Put one face-up on a table next to, say, a new LG or Samsung Galaxy phone: how could you tell, at a glance, which was the iPhone?

Two rows of icons for smartphone functions, each using an outline that resembles an iPhone
icons from The Noun Project

The iPhone’s single-button design is so distinctive that it’s become the de facto icon for smartphones. Without it, the phone looks like every other modern smartphone (until you pick it up or unlock it). The notch gives the X a unique look that continues to make it unmistakably an Apple product, even with the full-device screen. It makes the X distinctive enough to be iconic, and to be protected legally—given Apple’s litigious history, not a small consideration.

Of course it requires more work from app designers and developers to make their products look good, but Apple is one of the few (perhaps the only) companies with enough clout, and a devoted enough following, to justify the extra work—you can’t imagine LG being able to convince Android app makers to put in that extra shift. So perhaps it’s still somewhat of a design kludge, but it’s a kludge with purpose.

Augmented reality demos hint at the future of immersion

Twitter is awash with impressive demos of augmented reality using Apple’s ARKit or Google’s ARCore. I think it’s cool that there’s a palpable sense of excitement around AR—I’m pretty excited about it myself—but I think that there’s perhaps a little too much early hype, and that what the demos don’t show is perhaps more suggestive of the genuinely exciting future of AR.

Below is an example of the demos I’m talking about — a mockup of an AR menu that shows each of the individual dishes as a rendered 3D model, digitally placed into the environment (and I want to make clear I’m genuinely not picking on this, just using it as an illustration):

This raises a few questions, not least around delivery. As a customer of this restaurant, how do I access these models? Do I have to download an app for the restaurant? Is it a WebAR experience that I see by following a URL?

There’s so much still to be defined about future AR platforms. Ben Evans’ post, The First Decade of Augmented Reality, grapples with a lot of the issues of how AR content will be delivered and accessed:

Do I stand outside a restaurant and say ‘Hey Foursquare, is this any good?’ or does the device’s OS do that automatically? How is this brokered — by the OS, the services that you’ve added or by a single ‘Google Brain’ in the cloud?

The demo also raises important questions about utility; for example, why is seeing a 3D model of your food on a table better than seeing a 3D model in the web page you visit, or the app you download? Or, why is it better even than seeing a regular photo, or just reading the description on the menu? Do you get more information from seeing a model in AR than from any other medium?

Matt Miesnieks’ essay, the product design challenges of AR on smartphones, details what’s necessary to make AR truly useful, and it proceeds from a very fundamental basis:

The simple question “Why do this in AR, wouldn’t a regular app be better for the user?” is often enough to cause a rethink of the entire premise.

And a series of tweets by Steven Johnson nails the issue with a lot of the demos we’re seeing:

Again, I’m not setting out to criticise the demos; I think experimentation is critical to the development of a new technology—even if, as Miesnieks points out in a separate essay, a lot of this experimentation has already happened before:

I’m seeing lots of ARKit demos that I saw 4 years ago built on Vuforia and 4 years before that on Layar. Developers are re-learning the same lessons, but at much greater scale.

But placing 3D objects into physical scenes is just one narrow facet of the greater potential of AR. When we can extract spatial data and information from an image, and also manipulate that image digitally, augmented reality becomes something much more interesting.

In Matthew Panzarino’s review of the new iPhones, he talks about the Portrait Lighting feature—which uses machine learning smarts to create studio-style photography—as augmented reality. And it is.

AR isn’t just putting a virtual bird on it or dropping an Ikea couch into your living room. It’s altering the fabric of reality to enhance, remove or augment it.

The AR demos we’re seeing now are fun and sometimes impressive, but my intuition is that they’re not really representative of what AR will eventually be, and there are going to be a few interesting years until we start to see that revealed.

A note on Twitter’s latest feature

Twitter made a change to their algorithmic timeline recently, and have started showing tweets from strangers that have been liked by the people you follow. I don’t know why, or what benefit they offer, or even what criteria are used; I presumed at first that they’re showing tweets that have a good number of replies, retweets, or likes, in an effort to surface quality conversations.

Good morning everyone. Grape soda is an abomination

But many of them are replies to specific tweets, telling me nothing about the conversation or context in which they appeared. (Note that I’m not criticising the tweets themselves, just questioning why Twitter thinks they’re valuable enough to show me.)

Some are so wildly out of context as to appear nonsensical, kind of like lines from a Dadaist poem.

Some are quite revealing of the tweeter’s psyche.

A few seem so personal that, although they’ve been posted on a public channel, the tweeter may not have thought they’d be seen by a wider audience.

But what it seems to massively over-index on is people liking tweets that thank or praise them.

To be fair, they’re not all totally without some amusement value; every now and then you get something that’s funny because of the context in which it appears.

But mostly, they’re of little to no worth. There are occasional interesting or useful tweets that get surfaced — once, maybe twice a week — but they’re heavily in the minority. I can see what Twitter are trying to do with this feature, but at the moment it’s just unwelcome noise in my timeline.

Anyway, I don’t like to simply criticise without being constructive, so I’d like to offer a solution. Here’s a mockup of a simple toggle to let people choose whether or not they want to see these tweets:

You’re welcome, Twitter.