GUEST BLOG – Jason Mesut – Tech Beyond The Screen: UX & Design For Connected Products

On Wednesday 2nd March 2016 we held our third event in the Tech Beyond The Screen series, around UX & Design for Connected Products. Jason Mesut was one of our speakers, and he discussed the challenges of creating UX for IoT technologies, specifically those without screen-based interfaces. Jason has put together this write-up of the content he covered during his talk.

 

“I recently talked through this presentation at The Digital Catapult Centre Brighton’s interesting event Tech Beyond The Screen: Connectivity & Infrastructure.

I tried to give a more User Interface (UI) perspective on the whole UX around Internet of Things subject. I suppose people expected me to talk about lots of cool future stuff like gesture, voice and AI. And I did. However, mostly, I criticised some of these technologies.

Here I touch on a few of the key points.

It’s easy to criticise touch screens

I have delivered a fair few presentations over the past few years that criticise the ever-popular touch screen. Obviously, since the growth of the iPhone and the ability to develop apps for it, we have seen a lot of our digital world change. We can get taxis on demand, we can have video chats across the world, and we can access rich information in the bathroom. A big screen with lots of pixels, one we can interact with directly, gives people a lot of flexibility. However, the touch screen's superpower is also its kryptonite. This flexibility causes us huge issues with core interactions such as typing text, no matter how much fuzzy logic and how many clever band-aids we develop for it. Hard keys are frankly just easier. We took one on the chin for the other benefits of having an infinite supply of interconnected services at our fingertips. And I don't really have a huge problem with the smartphone. It's more how the touch screen has become the prevailing interaction paradigm of all consumer electronics, and even cars.

Drivers shouldn’t touch screens

I'm not going to beat around the bush on this one. I fundamentally don't believe that drivers should have access to a touch screen while driving their vehicles. Passengers, possibly. Bringing their own smartphones or tablets into their vehicles? At their own discretion. Touching a piece of glass in a particular place to perform a task that used to be managed through a physical control is a huge step back and a dangerous step forwards. Most auto manufacturers are considering doing this, if not doing it already. Top of the criticism list has to be Tesla's infamous 17-inch touch screen, where heating controls are hidden behind menus. I have to say, though, that Tesla's UI is improving.

UI is more than GUI

I have to say that I am getting sick and tired of people going on about No UI, Zero UI, Invisible Interfaces and Natural Interfaces. Most of the time, the proponents of these tweetable 'trends' are bright, considered individuals trying to get more work in that space. Some of them are people I have great respect for. And fair play to them. However, most of the time they are playing on our obsession and misunderstanding that all User Interfaces (UI) are Graphical User Interfaces (GUI), and that does a disservice to their credibility and to those who develop UIs for a living. That has never been the case, and never will be, so let's move on and expand our understanding of what a User Interface is: a mediator between a person and a system that helps us control it. Good UIs help us understand the system state, form an intention, and then perform an action on that state through the UI. The system feeds back through the UI, and things go round and round until we have no need to interact.
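The loop described above can be sketched in a few lines of code. This is a purely illustrative toy (the `Thermostat` class and all names here are hypothetical, not from any real product): the person's action goes through the UI into the system, and the new state feeds back to the person.

```python
# A minimal sketch of the UI feedback loop described above:
# the system exposes state through the UI, the person forms an
# intention, acts, and the system feeds the new state back.
# All names here are illustrative, not from any real product.

class Thermostat:
    """A toy system with one piece of state: the target temperature."""
    def __init__(self, target=20):
        self.target = target

    def apply(self, action):
        # The UI translates the person's action into a state change.
        if action == "up":
            self.target += 1
        elif action == "down":
            self.target -= 1
        return self.target  # feedback: the new system state

def interaction_loop(system, actions):
    """Run person -> UI -> system -> feedback until no intention remains."""
    history = []
    for action in actions:            # each action is an explicit intention
        state = system.apply(action)  # the system responds through the UI
        history.append(state)         # the person perceives the new state
    return history

print(interaction_loop(Thermostat(), ["up", "up", "down"]))  # [21, 22, 21]
```

The point of the sketch is the cycle itself: every turn of the loop is an explicit intention met with visible feedback, which is exactly what the 'No UI' framing tends to gloss over.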

If we can appreciate that UI > GUI, what is worse about these No and Natural UI arguments is that they assume interfaces shouldn't necessarily require conscious attention and explicit intention. I guess there are examples where this is the case, but there are far too many where it is a terrible idea. Too often an accidental or natural action can be confused with an intended action for the system in question. 'Natural' interfaces that work by mimicking natural gestures can easily blur the line between an explicit intention to perform an operation and an involuntary action associated with some other intent (like scratching your butt, or waving to a friend).

We’re attracted to magic like magpies

To quote Arthur C. Clarke, 'Any sufficiently advanced technology is indistinguishable from magic', and we love a bit of magic, don't we? Opening a car with our smartphone, controlling a screen by waving our hands in the air. It's so cool, you've got to have it. You can almost imagine the excitement when an engineer pulls this off in front of a crowd.

And that's when you should worry. Because for every 'wow, that's so cool' there's often a 'but what happens if…' naysayer who doesn't get listened to. But we need to listen to those folks because, depending on the application, bad things could happen, and we have to design technology for the different contexts it is likely to meet.

Unfortunately, the truth is, this sort of magical technology does sell. Initially, anyhow.

I'm not even going to begin to go into all the counter-examples around this, but will instead point you to one of my favourite videos about cool technology failing in the real world. The Curious Rituals video by Nicolas Nova of The Near Future Laboratory sums it up perfectly, exposing some of the awkwardness and annoyance of Voice UI, retina scanning, free-space gestures and more.

The best future interfaces are around us now

I tried to make the point that you can classify most User Interfaces into a few categories: knobs, switches, handles, levers and buttons.

The legendary Bill Verplank boils this down to just two: handles and buttons. Handles offer continuous control and buttons offer discrete control, and he argues that the most effective interfaces have both.
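Verplank's distinction can be made concrete with a small sketch (the function names and the volume/mute example are my own hypothetical illustration): a button delivers discrete events, while a handle delivers a continuously varying value.

```python
# A sketch of Verplank's two control types (names are illustrative):
# a button yields discrete state changes, a handle yields a
# continuous value.

def button_press(muted):
    """Discrete control: each press toggles between two states."""
    return not muted

def handle_turn(volume, degrees, degrees_per_unit=10.0):
    """Continuous control: the output varies smoothly with rotation,
    clamped to the 0-100 range."""
    volume += degrees / degrees_per_unit
    return max(0.0, min(100.0, volume))

# Verplank's point: the most effective interfaces combine both,
# e.g. a push-to-mute knob that also turns to adjust volume.
vol = handle_turn(50.0, 25)   # turn the knob 25 degrees -> 52.5
muted = button_press(False)   # press it to mute -> True
```

A push-to-mute volume knob is the classic combination: the same physical object gives you the handle (rotation) and the button (press) at once.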

Most of the interfaces we have in our homes aren’t going to change, and we’re familiar with them, so I see a lot of potential in using more of their characteristics in the future interfaces of tomorrow.

Knob gags and the power of graspable precision control

Sometimes to make a point I feel the need to skate close to the edge of political correctness. Sexual innuendos around my obsession with knobs are my favourite. But I only do this to make an important point around the power of knobs. Sometimes I forget to actually make the important points of why they are good, though.

Knobs, or rotary controls, are a great interface control.

They allow us to make subtle and precise changes to volume; scroll through lists; control both vertical and horizontal movements within a short amount of space; and allow us to interact without looking. It’s why for so long, temperature controls in cars used them extensively.
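For the curious, here is roughly how a detented knob reports those subtle, eyes-free changes to software. Most are quadrature rotary encoders: two slightly offset switches produce a Gray-code sequence, and the order of transitions gives the direction of each click. This is a generic illustrative sketch, not any particular device's driver.

```python
# A sketch of reading a detented rotary knob (quadrature encoder).
# Two offset switches (A, B) form a 2-bit Gray-code state; the order
# of state transitions tells you which way the knob turned.

# Valid (previous_state, new_state) transitions -> quarter-step direction
TRANSITIONS = {
    (0b00, 0b01): +1, (0b01, 0b11): +1, (0b11, 0b10): +1, (0b10, 0b00): +1,
    (0b00, 0b10): -1, (0b10, 0b11): -1, (0b11, 0b01): -1, (0b01, 0b00): -1,
}

def decode(states):
    """Accumulate a position from a stream of 2-bit encoder states."""
    position = 0
    prev = states[0]
    for state in states[1:]:
        # Unknown jumps (e.g. contact bounce) are simply ignored.
        position += TRANSITIONS.get((prev, state), 0)
        prev = state
    return position

# One clockwise click (00 -> 01 -> 11 -> 10 -> 00) is +4 quarter-steps.
print(decode([0b00, 0b01, 0b11, 0b10, 0b00]))  # 4
```

Because direction comes from the transition order rather than an absolute position, the knob can spin forever in either direction, which is what makes it so good for scrolling lists as well as setting levels.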

The Nest thermostat is, in some ways, one really big knob: the whole unit rotates and gives a satisfying click at each temperature increment. It feels good to turn it.

The BMW iDrive controller lets drivers speeding down the Autobahn at 150+ mph get to a song without a shaky finger trying to locate a specific spot on a screen almost an arm's length away. It's safer this way.

The classic Texas Instruments concept dashboard by Native, a London-headquartered design consultancy, uses physical knobs in front of a rear-projected screen to allow access to functions without looking.

To save ourselves some time right now I’ll try to cover knobs in detail elsewhere, but if you really want to indulge, I can’t recommend the knobfeel tumblr site highly enough.

More physical-digital interfaces will emerge

As we get more fluent in touch, gesture, display technologies and more, I hope that we can fuse the best of the touch screen with the best of physical controllers. The 'digital' folks (UI, UX, IA, devs) will need to work with the 'physical' folks (industrial designers, mechanical engineers) and the commercial people to develop interfaces that better fit the needs of people and of the business. I'm not naive, though. This is bloody hard. And there is very little solid rationale other than 'it's just better' right now, as people are magpies and cool tech sells.

We now have choices, as the ‘Understanding Industrial Design’ O’Reilly book says, and some people will create better products as a result.

We are seeing evidence of some good combinations of both physical and digital. Unfortunately, so many of these appear in the music tech world.

For example, Ableton’s Push 2 uses a screen, plenty of knobs, light-up buttons and touch strips to allow people to take their eyes away from their laptops when performing or recording music.

And, one of the more interesting advances in interface technology can be found at the core of the Roli Seaboard. A piano-like array of what Roli calls ‘keywaves’ provides people with the ability to manipulate notes in multiple dimensions.

Vibrato, pitch bend, velocity sensitivity, pressure sensitivity and more are combined to give musicians the flexibility of an orchestra. This is an interface technology that blends buttons and handles in one, and it can be truly magical. Sinking your fingers into the silicone keywaves is an odd but amazingly gratifying experience. Hearing the effects on sounds, when done right, is just mind-blowing. It opens up many ideas in this designer's head, at least.

We need better ways to work between these worlds

As we play around with physical interfaces and combine them with digital services, we need to think more about how we manage the distribution of inputs, outputs and controls — connecting the UI with the services and systems processes.

I was a big fan of littleBits when they emerged, and they keep producing more and more kits that help people understand both analogue and digital electronics. And there are more and more prototyping toolkits coming out all the time that offer the promise of physical prototyping.

But it's SAM Labs who interest me the most. Another London start-up, their kits are wireless by default: little bits (sorry) of UI and sensors that you orchestrate within a desktop application. It's more accessible in some ways than other kits I've seen; my mind was blown within two minutes of playing!

Being wireless by default helps designers, engineers and product people better realise ideas without the physical infrastructure constraints we are so used to. But it also forces a reliance on the flaky wireless protocols and power issues that surround us, and that we are likely to keep experiencing for years to come.

I have various decks around the Future of interfaces and Bridging the physical-digital divide online, and you can always watch the videos here and here for even greater effect.

No rights reserved by the author.


 

Thank you to Jason from all at the Digital Catapult Centre Brighton for your participation in the event and for your interesting thoughts in this follow up!
