As connected devices like the Nest thermostat, Amazon Echo, and Fitbit become increasingly commonplace, traditional interfaces like screens become less practical. Interfaces that strip away as much unnecessary effort as possible are simply pursuing what every UX champion strives for: a delightful, usable experience that solves a real problem users have.


  • The real problem with the interface is that it is an interface. Interfaces get in the way. I don’t want to focus my energies on an interface. I want to focus on the job. I don’t want to think of myself as using a computer, I want to think of myself as doing my job. – Don Norman


Natural UI (NUI) is the ability to interact with a machine using nothing but the human body. It often manifests as invisible or hidden interfaces, sometimes referred to as “zero UI”. It refers to a paradigm where our movements, voice, glances, and even thoughts can all cause systems to respond to us through our environment. It often implies a screen-less, invisible user interface where natural gestures trigger interactions, similar to how we would communicate with another person.


NUI is powered by touch, gestures, sound, and our other senses. It is aligned with our human nature and our inherent ability to learn these interactions. NUI takes advantage of existing skills such as speaking, listening, and gesturing. The advantage of designing a NUI is that it builds on common human skills, so you do not have to think as much about different user groups: you can assume that most of your users have the skill simply because they are human.


Voice interaction is the ability to speak to your devices and have them understand and act upon whatever you’re asking. Device manufacturers of all shapes and sizes have been seeking to integrate voice capabilities into their offerings since the introduction of Siri on the Apple iPhone, the launch of Amazon’s Echo with Alexa as the voice assistant, and more recently Google Home, which offers the same style of voice interaction. Voice is poised to reshape UX design, just as mobile touchscreens re-focused web design around a responsive/adaptive, mobile-first strategy. Far from being limited to screen-based interactions, voice interaction will potentially permeate every aspect of users’ lives: the automated home, self-driving cars, retail, travel, and entertainment. Voice represents a new pinnacle of intuitive interfaces that democratize the use of technology.
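After speech is transcribed, an assistant still has to map the utterance to something it can act on. A minimal sketch of keyword-based intent matching in Python; the intent names and keyword lists are hypothetical illustrations, not any vendor’s actual schema:

```python
# Hypothetical intents and trigger keywords for illustration only.
INTENTS = {
    "find_food": ["pizza", "restaurant", "food"],
    "gym_hours": ["gym", "fitness", "workout"],
    "flight_status": ["flight", "plane", "departure"],
}

def match_intent(utterance: str) -> str:
    """Return the first intent whose keywords appear in the utterance."""
    words = utterance.lower().split()
    for intent, keywords in INTENTS.items():
        if any(keyword in words for keyword in keywords):
            return intent
    return "unknown"
```

Real assistants use statistical language understanding rather than keyword lookup, but the shape of the task, utterance in, intent out, is the same.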


InterContinental Hotels recently tested the Amazon Echo (with Alexa Voice Service built in) in selected Crowne Plaza hotels. Guests could verbally ask Alexa where the closest pizza place was, what time the hotel gym opened, or whether their flight home was running late. No screen, keyboard, or mouse needed.

AgVoice demonstrated a solution that lets agricultural inspectors input data into a mobile device using natural speech while keeping their hands free to inspect plants. Previously they had to stop an inspection to type input on a screen or keypad, often in hot, dusty environments with glaring sunlight.

In Asian markets, Citi is using voice biometrics so that customers can speak their name instead of entering their password on a screen to access their account.


Using NUI to talk to our machines in our own human language, rather than with commands, menus, and quirky key combinations, requires advances in natural language processing, which has taken major steps forward in recent years with speech recognition accuracy reaching 96%. But there are wide variations across dialects, and accuracy will need to be closer to 99.9% to feel truly natural. Applications will also need the ability to learn, so that interactions become more effective and accurate the more they are used. To be fully effective, a system needs to know more about us: contextual awareness, using sensors and data, can reveal more about us and what is going on in our world in order to create the experiences we really need.
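The gap between 96% and 99.9% suggests a practical interim pattern: act only on high-confidence transcripts and ask for clarification otherwise. A minimal sketch, assuming a hypothetical recognizer that reports a confidence score; the 0.9 threshold is illustrative:

```python
# Illustrative threshold; real systems tune this per domain and dialect.
CONFIDENCE_THRESHOLD = 0.9

def handle_transcript(text: str, confidence: float) -> str:
    """Act on confident transcripts; otherwise ask the user to confirm."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"ACT: {text}"
    return f"CLARIFY: Did you say '{text}'?"
```

Asking a clarifying question costs the user a moment, but it is far cheaper than acting on a misheard command.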


Voice is only one part of the interaction story: the audio response to our commands, or the use of sound to relay important information, is another way to augment the interaction experience. Sound is our primary means of communicating with each other, and the human sense of hearing lets us filter and focus; while having a conversation in a busy room, we can tune out others and concentrate on the person we are speaking to.

The HEXA concept from Design Partners demonstrates how we can go a step further and use augmented sound as a way of computing. In the industrial space, HEXA is a personal intelligent safety device that protects your ears, filters harmful sounds, and amplifies useful ones; it monitors your health and alertness as well as your environment, alerting you to potential hazards and threats, and gives timely, contextual feedback from devices and mobile apps to enhance productivity, simply by not interrupting a worker with a screen.

For consumers there are several smart earbuds on the market, such as the Bragi Dash, HearPlus Hear One, Sony Xperia Ear, and Samsung Gear IconX, which offer a vision of the interactive experience shown in the movie 'Her', where the character holds NUI conversations with an AI via an in-ear device. Other iconic NUI visions have been brought to life in movies such as Minority Report, Iron Man, and 2001: A Space Odyssey.
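The hearing-protection side of such a device can be illustrated with a simple level limiter: samples louder than a safe ceiling are clamped, while quieter, useful sound passes through unchanged. A minimal sketch with an illustrative threshold (real devices do this with frequency-aware signal processing, not a bare clamp):

```python
def limit(samples, threshold=0.5):
    """Clamp each audio sample's magnitude to the safe threshold."""
    return [max(-threshold, min(threshold, s)) for s in samples]
```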


- When considering a NUI you should allow novice users to learn and progressively discover how to use the interface, laying out a clear learning path for users; progressive learning.

- A NUI should imitate the user’s interaction with the physical world by having a direct correlation between user action and NUI reaction; direct interaction.

- If users find interaction with an interface difficult, their mental effort, or cognitive load, is high. Users should not have to keep thinking about how to manipulate the interface, but should instead focus on achieving the task; cognitive load.
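The progressive-learning principle above can be sketched as a prompter that gives novices full guidance and gradually falls silent as the user succeeds; the thresholds and prompt wording are hypothetical:

```python
class VoicePrompter:
    """Shorten spoken guidance as the user demonstrates proficiency."""

    def __init__(self):
        self.successes = 0

    def record_success(self):
        self.successes += 1

    def prompt(self, command: str) -> str:
        if self.successes < 3:           # novice: full instructions
            return f"Say '{command}' to continue. You can also say 'help'."
        if self.successes < 10:          # intermediate: short reminder
            return f"Say '{command}'."
        return ""                        # expert: no prompt needed
```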


Gesture interaction requires particular consideration, since gestures lack critical clues deemed essential for successful human-computer interaction. Because gestures are ephemeral, they leave behind no record of their path: if you make a gesture and get either no response or the wrong response, there is little information available to help you understand why. It will be important to standardize gestures, either through a formal standards body or simply by convention, to provide the same level of affordance that icons, symbols, and menus do in GUI interaction.
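One way to compensate for the lack of a gesture record is to make the recognizer always report something back, including why it failed. A minimal sketch, assuming a simple two-point swipe model; the gesture names and minimum distance are illustrative conventions:

```python
def classify_swipe(start, end, min_dist=30):
    """Classify a swipe from its start/end points; never fail silently."""
    dx, dy = end[0] - start[0], end[1] - start[1]
    if abs(dx) < min_dist and abs(dy) < min_dist:
        # Explicit feedback instead of silence on a failed gesture.
        return "unrecognized: gesture too short, try a longer swipe"
    if abs(dx) >= abs(dy):
        return "swipe-right" if dx > 0 else "swipe-left"
    return "swipe-down" if dy > 0 else "swipe-up"
```

The "unrecognized" branch is the point: the user gets an explanation they can act on, rather than guessing whether the system saw anything at all.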


Hidden UI does not always mean no screen. Adopting principles like anticipatory design, where available data is used to anticipate what customers want to do next, is part of the NUI experience; finding out as much as you can about your users during their initial interactions can save them the time and mental effort of providing known information in the future.
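Anticipatory design can start from something as simple as a user’s own history. A minimal sketch that predicts the most likely next action as the most frequent follow-up to the current one; the action names in the test data are hypothetical:

```python
from collections import Counter

def predict_next(history, current):
    """Return the action that most often followed `current`, or None."""
    followers = Counter(
        nxt for prev, nxt in zip(history, history[1:]) if prev == current
    )
    return followers.most_common(1)[0][0] if followers else None
```

Production systems use far richer context (time of day, location, sensors), but even this frequency count captures the idea of offering the next step before the user asks.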


The vision of the future is one where a person doesn’t need a computer, a phone, a keyboard, a mouse. Where they don’t have to know how to find what they’re looking for and tap, click and type to get there. Where technology and analytics have advanced to know what a customer wants in advance, and quickly and efficiently deliver it right to them. “The last best experience anyone has becomes their expectation for every experience from then on.” What provides a “wow factor” today will be “business as usual” tomorrow.











I am passionate about innovative design and creating user experiences at the intersection of art, science and technology.







© 2016 Copyright CYXperience