Why voice is key to the future of the smart home

SharpEnd’s Cameron Worth looks at how voice activation will be the driving force as the Internet of Things (IoT) makes its way into the smart home, retail stores and even in airports.

Ubiquitous computing is fast becoming a reality. The growing number of voice-activated devices is the single biggest step toward living in a completely connected environment. How can we expect our experiences in and out of the home to change with voice interaction, and what are the key points for building these experiences?

Here at SharpEnd, we started our discussion from our experiences developing for voice interaction, during which we highlighted two key points to address before any development begins – social environment and service layers.

Social environment

The problem with voice is that it’s hard to be discreet; it’s an open form of communication, which can present real problems, particularly around user adoption. Users may feel intimidated by the idea of having to speak out loud to an inanimate object, particularly in public where others can hear.

I can’t help but have the same reaction to people who talk to Siri in public as I did to those who wore flashing Bluetooth earpieces in the noughties. We concluded that for voice activation to work properly, the user must have a moment of public isolation or a hands-free service.

To clarify, public isolation is an environment in which the user is not surrounded by people, which might make them feel self-conscious.

The home is an example of public isolation; a tourist information desk is not. However, this is a grey area, as public environments differ. Imagine a voice-activated elevator where one says the floor to which the elevator should travel. Although often filled with strangers, this interaction is common to all users, and as such is more acceptable.

Service layers

During our development process (see the ‘product spotlight’ section) we identified the type of interactions that work best in a voice-activated environment. These are what we call ‘micro-services’ – for example, asking for the music to be turned on or the lights to be turned off.

Micro-services are bite-sized interactions that can be executed faster by voice than through a smart screen device. It’s also important to note that the returned information is usually short – such as an instruction or a confirmation that a service has been executed. Pinpointing micro-service opportunities in spaces such as retail will be key to finding the right engagement points for voice.

Building on these insights we highlighted key micro-service areas with public isolation to provide examples of how voice can be integrated in the future.

Voice in the smart home

The home automation market is the space in which voice activation will really come into its own. It’s an ideal environment, as the variety of third-party devices within the home offers multiple touch-points for micro-service interactions. Smartphone apps and devices such as Nest and Sonos are already addressing these micro-services; voice activation, however, will help streamline these interactions into a single format.

Finally, of course, within the home the user is ‘isolated from the public’, making the home the perfect environment for users to adopt voice activation.

Interactions with smart home devices such as lights, the TV and even door locks can all be voice-activated. As such, a single command could change the entire house environment: saying ‘I’m going to bed’ could trigger the door to lock, the lights to switch off and any other A/V appliances to power down.
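This ‘single command’ idea boils down to mapping one spoken phrase onto a bundle of micro-service actions. A minimal sketch, assuming hypothetical device and scene names (`SmartDevice`, `SCENES` and `handle_utterance` are all illustrative – real products expose their own APIs):

```python
class SmartDevice:
    """Hypothetical device wrapper that records the last action applied."""
    def __init__(self, name):
        self.name = name
        self.state = "unknown"

    def apply(self, action):
        self.state = action
        return f"{self.name}: {action}"

# Map a single spoken phrase to a bundle of micro-service actions.
SCENES = {
    "i'm going to bed": [
        ("front door", "lock"),
        ("lights", "off"),
        ("tv", "off"),
    ],
}

def handle_utterance(utterance, devices):
    """Dispatch each action in the matching scene; return short confirmations."""
    actions = SCENES.get(utterance.strip().lower(), [])
    return [devices[name].apply(action) for name, action in actions]

devices = {n: SmartDevice(n) for n in ("front door", "lights", "tv")}
print(handle_utterance("I'm going to bed", devices))
# → ['front door: lock', 'lights: off', 'tv: off']
```

Note the replies stay short – an instruction in, a confirmation out – which is exactly the micro-service shape described above.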

As the voice-activated home-automation market gains traction, user confidence will grow and the smart home will fast become reality. We can expect to see voice integrated into more public spaces such as those listed below.


Voice in retail

The retail environment offers many scenarios in which users may find themselves alone and in need of assistance. The most compelling use cases we found came from the fitting room: customers often realise in the changing room that they actually need a size smaller or would prefer an item in a different colour. The process of finding a new item can result in customers leaving the shop – but what if voice activation could provide a concierge service to have these clothes brought to the changing room?

Voice activation can be incorporated with displays and other technologies to provide the information shoppers need. For example, an RFID reader could register new clothes being tried on and return suggestions. The customer might ask “What will go well with this dress?”, and the fitting room’s voice assistant and mirror might respond “I think the XXX V-neck jumper will suit you better with the XXX dress.”
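The pairing lookup behind that exchange could be very simple: scanned RFID tags resolve to garments, and garments resolve to suggestions. Everything in this sketch – the tag IDs, product names and the `suggest` helper – is hypothetical; a real deployment would query live stock data rather than hand-built tables.

```python
# Hypothetical tag-to-product and product-to-pairing tables.
PRODUCTS = {"tag-001": "dress", "tag-002": "v-neck jumper"}
PAIRINGS = {"dress": ["v-neck jumper", "ankle boots"]}

def suggest(scanned_tags):
    """Return pairing suggestions for each garment scanned in the fitting room."""
    out = {}
    for tag in scanned_tags:
        item = PRODUCTS.get(tag)
        if item:
            out[item] = PAIRINGS.get(item, [])
    return out

print(suggest(["tag-001"]))
# → {'dress': ['v-neck jumper', 'ankle boots']}
```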

Moving further, how might voice activation assist shoppers trying to find a product in store, or those who can only identify a product by its description?

Voice in the airport

The airport is a unique environment in which to engage with passengers: emotions can run high, the process can be stressful, and there’s often a lot of time to kill. This provides plenty of opportunities to improve the passenger experience – from the moment passengers arrive at the terminal, through the retail environments and lounges, to the airline itself.

In moments of isolation such as the airport lounge, where passengers may have their own booths, concierge services could be offered to bring extra class to the experience – for example, requesting food or drink without having to get the attention of a waiter.

Amazon Echo leads the smart home wave

Many voice assistants already exist, but none has come closer to realising the possibilities of voice-activated spaces than the Amazon Echo. The Echo is a standalone personal voice assistant similar to Siri, Cortana or Google Now, but separate from a smartphone. It differs from assistants such as Apple’s Siri + HomeKit in that it offers easy web integration for third-party devices.

Recently we got our hands on a couple of Echos and started work on our first client project driven entirely by voice. The developer framework allows for API and database integration, making engagement with third-party devices straightforward. Currently the skills (apps) developed for the Echo are limited, with most returning spoken information or providing music services. Having used the Echo for only a few days, though, it’s clear this form of computer interaction will go well beyond its current abilities.
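To give a flavour of what skill development involves, a skill handler essentially receives a structured intent request and returns a short piece of speech – the micro-service pattern again. This is a simplified sketch modelled loosely on the Alexa Skills Kit’s JSON request/response shapes; the intent name and spoken replies are hypothetical, and a real skill would call out to a device’s web API where the comment indicates.

```python
def lambda_handler(event, context=None):
    """Route an incoming intent to a micro-service; return a short spoken reply."""
    request = event.get("request", {})
    if request.get("type") == "IntentRequest":
        intent = request["intent"]["name"]
        if intent == "TurnOffLightsIntent":  # hypothetical intent name
            # A real skill would call the lighting device's web API here.
            speech = "Okay, the lights are off."
        else:
            speech = "Sorry, I can't do that yet."
    else:
        speech = "Welcome."

    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }

event = {"request": {"type": "IntentRequest",
                     "intent": {"name": "TurnOffLightsIntent"}}}
print(lambda_handler(event)["response"]["outputSpeech"]["text"])
# → Okay, the lights are off.
```

The design choice worth noting is how little the handler does: the heavy lifting (wake word, speech-to-text, intent matching) happens in the cloud before the request arrives, which is why skills stay small.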

In summary

Voice activation will be the biggest step towards ubiquitous computing. It is more than just a Siri or Cortana service, and it has already started in the smart home. As more companies enter the home automation market to streamline multiple devices like Nest and Sonos, we can expect voice-activated technology to become increasingly advanced and spill over into new environments.

Cameron Worth is the founder of London-based IoT agency SharpEnd, whose clients include Absolut Vodka, Unilever, Beiersdorf and Pernod Ricard.
