A new type of earbud that can recognise facial expressions is under development. Future possible uses include providing control of a smartphone to people with impaired movement.
The technology has been developed by researchers at Fraunhofer Institute for Computer Graphics Research in Rostock and the University of Cologne in Germany.
It is based on nothing more sophisticated than an Arduino, the open-source hardware and software platform designed as a low-cost way for people to learn about and experiment with computing.
Ear we go
EarFieldSensing, or EarFS for short, described in an academic paper by the researchers, relies on detecting changes in the shape of the ear canal and other effects caused by facial movement. Data from the earbud are converted into instructions that are delivered to a smartphone.
For example, when we smile, it isn’t only the muscles around our mouth that move. Muscles in the ear move, too. A sensor attached to the earlobe detects these movements as electrical field changes. These can be interpreted as a specific instruction for a phone to carry out.
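In software terms, the final step the article describes amounts to mapping a recognised expression to a phone command. The sketch below is purely illustrative and assumes hypothetical labels and actions; it is not taken from the EarFS paper, which does not publish its code. It uses the five expressions the prototype reportedly detects.

```python
# Illustrative sketch only: a lookup from a classified facial expression
# to a hypothetical smartphone command. Labels and actions are assumptions.

EXPRESSION_ACTIONS = {
    "smile": "answer_call",
    "wink": "dismiss_notification",
    "turn_head_right": "next_track",
    "open_mouth": "play_pause",
    "shh": "mute",
}

def command_for(expression: str):
    """Return the phone command for a recognised expression, or None."""
    return EXPRESSION_ACTIONS.get(expression)
```

A real system would sit downstream of a signal-processing and classification stage that turns the raw electrical-field measurements into one of these labels.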
The developers say that the current version of the system, which is at the prototyping stage, can detect five expressions with 90 percent accuracy: smiling, winking, turning the head to the right, opening the mouth and saying ‘shh’.
“Something as simple as answering a call with a facial expression could be possible soon,” inventor Denis Matthies from the Fraunhofer Institute told New Scientist.
Beyond the lab
The researchers acknowledge that in commercial use the system would need to take context into account, not just the expressions themselves. For example, a smile could be interpreted as an instruction to answer a call only if the phone is actually ringing.
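The context-gating idea can be sketched in a few lines. This is a minimal illustration under assumed names, not the researchers' implementation: the same expression yields an action only when the phone's state makes that action meaningful.

```python
# Illustrative sketch: interpret an expression in the light of phone state.
# The rule from the article: a smile answers a call only while ringing.

def interpret(expression: str, phone_ringing: bool):
    """Return an action for the expression given current phone state,
    or None if the expression has no meaning in this context."""
    if expression == "smile":
        return "answer_call" if phone_ringing else None
    return None
```

The key design point is that the classifier's output is a suggestion, and a separate layer decides whether the current context makes it a valid command.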
There is potential for EarFS to be used across a range of scenarios. With smartphone makers constantly looking for the next big thing to make their phones stand out, an intelligent earbud that shaves seconds off checking a text message, listening to a voicemail or answering a call, all without reaching for the phone, could be a selling point. In many respects, this is a logical extension of the Bluetooth hands-free kits that have been around for years.
Indeed, Sony’s Xperia Ear is already on the market and offers a wide range of ‘remote control’ features for phones. Where it differs from EarFS is that its primary control mechanism is voice, though it does also understand when a user nods their head. Adding the ability to pick up facial expressions brings more potential than the spoken word alone.