Emotional AI: Mei messaging assistant warns of personality changes

Malek Murison reports on how a new AI system, Mei, could help people understand each other better when they use technology to communicate. But is it as simple as that?

AI assistants are nothing new, but thankfully they’ve come a long way since Microsoft’s intrusive paperclip. The always-listening Alexa, Siri, Bixby, and Cortana – to name just a few – are slowly being embraced and integrated into our homes, offices, and lives.

In part that’s because expanding data sets are moving the intricacies of natural language communication within reach of digital assistants. One example is Google’s Duplex system, which is capable of making calls on our behalf to arrange appointments and reservations.

Duplex is narrow in its expertise, but the trend is clear: AI assistants are becoming more familiar with us and the ways that we communicate.

The natural conclusion of all this is also clear in terms of assistants’ capabilities, if not their application. There’s no reason to doubt that they will eventually become seamless conversationalists. Recent updates to Alexa, for example, were made with that explicit purpose.

But how we use that technology remains to be seen.

A messaging app with integrated AI

One company exploring the possibilities of human-AI interaction is Mei, a New York startup built on the premise that AI could actually teach us a thing or two about communicating with other people.

Google recently launched an AI feature in Gmail that suggests context-driven replies on behalf of its users. Mei’s messaging platform – currently in beta and ready for full launch in the next few weeks – goes further than that.

It includes an AI assistant that combs through users’ message histories to build personality profiles of them and their contacts.

Before a soft launch in August, the Mei team spent two years analysing millions of messages. They’ve combined natural language processing with personalised machine learning models to create an AI that, the company says, “understands its users”.

The result: Mei’s ‘relationship assistant’, which makes suggestions based on the age, gender, and personality traits of whoever a user is texting.

For example, if a contact’s messaging history implies that they are outgoing and spontaneous, Mei will suggest that you play things by ear when making plans.

It’s just a nudge in the right conversational direction, says the company. “This advice is meant to help users recognise where they may be most different from the people they chat with – so they can try harder to find common ground.”

Depending on your viewpoint, the tool either provides a much-needed social barometer or evidence of a vacuum where human empathy once existed. Should machines tell us how to talk to other humans? Should introverts be called out by robots?

“We hope intelligence like this will nurture empathy from within our users, which is important in effective communication,” says the startup. “Because the advice comes from our AI, there’s no fear of judgement by a person.”

Despite the implication that its users lack empathy, so far the feedback has been positive, according to founder Es Lee, a computer science graduate from Harvard.

“We’ve been surprised by how engaged users have been,” he tells Internet of Business. “It feels like we’ve opened up people’s minds to what’s possible with AI, especially when it’s designed to help them.”

And Mei has another intriguing feature: the ability to predict when something is wrong.

AI early warning systems

Predictive analysis is a growing trend in AI, particularly when linked to enterprise asset management systems and digital twins. For example, on the factory floor and across a range of industrial environments, predictive analytics are being used to spot machinery or part failures before they occur.

This capability is based on, among other things, pattern recognition, an understanding of the events that usually lead to a problem, and vast data sets of previous incidents.

In this spirit, Mei is set to launch a new feature that will provide a similar service to its growing base of 40,000 users.

The company’s new ‘anomaly detection’ feature promises to let you know if one of your contacts is messaging in a manner that’s out of kilter with their normal behaviour.

The AI assistant can recognise a range of messaging patterns, from response times to emotional content. As a result, it will let users know when something doesn’t feel right.

“Let’s say a user is chatting with a friend who seemed very carefree over the last year’s worth of text conversations,” Lee explains.

“If suddenly this contact seems more negative and takes longer to respond, we can let the user know something’s different and encourage them to check in on their friend. You can think of this as a ‘guardian angel’ feature that alerts users of changes they might easily miss.”
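Mei hasn’t published how its anomaly detection works, but the general idea Lee describes — flagging behaviour that deviates from a contact’s established baseline — can be sketched with a simple statistical test. The code below is a minimal, hypothetical illustration using a z-score over response times; the function name, threshold, and data are all assumptions, not Mei’s actual implementation.

```python
from statistics import mean, stdev

def is_anomalous(history, latest, threshold=2.0):
    """Flag a value that deviates more than `threshold` standard
    deviations from a contact's historical baseline.
    Illustrative only -- not Mei's actual model."""
    if len(history) < 2:
        return False  # not enough data to establish a baseline
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return latest != mu
    return abs(latest - mu) / sigma > threshold

# Response times (in minutes) for a normally prompt contact.
response_times = [2, 3, 1, 4, 2, 3, 2, 5, 3, 2]

print(is_anomalous(response_times, 3))   # typical reply speed -> False
print(is_anomalous(response_times, 45))  # far slower than usual -> True
```

A production system would presumably track many such signals at once (sentiment, message length, emoji use) rather than a single metric, but the baseline-and-deviation pattern is the same.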

Unless, of course, that friend is merely irritated by constant texts and doesn’t want to speak to the sender.

The exact details of the ‘something’ that’s wrong are beyond the assistant at the moment. But in the future – with enough data harvested and algorithms tweaked – Mei or similar assistants could potentially predict mental health issues or even suicidal tendencies.

It’s a possibility that has already been considered by the Mei team. “We always imagined Mei could potentially play a role in mental health research,” says Lee.

“In the early stages of developing our platform, we saw the potential and conducted some preliminary research, both internally and with mental health professionals. This is something we will be working on, so stay tuned.”

Challenges in the short term

It’s worth noting that the Mei messaging platform isn’t entirely focused on picking through intimate conversations and offering relationship advice.

The app also offers a range of features to take standard SMS messaging to the next level, such as self-deleting messages, end-to-end encryption, and the ability to un-send texts. The Holy Grail for angry, impulsive people, perhaps.

Arguably, though, the app’s biggest challenge will be winning over users who have long since flocked to WhatsApp, Facebook Messenger, Telegram, and other messaging platforms. That’s not to say that one of them might not step in and make Mei an offer one day.

Unsurprisingly, Lee admits that interoperability with other platforms has been the most requested feature from users. “Unfortunately,” he says, “most messaging apps keep their messages closed off. We’re planning to partner with other apps to make integration more seamless.”

On the face of it, a messaging platform that needs to gain access to its users’ messaging history to perform its primary function could struggle in the new data landscape governed – in Europe at least – by GDPR. 

However, Lee suggests that recent changes to privacy regulations could actually help Mei work with the likes of Facebook and WhatsApp, by giving users more control over their data.

Data security

In any case, Mei has taken several steps to protect users’ data, and the company makes clear that it doesn’t “take users’ messages hostage”.

“If users don’t want us to have their data, we make it easy to delete their accounts/data from our systems,” explains Lee.  

“We understand that text conversations may be some of the most private information that people have and have prioritised privacy and choice throughout our development. Mei doesn’t ask for personally identifiable information… so our system doesn’t know the identity of the user.”

That said, many users may include personally identifiable data in their messages, of course.

Any sensitive account information is encrypted by the company, and even the telephone number needed for verification is hashed to become a unique ID. Plus, the AI assistant is switched off by default, so users have to explicitly grant it access to their messaging history.
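Mei hasn’t disclosed its exact scheme, but deriving a stable, non-reversible user ID from a phone number is a standard technique. A minimal sketch, assuming a salted SHA-256 digest (the salt, normalisation, and hash choice here are illustrative, not Mei’s actual approach):

```python
import hashlib

def phone_to_id(phone_number: str, salt: str = "app-specific-salt") -> str:
    """Derive a stable, non-reversible user ID from a phone number.
    The salt and hash choice are illustrative assumptions."""
    # Normalise so that formatting differences don't change the ID.
    digits = "".join(ch for ch in phone_number if ch.isdigit())
    return hashlib.sha256((salt + digits).encode("utf-8")).hexdigest()

# Formatting variants hash to the same ID; different numbers do not.
a = phone_to_id("+1 (555) 123-4567")
b = phone_to_id("15551234567")
print(a == b)  # True
```

The point of the salt is that a hashed number can still be brute-forced from the small space of valid phone numbers if the scheme is unsalted, so a secret, app-specific salt (or a keyed hash such as HMAC) is the usual precaution.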

Internet of Business says

A launch with intriguing possibilities – and one that seems somehow inevitable. If the beta uptake is anything to go by, people are certainly open to providing access to their messages for a taste of AI relationship guidance.

And with features still to come – and a blockchain-based credit system that rewards users for feedback and data-sharing – it might not be long before Mei’s assistant grows a lot smarter. 

But it’s unclear if Mei will be an angel or a devil on users’ shoulders. It’s possible that Mei could put back the empathy often missing from text and email conversations, where it’s easy to misread a sender’s mood.

But the flip side of that is an acknowledgement that empathy is being removed from human communication by technology – as if Twitter trolls and tabloid comments threads were not evidence enough of that.

Conceivably, Mei could have other uses, such as warning a minor that the person they are talking to is an adult, and not another child. But equally, it might help such an adult talk to a child.

But for those who struggle to read others’ emotions and moods, Mei could be a boon, helping them navigate the messy human world.

These are the complex challenges facing AI as it learns to understand us – or, at least, to recognise recurring patterns in our behaviour. Let’s hope that it brings people closer together, rather than deepens human divisions and biases.