Over two-thirds (70 percent) of the American public fear artificial intelligence’s impact on employment, reveals a new study from digital company Syzygy.
The impact of AI on our day-to-day lives has been a hot topic in IoT news of late. The wider public are aware that AI, robotics and automation are combining to shake up the world in which we live.
These are justifiable concerns that tabloid newspapers love to feed, with proclamations that the rise of robotics signals the end of humanity – especially when the likes of Stephen Hawking and Elon Musk weigh in on the subject.
It’s important, therefore, that a more considered discourse takes place, which measures and responds to public concerns and questions, as well as taking steps to protect the people and economies affected by emerging technologies, when necessary.
A report from digital agency Syzygy, led by Dr Paul Marsden, has gauged the prevailing attitudes of the American people, in a paper titled Sex, Lies and A.I.
In an attempt to clarify a term that marketing has rendered virtually meaningless, Syzygy defines AI simply as, “technology that behaves intelligently, using skills we normally associate with human intelligence, including the ability to hold conversations, learn, reason and solve problems”.
AI hopes & fears
Most of the participants appear open to AI playing a greater role in their lives, particularly in how they interact with businesses and brands – with over two-thirds accepting the idea. However, there is widespread scepticism about the benefits of AI technology, with 88 percent of Americans believing that AI in marketing should be regulated by an ethical code of conduct.
The report concludes that businesses employing automated solutions must communicate the practical and personal benefits of AI, stressing how it will make people’s lives easier.
This comes with the caveat that we want to know when an AI is being used. Eighty-seven percent of the American public supports a new ‘Blade Runner law’ that makes it illegal for AI applications such as social media bots, chatbots and virtual assistants to conceal their identity and pose as humans.
This is despite the fact that we generally prefer AIs to reflect human emotions and appearances. ‘Conscientiousness’ was voted the most important personality trait for an AI application, conveying a sense of dependability, dutifulness and efficiency.
According to the report, the emotions evoked by AI are mixed – the most dominant being: ‘interested’ (45 percent), ‘concerned’ (41 percent) and ‘skeptical’ (40 percent). Many people (52 percent) in the US believe AI technology is already influencing their lives; 41 percent remember seeing AI in the media in the last month; and 55 percent use a virtual assistant such as Siri or Alexa.
AI’s impact on employment
While participants generally hope that AI will make their lives easier, there are fears that job automation will have repercussions for employment in the US (with 30 percent labelling it as their top fear). Those surveyed also predict that over one-third (36 percent) of their current job duties could be replaced by AI in the next five years.
The report also revealed strong support for ‘LAWS’ – lethal autonomous weapon systems, popularly referred to as ‘killer robots’.
“Seventy-one percent of Americans believe that this AI technology should be permitted in armed conflict. This US sentiment stands in stark contrast to the call for an outright ban on these weapons by Elon Musk, Neuralink CEO and chairman of OpenAI, along with over 100 leaders in AI research.”
The American public seems generally open to the adoption of AI in their day-to-day lives. However, the report highlights the desire for greater regulation and transparency in how artificial intelligence is employed by businesses, particularly when it comes to the potential for AI to mislead and manipulate. This feeling of helplessness is epitomised in the participants’ fears around AI’s impact on employment.
The ethical challenges of AI
Even when we accept emerging AI technologies, difficult ethical questions remain. The survey raises a moral conundrum that has been increasingly debated since the advent of autonomous cars: how should the AI react in the split seconds before an accident? Syzygy’s report presents the dilemma like this:
“The autonomous vehicle rounds a corner and detects a crosswalk full of children. It brakes, but your lane is unexpectedly full of sand from a recent rock slide. It can’t get traction. Your car does some calculations: If it continues braking, it will almost certainly kill five children. The only way to save them is to steer you off the cliff to your certain death. What should the car do?”
It’s a difficult moral position, and any answer will need to be pre-programmed into the vehicle. Mercedes-Benz executive Christoph von Hugo revealed to Fortune Magazine last year that its autonomous cars will save the car’s drivers and passengers, even at the expense of pedestrians’ lives.
Given that only 30 percent of Americans would travel in a car programmed to minimize fatalities, even at the expense of its own passengers, it’s a seemingly impossible marketing situation. Yet these are the sorts of questions that businesses must answer if they are to provide the clarity the American public is demanding, and to reassure people that a future with AI is one that stands to benefit humankind more widely than they fear.