Concerns about the weaponisation of artificial intelligence (AI) have increased with the news that the Pentagon is ploughing $2 billion into the technology over the next five years.
The Defense Advanced Research Projects Agency (DARPA), part of the US Department of Defense (DoD), is investing the sum in a bid to make the technology more widely accepted by the military.
The AI Next programme was unveiled by agency director Dr Steven Walker on Friday, during closing remarks at DARPA’s D60 Symposium at the Gaylord Resort and Convention Center in National Harbor, Maryland.
The event celebrated 60 years of the organisation.
“With AI Next, we are making multiple research investments aimed at transforming computers from specialised tools to partners in problem-solving,” said Dr Walker.
“Today, machines lack contextual reasoning capabilities, and their training must cover every eventuality, which is not only costly, but ultimately impossible.”
As a result, Walker said that the agency wanted “to explore how machines can acquire human-like communication and reasoning capabilities, with the ability to recognise new situations and environments and adapt to them.”
The announcement focused on how the AI Next programme will explore innovation in internal functions, such as automating critical DoD business processes.
These include: security clearance vetting; accrediting software systems; improving the reliability and resilience of AI systems; enhancing the security of machine learning and AI technologies; reducing power, data, and performance inefficiencies; and pioneering the next generation of AI algorithms, such as “explainability” and common sense reasoning.
DARPA is running over 20 development programmes that are designed to advance the state of the art in AI, pushing beyond second-wave machine learning techniques and towards contextual reasoning capabilities.
It is also running over 60 active programmes applying AI in some capacity, from agents collaborating to share electromagnetic spectrum bandwidth to detecting and patching cyber vulnerabilities.
A key component of the AI Next programme will be DARPA’s Artificial Intelligence Exploration (AIE) programme, announced in July 2018.
“In today’s world of fast-paced technological advancement, we must work to expeditiously create and transition projects from idea to practice,” said Dr Walker.
Internet of Business says
Fears about the weaponisation of AI have been lurking beneath the surface of the industry for some years. Indeed, they were the reason for the recent employee rebellion at Google, which forced the company to quit the Pentagon’s Project Maven programme, which is using AI to analyse drone footage for potential targets.
Earlier this year, we reported on how self-driving vehicles will experience widespread adoption in the military first – or so the Pentagon believes.
But according to a statement from DARPA on Friday, the agency is keen to stress its consumer and ethical credentials. It helped usher in a second wave of AI machine learning technologies in the 1990s, it said.
The agency believes its funding of natural language processing, problem solving, navigation and perception technologies has “led to the creation of self-driving cars, personal assistants, and near-natural prosthetics, in addition to a myriad of critical and valuable military and commercial applications”.
These second-wave AI technologies depend on large amounts of high-quality training data, do not adapt to changing conditions, offer limited performance guarantees, and are unable to explain their results to users. This is why DARPA is now funding what it calls “accelerating the third wave” of AI.
But not all of its activities are so benign, despite the careful messaging. Also last week at D60, for example, DARPA announced the development of a brain-computer interface that could enable a human being to control everything from a swarm of drones to an advanced fighter jet using nothing but their thoughts.
“As of today, signals from the brain can be used to command and control… not just one aircraft but three simultaneous types of aircraft,” said Justin Sanchez, director of DARPA’s Biological Technologies Office.
“The signals from those aircraft can be delivered directly back to the brain so that the brain of that user can also perceive the environment. It’s taken a number of years to try and figure this out.”
Additional reporting: Chris Middleton.
- Read more: US army develops battlefield AI for troop safety
- Read more: US soldiers to get miniature personal reconnaissance drones
- Read more: Underwater robotics company agrees swarm-bot deal with US Navy
- Read more: Research: NASA to explore Mars with swarm of robot bees