IoT security: New AI, ML, 5G, WingOS, satcom risks identified

Last week was a bad one for the cybersecurity sector, according to security experts at the DEF CON and Black Hat conferences. Chris Middleton rounds up the latest batch of reports.

Cybersecurity companies’ reliance on artificial intelligence (AI) and machine learning is introducing new types of automated security risk, an industry insider has warned.

According to Raffael Marty, VP of corporate strategy at security firm Forcepoint, many cybersecurity firms are jumping on the AI bandwagon largely to attract corporate IT buyers who have bought into the hype. As a result, some are rushing products to market based on AI systems that have been insufficiently trained.

“What’s happening is a little concerning, and in some cases even dangerous,” he said at the Black Hat cybersecurity conference in Las Vegas last week.

Marty explained that many new AI-powered cybersecurity products are centred on supervised machine learning – that is, models trained on data that has been tagged by human researchers, who label code samples as clean or as malware.

However, if hackers are able to gain access to a security firm’s systems, they could corrupt that data at source by switching labels so that some malware is tagged as clean code, he said.

Alternatively, they could work out which elements of their own code are being flagged as malicious in the training data and remove them.

Automated systems trained on those data sets may then pass the malicious code as clean – a problem inherent in any solution that relies on a single algorithm.
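The label-flipping attack Marty describes can be sketched in a few lines of illustrative Python. This is a toy, not any vendor's actual pipeline: the "detector" is a deliberately simple nearest-neighbour classifier, and the feature vectors are synthetic stand-ins for code samples.

```python
# Illustrative sketch of training-data poisoning by label flipping.
# Synthetic features: malware samples cluster near +1, clean near -1.
import random

random.seed(42)
DIM = 3

def sample(malicious):
    """Toy feature vector standing in for a code sample."""
    centre = 1.0 if malicious else -1.0
    return [centre + random.gauss(0.0, 0.3) for _ in range(DIM)], malicious

train = [sample(i % 2 == 0) for i in range(200)]      # 100 malware, 100 clean
test_malware = [sample(True)[0] for _ in range(100)]  # unseen malware

def predict(dataset, x):
    """1-nearest-neighbour: a deliberately simple single-algorithm detector."""
    _, label = min(dataset,
                   key=lambda s: sum((a - b) ** 2 for a, b in zip(s[0], x)))
    return label

# The attack: an intruder with write access to the training set relabels
# the malware samples as clean at source.
poisoned = [(x, False) if y else (x, y) for x, y in train]

def recall(dataset):
    return sum(predict(dataset, x) for x in test_malware) / len(test_malware)

clean_recall, poisoned_recall = recall(train), recall(poisoned)
print(f"malware detected with honest labels:  {clean_recall:.2f}")
print(f"malware detected with flipped labels: {poisoned_recall:.2f}")
```

With honest labels the toy detector catches essentially all of the held-out malware; once the labels are flipped, the same algorithm trained on the same feature vectors waves the malware through – the failure mode Marty warns about for any single-algorithm system.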

Marty suggested that buyers’ reliance on new, AI-powered security solutions in an age of hype could be a dangerous gamble.

Winging security in the air

Also at the end of last week, two senior consultants from cybersecurity firm IOActive shared the results of further research programmes in the connected security world.

On Thursday, Ruben Santamarta presented his own Black Hat talk, ‘Last Call for Satcom Security’, while on Sunday, Josep Pi Rodriguez gave a talk at DEF CON 26, entitled ‘Breaking Extreme Networks’ WingOS: How to Own Millions of Devices Running on Aircrafts, Government, Smart Cities and More’.

Their findings should ring alarm bells in many organisations.

Research published by Santamarta in 2014 described a number of theoretical scenarios that could result from the weak security posture of satellite communications products. Four years later, his Black Hat talk revealed how hundreds of in-flight aircraft, military bases, and maritime vessels are theoretically accessible to malicious actors via the vulnerable satcom infrastructure.

“The consequences of these vulnerabilities are shocking. Essentially, the theoretical cases I developed four years ago are no longer theoretical,” he claimed.

“To my knowledge, my Black Hat talk is the first public demonstration of taking control, from the ground and through the Internet, of satcom equipment running on an actual aircraft.”

Santamarta found that several of the largest airlines in the US and Europe have allowed their entire fleets to be accessible via the Internet, with hundreds of connections exposed.

Meanwhile, Rodriguez’s presentation highlighted critical vulnerabilities in Extreme Networks’ WingOS, which was originally created by Motorola.

The embedded operating system is found in many devices used by airlines, subways, hospitals, hotels, casinos, resorts, mines, smart cities, and sea ports, among others.

Some of the vulnerabilities identified by Rodriguez require no authentication at all, he said, meaning that an attacker could exploit them via an open Ethernet or Wi-Fi connection.

“Let’s put us in the New York City subway or in the aircraft scenario,” he said. “We know that normally these vulnerable devices running WingOS are connected to other assets of the internal network that are not normally reachable from the Internet.

“Let’s say that an attacker is able to exploit one of the vulnerabilities through the Wi-Fi or Ethernet network. Since the attacker now has code execution at the WingOS device, he can now pivot and try to attack these other assets inside the internal network of the New York City subway or in the aircraft.”

The rush to automated security

Given these vulnerabilities, it is hardly surprising that many organisations are turning to AI-powered solutions – despite the problems outlined above.

The rush to AI and machine learning in the cybersecurity industry is partly rooted in a numbers game. With Gartner forecasting that there will be 20 billion Internet of Things (IoT) devices online by 2020 – one billion of them in the US alone – the security industry’s hunger for fresh algorithms is insatiable.

Multiple reports have revealed the lack of security protocols in many popular smart devices, along with poor security practices among users. The outcome is an industry rushing to market with insecure devices, while buyers are turning to hyped solutions to fix the problem while ignoring basic security procedures: a toxic mix.

At the same time, there is a growing shortage of skilled cybersecurity workers within client organisations, making automated and/or AI-driven solutions seem increasingly attractive.

For example, a report published by the Ponemon Institute in May found that more and more IT security functions are understaffed.

According to that report, Staffing the IT Security Function in the Age of Automation, only 25 percent of respondents said that their organisations have no difficulty attracting qualified candidates, compared to 34 percent in 2013. Meanwhile, only 28 percent reported that their organisations find it easy to retain qualified candidates, compared to 42 percent in 2013.

Seventy-five percent of organisations now believe that their teams are understaffed – an increase of five percentage points on 2013.

GCHQ warns on 5G

Meanwhile, one of the UK’s most senior cybersecurity experts has warned that the introduction of 5G networks, alongside AI and the IoT, is ramping up the cybersecurity challenge even further, with China’s strong presence across all of these fields posing a potential national security threat.

Writing in the Sunday Times yesterday, GCHQ chief Jeremy Fleming said, “We have entered a new technological age, one that will fundamentally change the way we live, work, and interact with each other.

“This new digital landscape will transform lives and economies as data analysis, artificial intelligence, 5G, the IoT, quantum computing, and many other technologies still being developed permeate all areas of human endeavour.”

According to Fleming, these technologies “bring risks that, if unchecked, could make us more vulnerable to terrorists, hostile states, and serious criminals”, with many 5G technologies in particular coming from China.

“We must ensure that processes represent industry best practice so as to avoid real risk to the UK’s critical national infrastructure,” he continued. “We need to consider early, robust and fair solutions to the global challenge of balancing investment, trade, and security.”

Plus: Machine learning can identify programmers

In related news, researchers have developed a machine learning system that can reveal the identity of programmers, in both raw source code and compiled binaries.

The findings were presented at DEF CON 26 on Friday by Rachel Greenstadt, associate professor of Computer Science at Drexel University, and Aylin Caliskan, an assistant professor at George Washington University.

According to their report, with eight code samples apiece from 600 programmers, the system was able to identify creators 83 percent of the time.
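The general idea behind code stylometry can be sketched in miniature. This is not the researchers' feature set or model – their work uses far richer syntactic features – but a toy example showing how crude lexical statistics can already separate two hypothetical authors with different habits:

```python
# Toy code-stylometry sketch: attribute a snippet to the nearest
# "style profile" built from known samples. Authors and features
# are hypothetical illustrations, not the researchers' method.
import re
from collections import Counter

def features(code):
    """Crude style profile: token frequencies plus average line length."""
    tokens = re.findall(r"[A-Za-z_]\w*|\d+|\S", code)
    total = max(len(tokens), 1)
    profile = {tok: n / total for tok, n in Counter(tokens).items()}
    lines = code.splitlines()
    profile["__line_len__"] = sum(len(l) for l in lines) / max(len(lines), 1) / 100
    return profile

def distance(a, b):
    return sum((a.get(k, 0) - b.get(k, 0)) ** 2 for k in set(a) | set(b))

# Two hypothetical authors: "alice" writes verbose loops, "bob" terse one-liners.
alice = ["for i in range(10):\n    total = total + i\n",
         "for j in range(5):\n    result = result + j\n"]
bob = ["total=sum(range(10))\n", "result=sum(range(5))\n"]

def attribute(snippet):
    """Nearest-profile attribution over the known samples."""
    candidates = [("alice", s) for s in alice] + [("bob", s) for s in bob]
    author, _ = min(candidates,
                    key=lambda c: distance(features(snippet), features(c[1])))
    return author

print(attribute("for k in range(3):\n    acc = acc + k\n"))  # verbose style
print(attribute("acc=sum(range(3))\n"))                      # terse style
```

Scaled up to richer features and hundreds of authors, the same attribution principle underlies the 83 percent result reported above – which is also why it threatens anonymity for coders whose style is on record.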

The technology could be a boon in cases of IP theft and malicious hacking, but could also prevent some coders from contributing work anonymously to open source or other collaborative projects.

Internet of Business says

Another week brings yet more warnings about escalating threats to cybersecurity – and reports that offer few, if any, answers.

However, the observation that the hype cycle around AI is pushing vendors and buyers alike to grasp at solutions is a good one, and it is worth stressing that any ill-considered rush to AI-based systems could result in security compromises.

So what can be done about it?

Internet of Business is committed to providing answers to security challenges, and a number of our recent reports have explored the core issues within technologies such as AI, the IoT, and 5G networks, with expert analysis of the best strategic and operational responses.


Chris Middleton
Chris Middleton is former editor of Internet of Business, and now a key contributor to the title. He specialises in robotics, AI, the IoT, blockchain, and technology strategy. He is also former editor of Computing, Computer Business Review, and Professional Outsourcing, among others, and is a contributing editor to Diginomica, Computing, and Hack & Craft News. Over the years, he has also written for Computer Weekly, The Guardian, The Times, PC World, I-CIO, V3, The Inquirer, and Blockchain News, among many others. He is an acknowledged robotics expert who has appeared on BBC TV and radio, ITN, and Talk Radio, and is probably the only tech journalist in the UK to own a number of humanoid robots, which he hires out to events, exhibitions, universities, and schools. Chris has also chaired conferences on robotics, AI, IoT investment, digital marketing, blockchain, and space technologies, and has spoken at numerous other events.