THE BIG READ A new report cautions that the Internet of Things may undermine the concept of personal privacy to such an extent that even our private thoughts and conversations may belong to advertisers. But what can the industry do about it? Chris Middleton offers his own analysis.
The Internet of Things will expand the constant data collection practices of the online world into the offline one, according to a new report on the data privacy implications of connected technologies.
This will enable and normalise preference and behaviour tracking offline, it says, to such an extent that “the very notion of an offline world may begin to decline”.
The detailed, 150-page document, Clearly Opaque: Privacy Risks of the IoT (2018), has been published by the Internet of Things Privacy Forum. The international nonprofit organisation produces guidance, analysis, research, and best practice for industry and governments on reducing privacy risks through responsible innovation.
The qualitative shift in people’s lives brought about by smart devices, homes, offices, spaces, and cities needs urgent attention, warns its report, because of the extreme diminishment of private spaces that will result.
“The scale and proximity of sensors being introduced will make it harder to find reserve and solitude,” explains the report. “The IoT will make it easier to identify people in public and private spaces.”
“Privacy is a social and collective value,” it continues. “If privacy protects people’s capacities to participate in democracy, then it confers benefits on society as a whole. The preservation of privacy must therefore be enacted at social levels, and not be left exclusively to the domain of individual people and how they experience it.
“The way we discuss privacy, the way we employ it to govern information, and the power it holds, and the way we encode it into formal and informal policy instruments have direct bearing on the kind of society we collectively create.”
Even people’s inner space will be breached by the IoT, believes the organisation. “The IoT will encroach upon emotional and bodily privacy. The proximity of IoT technologies will allow third parties to collect our emotional states over long periods of time. Our emotional and inner life will become more transparent to data-collecting organisations.”
The report cites the rise of emotion detection technologies (via facial data, biometrics and voice analysis), sentiment analysis, and ‘affective computing’ – computing that relates to, arises from, or influences emotions. In China, shoppers are already paying for goods with their smiles, as this separate external report explains.
In late 2017, a consumer advocacy group published research on patents secured by Google and Amazon relating to the future functions of their digital assistants. In one of these, Amazon patented a method for extracting keywords from ambient speech, which would then trigger targeted advertising. A person might say, “I love skiing,” and then be served relevant ads, even though the words were spoken to another person, and not to the virtual assistant.
In another patent, Google describes a smart home in which “mischief may be inferred based on observable activities” by children. (See below for more on how children are directly at risk, according to the report.)
“It seems highly likely that companies will continue to expand into emotion and sentiment observation, gaining ever more access to what lies below our public behaviour and speech,” says the report. “Given the likelihood of ubiquitous data collection throughout the human environment, the notion of ‘privacy invasion’ may decompose; more so as people’s expectation of being monitored increases.”
It is therefore crucial to continue to introduce privacy approaches that assert control over devices and data flows, says the report.
Internet of Business recently reported on how, among other things, ride-hailing giant Uber is seeking a patent for AI systems that monitor passengers for signs of alcohol and drug-taking.
Throwing stones at glass houses
The home is itself in danger of becoming a “glass house”, continues the wide-ranging privacy report, one that is “transparent to the makers of smart home products”.
The challenge is that most consumer IoT applications are predicated on inviting these devices, and the privacy invasions they bring, into our lives, it says. However, the ability to know who is observing us in our private spaces may cease to exist as a result.
The report cites a number of legal precedents to reinforce its point about the sanctity of the home: that it isn’t a vague concept, but a core legal principle.
In the seminal US Supreme Court case, Kyllo v US – in which police used a thermal imager without a warrant to detect heat patterns from marijuana cultivation in the defendant’s house – Justice Scalia said: “In the home… all details are intimate details, because the entire area is held safe from prying government eyes… We have said that the Fourth Amendment draws a firm line at the entrance to the house.”
The 2001 case specifically concerned law enforcement’s ability to breach the home boundary, but it is illustrative of the legal and cultural sanctity of the walls of the home, explains the report. This week, MIT released details of an AI system that can “see through walls” and infer the location of people from radio signals.
In Europe, this sanctity is embodied in Article 7 of the Charter of Fundamental Rights of the EU: “Everyone has the right to respect for his or her private and family life, home and communications.” The rights of children are afforded similar, specific protection under UN regulations (see below).
Two US states, Arizona and Washington, guarantee privacy in their constitutions, with both citing the home as a critical context: “No person shall be disturbed in his private affairs, or his home invaded, without authority of law.”
However, connected devices are designed to be unobtrusive, so people can easily forget that they are being monitored in their home, work, travel, and other environments, says the report.
In this way, IoT devices blur regulatory boundaries, meaning that privacy governance in general – and sector by sector in the economy – becomes “muddled” as a result. Market shifts towards ‘smart’ features that are intentionally unobtrusive lead to less understanding of data collection – and less ability to decline those features as a direct consequence.
As more and more products are released with IoT-like features, there will be an erosion of choice for consumers – in other words, reduced ability to avoid having ‘things’ in their environment that constantly monitor them.
A surveillance culture?
In this way, the IoT “retrenches the surveillance society, commodifies people, and exposes them to manipulation”, adds the report. It also makes gaining meaningful consent more difficult.
As a result, the IoT is “in tension with the principle of transparency”, it continues. “The IoT threatens the participation rights embedded in the US Fair Information Practice Principles and the EU General Data Protection Regulation (GDPR).
“IoT devices are not neutral; they are constructed with a commercial logic encouraging us to share. The IoT embraces and extends the logic of social media – intentional disclosure, social participation, and continued investment in interaction.”
Children will be particularly affected by these technologies and the accompanying challenges, warns the report: “The IoT will have an impact on children, and therefore give parents additional privacy management duties. Children today will become adults in a world where ubiquitous monitoring by an unknown number of parties will be business as usual.”
As touched on above, the right of children to be protected from intrusive monitoring and surveillance is enshrined under Articles 13 to 17 of the UN Convention on the Rights of the Child, which cover their right to privacy and free association.
What can we do about it?
With the risk of personal information breaches, identity theft, “autonomy harm”, diminished user participation in decisions, violation of privacy expectations, technology’s encroachment on emotional privacy, and loss of private spaces, the report appears to suggest that there are few upsides to the IoT when it comes to personal privacy, as a culture of constant surveillance emerges.
And it’s worth adding to the report’s findings that many citizens are actively embracing a surveillance culture, as they retreat into homes protected by smart security cameras and doorbells, home safety monitoring apps, neighbourhood security networks, and even private police forces (whose prime function appears to be to protect wealthy neighbourhoods from drunks, beggars, and homeless people – as explored in this recent Internet of Business report).
So what can be done to minimise the risk of real societal damage from connected technologies and the IoT?
Having broad, non-specialist social conversations about data use, collection, effects, and socioeconomic dimensions is essential to help the public understand the technological changes around them, says the report. Privacy norms must evolve alongside connected devices – and discussion about them is essential.
Human-Computer Interaction (HCI) and Identity Management (IDM) are two of the most promising fields for privacy strategies within the IoT, explains the report. “A useful design strategy is the ‘least surprise principle’ – don’t surprise users with data collection and use practices.
“Analyse the informational norms of personal data collection, use, and sharing in given contexts. Give people the ability to do fine-grained selective sharing of the data collected by IoT devices.”
There are three major headings for emerging frameworks and strategies that address the need for greater IoT privacy, says the report: user control and management, notification, and governance.
User control and management strategies
The Internet of Things Privacy Forum recommends that, before collecting data, organisations should:
- Perform data minimisation. Only collect data for current, needed uses; do not collect it for future, as-yet-unknown uses
- Build in ‘Do Not Collect’ switches, such as mute buttons or software toggles
- Build in wake words and manual activation for data collection – as opposed to always-on operation
- Perform privacy impact assessments to understand holistically what your company is collecting, and what would happen if there was a breach.
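The off-by-default and data-minimisation advice above can be sketched in a few lines of code. The following is an illustrative Python sketch, not anything prescribed by the report; the `SensorGate` class and its field names are hypothetical, standing in for whatever gating layer sits between a sensor and a data sink:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class SensorGate:
    """Hypothetical gate between a sensor and any data sink.

    Collection is OFF by default and must be explicitly enabled,
    mirroring the 'Do Not Collect' / manual-activation advice.
    """
    collecting: bool = False                          # software 'Do Not Collect' toggle
    allowed_fields: set = field(default_factory=set)  # data minimisation allow-list

    def enable(self) -> None:
        self.collecting = True

    def disable(self) -> None:
        self.collecting = False

    def filter(self, reading: dict) -> Optional[dict]:
        # Drop everything while the toggle is off; otherwise keep only
        # fields with a current, stated use (data minimisation).
        if not self.collecting:
            return None
        return {k: v for k, v in reading.items() if k in self.allowed_fields}

gate = SensorGate(allowed_fields={"temperature"})
print(gate.filter({"temperature": 21.5, "voice_sample": b"..."}))  # None: off by default
gate.enable()
print(gate.filter({"temperature": 21.5, "voice_sample": b"..."}))  # {'temperature': 21.5}
```

The design choice worth noting is that the default state collects nothing: a user who never opts in is never recorded, and even after opt-in only the allow-listed fields survive.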
After collection, it recommends that organisations should:
- Make it easy for people to delete their data
- Make it equally easy to withdraw consent
- Encrypt everything to the maximum degree possible
- Ensure that IoT data is not published on social media or indexed by search engines by default – users must be able to review and decide before publishing
- Retain raw data for the shortest time possible.
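The deletion and retention recommendations lend themselves to a similar sketch. Again, this is a hypothetical Python illustration – the `MinimalRetentionStore` class and its method names are invented for this example – showing short-lived raw data and one-call, per-user erasure:

```python
import time

class MinimalRetentionStore:
    """Illustrative store that keeps raw readings only for a short TTL,
    with per-user deletion to make erasure and consent withdrawal easy."""

    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._rows = []  # list of (timestamp, user_id, payload)

    def add(self, user_id, payload, now=None):
        self._rows.append(((now if now is not None else time.time()), user_id, payload))

    def purge_expired(self, now=None):
        # Raw data exists for the shortest time possible: drop anything past the TTL.
        cutoff = (now if now is not None else time.time()) - self.ttl
        self._rows = [r for r in self._rows if r[0] >= cutoff]

    def delete_user(self, user_id):
        # A single call removes everything held about a user.
        self._rows = [r for r in self._rows if r[1] != user_id]

store = MinimalRetentionStore(ttl_seconds=60)
store.add("alice", {"heart_rate": 72}, now=0)
store.add("bob", {"heart_rate": 80}, now=100)
store.purge_expired(now=120)   # alice's raw reading has aged out
store.delete_user("bob")       # bob withdraws consent
print(len(store._rows))        # 0
```

A real system would also need to propagate deletion to backups and downstream processors; the point of the sketch is only that erasure and expiry are first-class operations, not afterthoughts.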
In terms of identity management, the organisation recommends adopting a range of design strategies, such as:
- Unlinkability: Build systems that can sever the links between users’ activities on different devices or apps
- Unobservability: Build or use intermediary systems that are blind to user activity
- Give people the option for pseudonymous or anonymous guest use
- Design systems that reflect the sensitivity of being able to identify people
- Use selective sharing as a design principle
- Design for fine-grained control of data use and sharing
- Make it easy to ‘Share with this person, but not that person’
- Create dashboards for users to see, understand, and control the data that’s been collected about them
- Design easy ways to separate different people’s use of devices from one another.
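Unlinkability and pseudonymous use, in particular, have a simple and well-established construction: derive a different stable identifier per context from a per-context secret key. The sketch below uses Python’s standard `hmac` module; the function name and key handling are illustrative assumptions, not the report’s design:

```python
import hashlib
import hmac
import secrets

def pseudonym(user_id: str, context_key: bytes) -> str:
    """Derive a stable per-context pseudonym for a user.

    The same user receives different, unlinkable identifiers in different
    contexts (devices, apps), because each context holds its own secret key.
    Without the keys, the pseudonyms cannot be joined back together.
    """
    return hmac.new(context_key, user_id.encode(), hashlib.sha256).hexdigest()[:16]

# Each device/context gets its own secret key.
key_thermostat = secrets.token_bytes(32)
key_speaker = secrets.token_bytes(32)

a = pseudonym("alice", key_thermostat)
b = pseudonym("alice", key_speaker)
print(a != b)                                    # True: the two device records cannot be linked
print(a == pseudonym("alice", key_thermostat))   # True: stable within one context
```

Stability within a context keeps the device useful (preferences still work), while the per-context keys are what deliver the unlinkability the report asks for.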
In terms of notification strategies to better inform users of their privacy options and rights, organisations should test people’s comprehension of privacy policies, encourage users to think about their personal settings, and consider designing IoT devices to advertise their presence when users enter a space, it says.
And when it comes to governance, the report urges the creation of baseline, omnibus privacy laws for the US, analogous to the EU’s GDPR – rules that are being adopted informally by some US technology companies, such as Apple, Salesforce.com, SugarCRM, and Microsoft.
Moreover, organisations should consider regulations that restrict certain uses of IoT data, and expand the US definition of personally identifiable information to include sensor data.
Internet of Business says
There are numerous benefits from the combination of the IoT, AI, wearable devices, robotics, 3D printing, autonomous vehicles, and more – benefits to our health and well-being, the sustainable use of energy resources, and more besides.
However, trust in data-gathering organisations and technologies has hit new lows in the wake of the Facebook/Cambridge Analytica scandal and related data breaches, hacks, and social engineering campaigns by hostile nations, troll farms, and fake social media accounts.
Meanwhile, we have also grown used to apocalyptic reports in the media about the potential impact of AI, robotics, and automation on employment, human rights, and more.
Yet this report is not the apocalyptic document it might appear to be, but rather a well-researched, reasonable, detailed, and – above all – evidenced plea for common sense, transparency, and clarity as we enter an era in which we risk giving up as much as we gain as a society, thanks to connected technologies.
One of the challenges is that we have also become accustomed to the greatest information resource the world has ever known, the internet and World Wide Web, being used for the basest of functions: flogging advertising at every turn, sometimes in the most persistent and irritating fashion.
Meanwhile, some organisations are rushing to implement AI, machine learning, automation, and more, for tactical, short-term reasons: to slash costs and ramp up profits for shareholders, often with scant consideration of the long-term strategic impacts for their businesses, the rights of consumers, or the potential impact on society or quality of life.
Meanwhile, the rise of facial recognition systems, emotion and sentiment recognition/analysis, and systems that constantly listen to us in our own homes – devices that are primed to sell us products – means that our environments are becoming noisier, more intrusive, and – perhaps – freighted with risk if supposedly intelligent systems are wrong, biased, or make incorrect assumptions based on poor training data or simple technology failure.
One thing is clear: the report comes in the wake of many others on the lack of security in popular smart home devices, and a recent Which?/Consumers Association report on the “staggering” and largely invisible levels of what it calls “corporate surveillance” via smart devices. One example was a smart TV that, within a few minutes, sent data to 700 different IP addresses.
These are precisely the scenarios this report is designed to make visible: where is that data going, and why? What is being done with it, on whose authority, and under what regulations? And has the consumer any idea it is happening? Have they given their consent to it, or been offered any meaningful opportunity to make an informed decision to opt out or switch off?
Get the balance wrong, and people may indeed begin to switch off en masse from the IoT, or come to see it as an enemy, rather than a friend that wants to make the world safer, greener, healthier, and more sustainable.
We urge all organisations that have an interest in the IoT and connected society to download and read this report, and consider its findings carefully.
- Read more: UK, US authorities urged to stop police use of facial recognition
- Read more: Security: Why you should worry about unsecured IoT devices – Mozilla
- Read more: Alexa beware! New smart home tests reveal serious privacy flaws
- Read more: Amazon booms as high street withers. Next up: robots and insurance