A panel of independent reviewers, commissioned by DeepMind Health itself, has warned that DeepMind Health could exert “excessive monopoly power” in the healthcare and medical data space.
DeepMind Health was founded in 2016, after DeepMind, which had been acquired by Google in 2014, began working with the Royal Free London NHS Foundation Trust.
At that time, the company revealed that it had commissioned a panel of independent reviewers to scrutinise its work each year, with a mind “to question not only what we are doing, but how and why we are doing it”.
However, DeepMind faced controversy in 2016 when it signed a deal with the NHS to obtain patient data to test its Streams medical analytics app – a move that was subsequently deemed to be illegal.
Now, the independent reviewers’ 2018 report raises new concerns about DeepMind Health, including the warning that it could become a powerful monopoly.
In his foreword to the report, outgoing chair Dr Julian Huppert says:
We have been clear from the outset that ‘good enough’ is not good enough for a company with such a close relationship to Google, a company which already reaches deep into all our lives. The issues of privacy in a digital age are, if anything, of greater concern now than they were a year ago, and the public’s view of the tech giants has shifted substantially.
Dr Huppert references the recent Cambridge Analytica case and its misuse of vast amounts of Facebook user data. Those who wish to operate in the arena of healthcare data, he explains, should be held to much higher standards.
DeepMind Health check
DeepMind Health has pioneered research in AI diagnostics, including in the treatment of eye disease at Moorfields hospital. It also developed Streams, the mobile app that helps clinicians identify and manage patients with acute kidney injury (AKI), who are at risk of deterioration.
Despite these and other treatment advances, the company needs to do more to address worries over how it manages data and its business agreements, according to the report.
As well as addressing the areas of concern highlighted in last year’s report, the 2018 review investigates the business models of DeepMind Health and its relationship with parent company Alphabet, along with the human factors involved in its work, and evidence of clinical utility.
The independent reviewers have laid out a set of expectations for how an organisation such as DeepMind Health should operate, introducing 12 ethical principles against which to measure the company moving forward. These include:
- Benefit to data providers
- Public, patient and practitioner engagement
- Design for safety and utility
- A model employer
- Legal and ethical
- Protecting privacy
- Reasonable profit
The move follows Google’s release last week of a set of guidelines and principles for ethical AI development, following widespread criticism of its involvement in defence contracts, which led employees and academics to accuse the company of pursuing a path that risked weaponising its technology.
Big tech ambitions
The most glaring concern is DeepMind Health’s apparent future monopoly in the space. The report says:
The tide of public opinion has turned strongly against the tech giants. They are seen as monopolies that do not play their fair part in society, whether it is paying enough taxes or keeping harmful content off their platforms. Their motives are now regarded with increasing suspicion.
DeepMind’s links with Google bring it under increased scrutiny. When considered alongside the personal nature of the data it handles, and its value to others, the question ‘Where are they making their money?’ is crucial.
The report reasons that, “even apart from its connections with Alphabet, [DeepMind Health] could find itself in a position of being able to exert excessive monopoly power”.
Health institutions could theoretically become locked into DeepMind Health services, even when continuing the relationship is no longer financially or clinically sensible. In response to these concerns, the company has committed itself to the interoperability of its systems and the use of the Fast Healthcare Interoperability Resources (FHIR) open API.
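Interoperability here means exchanging clinical data as standard FHIR resources rather than in proprietary formats, so a hospital could in principle swap suppliers without re-engineering its records. As an illustration only (these are not DeepMind's actual payloads), a serum-creatinine reading — the kind of blood-test value an AKI alerting tool like Streams monitors — might be exchanged as a FHIR Observation resource like this:

```python
import json

# A minimal FHIR-style Observation resource with illustrative values.
# LOINC code 2160-0 identifies serum creatinine, a key marker in
# detecting acute kidney injury (AKI).
observation_json = """
{
  "resourceType": "Observation",
  "status": "final",
  "code": {
    "coding": [{
      "system": "http://loinc.org",
      "code": "2160-0",
      "display": "Creatinine [Mass/volume] in Serum or Plasma"
    }]
  },
  "valueQuantity": {
    "value": 1.9,
    "unit": "mg/dL",
    "system": "http://unitsofmeasure.org",
    "code": "mg/dL"
  }
}
"""

# Any FHIR-aware system can parse the same structure, regardless of vendor.
obs = json.loads(observation_json)
coding = obs["code"]["coding"][0]
print(coding["code"], obs["valueQuantity"]["value"], obs["valueQuantity"]["unit"])
```

Because the resource types, codes, and units are defined by the open standard rather than by any one supplier, a rival system can read the same records — which is precisely what weakens the lock-in the reviewers worry about.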
However, DeepMind Health’s work also faces questions about scalability, raised by a recent evaluation of the NHS Innovation Accelerator. The level of service disruption, the short- and long-term payback (both clinical and financial) against the investment, and the number of people required to implement DeepMind technology are all practical hurdles that the company needs to overcome.
Internet of Business says
Commissioning a yearly independent report with extensive access to your business is an unusual and brave arrangement for a commercial company (the report notes just one comparable example: the Yoti Guardian Council). The panel members are also not required to sign non-disclosure agreements and are able to commission external analysis and opinion.
Yet DeepMind Health is an unusual company in a sensitive position. It is in its own best interests to foster a sense of openness and trustworthiness in the public eye.
The clinical breakthroughs enabled by DeepMind Health and its partners are clear to see, and patient data is hugely powerful and vital to exploring the full potential of AI in healthcare. Nonetheless, DeepMind Health needs to be transparent about the way it does business and how it handles patient data.
While the company hasn’t shown any clear indications of planning to sell data, or use it to tailor advertising (its website says “data will never be connected to Google accounts or services”), the public will be left wondering just how it plans to make money, if indeed it plans to do so in the long term.
Either way, DeepMind needs to communicate to the public exactly where it is heading. In the post-Cambridge Analytica world, big tech companies can no longer get away with hoarding data behind mysterious or vague ambitions.
With the power it wields, DeepMind must be held to the highest possible standards.