IBM Watson launches bias detection system for popular A.I. platforms

Cognitive services giant IBM has announced a new artificial intelligence (AI) ‘Trust and Transparency’ service, which it claims gives businesses greater insight into AI decision-making and bias.

The new Watson-based cloud service is designed to not only ‘open the black box’ of complex AI systems, but also to reinforce organisations’ trust in their own AI-based decisions – and data – by showing the workings.

In this way, IBM also seeks to reinforce its status as a trusted provider and service arbiter, even of others’ technologies.

IBM’s new Trust and Transparency capabilities, built on the IBM Cloud, work with a variety of popular machine learning and AI frameworks, including Watson itself, Google’s TensorFlow, Apache Spark MLlib, AWS SageMaker, and Microsoft’s Azure Machine Learning.

The cloud service can be programmed to monitor the unique “decision factors” of any business workflow, enabling it to be customised to the specific needs of each organisation, says IBM.

Importantly, it also exposes the decision-making process, and detects bias in AI models at runtime – as decisions are being made – capturing potentially unfair outcomes as they occur. Moreover, it can recommend data to add to the model to help mitigate any bias it has detected.

In addition, IBM Research will release into the open source community an AI bias detection and mitigation toolkit, which includes a suite of tools and educational material to encourage global collaboration in addressing bias in AI.
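To make the idea concrete, here is a minimal sketch of the kind of fairness check such bias-detection tools perform – a simple disparate-impact ratio comparing favourable-outcome rates between demographic groups. This is an illustration only; the function name, data, and the 0.8 threshold rule of thumb are assumptions for the example, not IBM’s actual API or method.

```python
# Illustrative sketch: a disparate-impact check, one common bias metric.
# All names and data here are hypothetical.

def disparate_impact(outcomes, groups, privileged):
    """Ratio of favourable-outcome rates: unprivileged / privileged.

    outcomes:   list of 1 (favourable) / 0 (unfavourable) decisions
    groups:     parallel list of group labels for each decision
    privileged: label of the privileged group
    A common rule of thumb flags ratios below 0.8 as potential bias.
    """
    priv = [o for o, g in zip(outcomes, groups) if g == privileged]
    unpriv = [o for o, g in zip(outcomes, groups) if g != privileged]
    return (sum(unpriv) / len(unpriv)) / (sum(priv) / len(priv))

# Hypothetical loan decisions for two demographic groups
outcomes = [1, 1, 1, 0, 1, 0, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

ratio = disparate_impact(outcomes, groups, privileged="A")
print(f"disparate impact: {ratio:.2f}")  # prints "disparate impact: 0.25"
```

Here group A receives favourable decisions 80 percent of the time against group B’s 20 percent, giving a ratio of 0.25 – well below the 0.8 threshold, so a monitoring tool would flag the model for review.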

“IBM led the industry in establishing trust and transparency principles for the development of new AI technologies,” said Beth Smith, general manager of Watson AI at IBM. “Now it’s time to translate principles into practice. We are giving new transparency and control to the businesses who use AI and face the most potential risk from any flawed decision-making.”

Strategic importance

Tackling bias is of strategic importance to IBM as it seeks to be a trusted provider of cognitive and data services. In June, the company announced that it would make public two datasets to be used as tools by the AI research community.

The first will be made up of one million annotated images, harvested from photography platform Flickr. The dataset will rely on Flickr’s geo-tags to balance the source material and reduce sample selection bias.

According to IBM, the current largest facial attribute dataset is made up of just 200,000 images.

IBM is also releasing an annotated dataset of up to 36,000 images that are equally distributed across skin tones, genders, and ages. The company hopes that this will help algorithm designers to tackle bias in their facial analysis systems. Ethnic and gender biases are known issues with some face recognition systems – as explored in this detailed analysis.

In a blog post outlining the steps the company will be taking this year, IBM Fellows Aleksandra Mojsilovic and John Smith highlighted the importance of training development teams – which tend to be dominated by young white men – to recognise how bias occurs and becomes problematic.

Internet of Business says

The question for most organisations is not whether an AI or machine learning system has been biased by deliberate design, but whether the training data has introduced unconscious, cultural, or historic biases into the system, effectively casting prejudices or assumptions of any kind into code.

Another challenge is confirmation bias, in which organisations either use or design systems to prove what they already believe, weighting the data towards pre-defined conclusions.

There are numerous other forms of cognitive bias, which often affect business decision-making without leaders being aware of them – one of them, ironically, is ‘pro innovation bias’. Some of the most common are explored in this graphic.

A valuable system, then, which is as much a gain for IBM as it is for its customers.

IBM’s latest developments come on the back of recent research by its own Institute for Business Value, which reveals that while 82 percent of enterprises are considering AI deployments, 60 percent fear liability issues and 63 percent lack the in-house talent to manage the technology with confidence.

Chris Middleton
Chris Middleton is former editor of Internet of Business, and now a key contributor to the title. He specialises in robotics, AI, the IoT, blockchain, and technology strategy. He is also former editor of Computing, Computer Business Review, and Professional Outsourcing, among others, and is a contributing editor to Diginomica, Computing, and Hack & Craft News. Over the years, he has also written for Computer Weekly, The Guardian, The Times, PC World, I-CIO, V3, The Inquirer, and Blockchain News, among many others. He is an acknowledged robotics expert who has appeared on BBC TV and radio, ITN, and Talk Radio, and is probably the only tech journalist in the UK to own a number of humanoid robots, which he hires out to events, exhibitions, universities, and schools. Chris has also chaired conferences on robotics, AI, IoT investment, digital marketing, blockchain, and space technologies, and has spoken at numerous other events.