
12 December 2018 | 3 min read

The ethics of AI part 4: How to tackle it

Nathan Burns
Manager | Technology Transformation

Rosanna Saffell
Manager | Banking

In parts 1, 2 and 3 of this blog series, we covered some of the ethical challenges that both government and the private sector need to consider when using artificial intelligence in decision making.

Both algorithms and data can be biased, so AI needs a sufficient level of control and governance to ensure it is used ethically:

  • AI needs to be auditable – the context and justification for specific decisions need to be determinable and interpretable, even when the underlying models are complex or opaque
  • AI needs human oversight – this doesn’t mean a human is needed for every decision (i.e. ‘human-in-the-loop’), but that there are appropriate levels of governance and control with increasing levels of human involvement depending on the risk and gravity of decisions made – model governance frameworks applied in banking can provide a reasonable starting point
  • The data that an AI relies on needs to be proven, of high quality and appropriately sampled (to prevent selection bias), and it needs to account for historical human bias so that automated decision making does not perpetuate it – see the sketch after this list
  • The design of an AI solution needs to consider what kinds of data (particularly individual, non-personal data) it will use and which it will exclude; just because data can be acquired doesn’t mean it should be exploited for the benefit of an organisation or other third parties.
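As a concrete illustration of the data checks above, here is a minimal sketch in Python using pandas. The file name, column names ("gender", "approved"), population figures and tolerance threshold are all hypothetical assumptions for illustration, not a prescribed method:

```python
import pandas as pd

# Hypothetical training data for a lending decision model; the file name
# and columns ("gender", "approved") are illustrative assumptions.
df = pd.read_csv("loan_training_data.csv")

# 1. Selection bias: compare each group's share of the sample against its
#    known share of the relevant population (placeholder figures here).
population_share = {"female": 0.51, "male": 0.49}
sample_share = df["gender"].value_counts(normalize=True)
for group, expected in population_share.items():
    observed = sample_share.get(group, 0.0)
    if abs(observed - expected) > 0.05:  # tolerance is a design choice
        print(f"Possible selection bias: {group} is {observed:.0%} of the "
              f"sample vs {expected:.0%} of the population")

# 2. Historical bias: sharply different past outcomes by group suggest a
#    human bias the model would learn and repeat if left uncorrected.
print(df.groupby("gender")["approved"].mean())
```

Checks like these are most effective when built into the data pipeline itself, so that every retraining run repeats them rather than relying on a one-off review.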

A few suggestions to help address the ethical challenges of AI:

  • Governments and commerce should collaborate to develop a framework for explaining processes, services and decisions delivered by AI, to improve transparency and accountability. This should be part of the design process, incorporate a level of documentation in plain language and require the use of tools like Skater (or similar) to provide technical transparency – a sketch of this kind of analysis follows this list
  • Organisations could voluntarily undertake external, independent audits of AI solutions and the underlying data to ensure both operate fairly and ethically while protecting relevant IP. This could include assurance of the design and development process, analysis of the machine learning model and its training, tagging or watermarking data according to its level of quality and bias, and analysis of result accuracy from an ethical perspective
  • The behaviour of an AI should not be left solely to the software engineers and data scientists focused on creating AI capabilities. This does not necessarily mean a ‘Head of Ethics’ or similar taking accountability for AI decisions; rather, it means embedding a rounded and diverse set of viewpoints in the design process of any AI solution
  • Ethical considerations should be baked into the design process and methodology for AI solutions, providing a formal opportunity for discussion and challenge at an appropriately early stage – this commitment should be made visible at an organisational level, e.g. through pledges, public commitments or the adoption of independent standards.
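To make the ‘technical transparency’ suggestion above concrete, here is a minimal sketch using scikit-learn’s built-in permutation importance, a model-agnostic technique comparable to what tools like Skater offer. The model and dataset are illustrative choices only:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Train an illustrative "black box" model on a public dataset.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance shuffles one feature at a time and measures the
# drop in held-out score, revealing which inputs actually drive decisions
# without requiring access to the model's internals.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
top = sorted(zip(X.columns, result.importances_mean),
             key=lambda pair: -pair[1])[:5]
for name, score in top:
    print(f"{name}: {score:.3f}")
```

Output like this can feed the plain-language documentation suggested above: it gives auditors, and the people affected by decisions, a ranked and reproducible view of what a model actually relies on.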

Finally, one of the most influential solutions to ethical AI is culture. While less tangible than a controls framework or a process, culture shapes how people behave, and it has proved highly influential in recent challenges to the use of technology, for example the open letters from staff at companies like Microsoft and Google to their respective leadership teams. The right culture around data and algorithms improves the likelihood that an AI solution will be fair by design from the start, rather than fair only because regulation enforces it.