
21 November 2018 | 3 min read

The ethics of AI part 1: An introduction

Nathan Burns
Manager | Technology Transformation

The capabilities of artificial intelligence (AI) – the simulation of intelligent behaviours in computers, usually by an algorithm or model trained on data through machine learning – are expanding and maturing at remarkable speed. Apple’s Siri was released in October 2011 as a virtual assistant that helped people complete simple tasks just by asking. Seven years on, Google showcased Duplex, an AI technology capable of accomplishing real-world tasks over the phone – for example, making a restaurant booking or a hair appointment – all while sounding strikingly (perhaps deceptively) similar to a human. The ethical questions raised by a demo of technology that appeared human later led Google to announce that Duplex would in future identify itself as automated, rather than risk deceiving the person on the other end of the line.

Google’s Duplex demo is just one example of AI capabilities sparking debate about the ethical use of technology. With data and algorithms playing an increasingly pivotal role in how organisations operate – underpinning an ever greater breadth and depth of products and services – scrutinising the ethical use of technology will be fundamental to ensuring that AI advances successfully and fairly. Credit scoring, job applications, and criminal justice are all areas where models are already in use and further AI applications are being explored. These are areas where decision-making can significantly affect people’s lives – far more so than, say, a hair appointment booked on the wrong day of the week.

AI decisions are generally held to a higher standard than those made by a human: an AI cannot justify or explain itself in the way a human can (regardless of whether a particular human decision is fair or accurate). AI is also criticised for its potential to reinforce existing biases learned from historically biased data – though, by the same token, an algorithm making a series of unbiased decisions could change outcomes for the better. Working out how to identify bias hidden in data and algorithms is one of the central challenges, given the complex, and sometimes opaque, nature of how machine learning works.
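To make this concrete, here is a minimal sketch of one of the simplest checks an organisation might run over historical decision data: comparing outcome rates across groups. The dataset and column names ("group", "approved") are hypothetical, and real-world bias detection is considerably more involved – this illustrates only the idea of surfacing a disparity, not a complete audit.

```python
# A minimal sketch of a simple bias check on historical decision data.
# The data and column names below are hypothetical illustrations.
import pandas as pd

# Hypothetical historical lending decisions
data = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "A"],
    "approved": [1,   1,   0,   0,   0,   1,   0,   1],
})

# Approval rate per group
rates = data.groupby("group")["approved"].mean()
print(rates)

# Disparate impact ratio: the "four-fifths rule" commonly flags
# a ratio below 0.8 as warranting closer investigation
ratio = rates.min() / rates.max()
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Potential adverse impact - investigate further")
```

Even a crude check like this can surface a disparity worth investigating, but it says nothing about why the disparity exists or whether it is justified – questions that require far deeper analysis of the data and the model.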

The availability of data has been a key driver in the growth of AI, and organisations need to consider to what extent they will make use of it. Should, for example, a financial services organisation consider the creditworthiness of an individual’s social network as part of a credit application? Should an energy supplier assess the lifestyle risk inferred from a person’s phone and app activity when offering an energy supply contract? Such data may be available and fall outside existing regulation, but organisations will still need to decide whether using it is ethical.

In the next parts of this blog series we will explore the UK government’s AI ethics strategy and initiatives, look at how the private sector is approaching AI ethics, and wrap up with recommendations for how organisations can start to tackle this challenge.