
05 December 2018

The ethics of AI part 3: The private sector, big tech and commercial advantage

James Bearne
Senior Consultant, Technology Transformation

“We weren’t expecting any of this when we created Twitter over 12 years ago, and we acknowledge the real world negative consequences of what happened and we take the full responsibility to fix it.”

Testifying before the United States House Committee on Energy and Commerce, Twitter's CEO was talking about the platform's algorithms and content monitoring. With hundreds of millions of Tweets posted every day, Twitter relies on machine learning algorithms to help organise, rank and filter content by relevance – as well as to detect and minimise certain types of abusive and manipulative behaviour on the platform. Where decisions about data and content are made by complex and opaque processes at such a scale, alarm bells about harmful consequences to society are entirely justified.

What are private organisations doing to take responsibility for the ethical and social impact of this type of technology?

For starters, Google has defined and published objectives for its use of AI applications. The key themes are that AI should:

  • Be used for social benefit
  • Avoid unfair bias
  • Be built and tested for safety
  • Be accountable to people
  • Incorporate privacy
  • Not be used for technologies likely to cause harm, facilitate injury, or contravene principles of international law or human rights.

Companies are also conducting research in the field. Amongst others, DeepMind has launched Ethics & Society, a new research unit, and Facebook is engaging the research community and collaborating with academia.

A key concern for AI is algorithmic bias and unfairness. If biased algorithms are used to automate critical decisions, there is a risk that those biases become automated at scale and more difficult to spot. As the use of AI expands, tools to detect bias and unfairness in machine learning models will form a vital part of the AI toolkit and help build a trustworthy data ecosystem. Microsoft is doing just this with a bias-detection tool described as a dashboard that software engineers can apply to trained AI models. Facebook has also been testing its own tool, Fairness Flow, using it to evaluate its jobs algorithm by analysing the diversity of the data used to train it.
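To make that more concrete, the sketch below shows one simple check of the kind such tooling automates: comparing the rate of positive model decisions across groups defined by a sensitive attribute (often called demographic parity). It is an illustrative assumption rather than the actual Fairness Flow or Microsoft tooling, and the function and example data are hypothetical.

```python
# Illustrative sketch only - not Fairness Flow or Microsoft's dashboard.
# Demographic parity check: compare the rate of positive model decisions
# across groups defined by a sensitive attribute.
import numpy as np

def demographic_parity_gap(predictions, sensitive_attribute):
    """Return each group's positive-decision rate and the largest gap between groups.

    predictions: array of 0/1 model decisions (e.g. "invite to interview").
    sensitive_attribute: array of group labels for each prediction.
    """
    predictions = np.asarray(predictions)
    sensitive_attribute = np.asarray(sensitive_attribute)

    rates = {
        group: predictions[sensitive_attribute == group].mean()
        for group in np.unique(sensitive_attribute)
    }
    gap = max(rates.values()) - min(rates.values())
    return rates, gap

# Hypothetical example: a model that selects group "a" far more often than group "b".
preds = [1, 1, 0, 1, 0, 0, 0, 1]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
rates, gap = demographic_parity_gap(preds, groups)
print(rates, gap)  # {'a': 0.75, 'b': 0.25} 0.5 - a large gap warrants investigation
```

A single metric like this is only a starting point; in practice such dashboards track several fairness measures and the composition of the training data itself.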

Taking proactive steps to foster an ethical culture around the use of AI will also help companies avoid an embarrassing backlash from employees, who can appear more determined to do the right thing than their employers. The potential monopoly power of big tech companies is a further risk; given that Google, Amazon and Facebook have deep knowledge of people's lives and the technology to exploit it, the organisations themselves have asked for regulation to, among other things, curb their own use of data.

Tackling these ethical challenges is important simply because it is the right thing to do. However, there is also an opportunity for commercial advantage for organisations with the vision to build a business model with the ethical and transparent use of AI at its core. Drawing a comparison from outside the tech industry, there are numerous impact investment firms building socially responsible funds – appealing to investors who believe that wealth at any cost is simply too one-dimensional.

In a field as complex as AI, responding to the ethical and social challenges is easier said than done. Organisations need to collaborate with a diverse set of voices from across the data and AI ecosystem – coupled with ongoing critical inspection of their practices. In our fourth and final post in this series, we will look at some potential methods for organisations to tackle these challenges.