We may be stumbling towards Brexit, but there is no denying that the UK Government is racing towards data and AI, which promise to be pervasive, with impacts and ramifications in health, economics, security and governance. AI and data is also one of the four grand challenges in the UK’s industrial strategy, with £1bn of public and private funding secured to date.
It is also a truth universally acknowledged that trust underpins a strong economy, and trust in data underpins a strong digital economy. But trust is slow to earn and quick to lose. The application of data-driven and AI-based technologies, not necessarily the technologies themselves, has resulted in Facebook’s Cambridge Analytica scandal and DeepMind’s Royal Free controversy, as well as claims of discriminatory algorithms. The recent SYZYGY Digital Insight Report found that 85% of the UK public is calling for stronger data and AI governance. But questions abound: what should this look like? Who should be responsible for it? And how can we identify both the positive and negative consequences of AI for customers?
In response to the recent Lords Select Committee report on AI, the government has set up an AI Council to shape UK AI strategy; the Government Office for AI, to drive coordination and implementation of AI in government; and a world-leading Centre for Data Ethics and Innovation, chaired by Roger Taylor (chair of Ofqual), to provide a set of norms, rules and structures governing how applications of data and AI should be used to benefit society.
So, what do we know about this centre so far?
- First things first, it is not another regulatory body and has no regulatory powers
- It is independent and will act as an advisor to government on data and AI governance issues
- It will represent the government in the international debate raging about data and AI
- It will act as a formal body that gathers industry, researchers (e.g. the Alan Turing Institute), regulators and the public together to solve specific issues.
And what we don’t know, but hope will be clarified by the results of the Public Consultation due in November:
- Its work priorities – it has outlined six areas of focus (targeting, fairness, transparency, liability, data access, and intellectual property and ownership), but it is unclear which priority projects the Centre should aim to deliver within two years
- Whether it will have a statutory footing, and therefore any authority
- What it means by promoting data and AI ‘best practice’ and how it plans to establish this
- What support sector-specific regulators (e.g. the FCA) will have in order to work through the impact of AI on their sectors
- How much input business will have versus regulators, academia and the public, and what the mechanisms of engagement will be
The UK Government is right to lead from the front. Maximising the benefits and minimising the risks for customers and society, to ensure the safe and equitable development of AI, is going to be a difficult balancing act. It is recognised that “ambiguous goals”, coupled with broad transparency requirements, have encouraged firms to prioritise consumer privacy, and some are advocating a similar approach in AI, where strong transparency requirements promote innovation yet force accountability. But this may require the new Centre for Data Ethics and Innovation to practise what it preaches: to be as transparent with the public and businesses about how it carries out its functions, and about the recommendations it makes to government and regulators, as is the expectation of AI applications themselves.