Wayne Cleghorn, technology, data and artificial intelligence partner at Excello, recently gave an interview to Computer Weekly assessing the effectiveness of the UK’s current regulatory framework around artificial intelligence and how it compares to that of the European Union.
The article says that “the UK’s approach depends on multiple regulators working together coherently”, a potential challenge for the future, as Wayne notes. He describes a reliance on disparate regulators as “an experiment in an uncontrolled environment”, given that the UK “has no known and proven model” for coordinating oversight of technologies evolving at this pace. The UK may also end up a ‘rule taker’ because “AI developed in the US, EU and China will already embed those jurisdictions’ laws and standards”.
Calling for more risk-based intervention by government and a longer-term, more nuanced strategy, Wayne says “some uses of AI – such as automated decisions determining liberty or critical healthcare outcomes – require a statutory definition of what is unacceptable. Leaving these decisions to voluntary principles or sector regulators risks long-term harm and undermines public trust.”
Read on for further thoughts from Wayne on this pressing topic.
Additional commentary
It is unclear whether the public will accept AI market participants being left alone to “set and mark their own homework”, especially in the most sensitive areas such as healthcare, education, finance, law enforcement and the courts. Doing so risks a loss of public trust and engagement.
The UK should engage actively, and in real time, with international discussions, and should consider targeted mandatory obligations for AI developers. At a minimum, the UK should set out in law what conduct, practices and uses of AI are unlawful. Relevant criteria can be found in the work of international organisations (such as the UN and the Council of Europe), other governments, international civil society, researchers and the wider global academic community.
The UK proposes a hands-off approach to legislating and regulating AI. A similar hands-off approach was adopted for social media and did not work; there are lessons there for AI.
Social media platforms now fully occupy the public and political space, providing news, commentary and live broadcasts of the most sensitive information without the rigours of broadcast and print media regulations. In recent years, there have been calls for social media regulation, especially in the areas of online child safety, disinformation, misinformation and the conduct and outcomes of elections (democratic participation). Meaningful and acceptable rules have proven much more difficult to establish because social media has matured and become entrenched: after-the-fact regulatory compliance is far harder to impose.
We simply do not know yet whether the EU AI Act is over-regulating low-risk AI systems. The full mechanisms of that law are not yet operational across the 27 EU member states. The EU AI Act (Regulation) approach is an experiment that needs more time before its outcomes can be fully assessed.
The UK cannot credibly maintain international leadership in AI safety and governance without a national omnibus AI law, enforced by courts and tribunals, setting standards in contracts and influencing commercial dealings.