The latest generation of artificial intelligence (AI), such as ChatGPT, will revolutionise the way we live and work. AI technologies could significantly improve education, healthcare, transport and welfare. But there are downsides, too: jobs automated out of existence, surveillance abuses, and discrimination, including in healthcare and policing.
There's general agreement that AI needs to be regulated, given its awesome potential for good and harm. The EU has proposed one approach, which classifies AI systems by the level of risk they pose. The UK is proposing a different, pro-business approach.
This year, the UK government published a white paper (a policy document setting out plans for future legislation) unveiling how it intends to regulate AI, with an emphasis on flexibility to avoid stifling innovation. The document favours voluntary compliance, with five principles meant to tackle AI risks.
Strict enforcement of these principles by regulators could be added later if it's required. But is such an approach too lenient given the risks?
The UK approach differs from the EU's risk-based regulation. The EU's proposed AI Act prohibits certain uses of AI, such as live facial recognition in public spaces, where people shown on a camera feed are compared against police "watch lists".
I believe the UK's approach better balances AI's risks and benefits, fostering innovation that benefits the economy and society. However, critical challenges need to be addressed.
The UK approach to AI regulation has three crucial components. First, it relies on existing legal frameworks such as privacy, data protection and product liability laws, rather than implementing new AI-centred legislation.
Second, five general principles - each consisting of several components - would be applied by regulators in conjunction with existing laws. These principles are (1) "safety, security and robustness", (2) "appropriate transparency and explainability", (3) "fairness", (4) "accountability and governance", and (5) "contestability and redress".
During initial implementation, regulators would not be legally required to enforce the principles. A statute imposing these obligations would be enacted later, if considered necessary. Organisations would therefore be expected to comply with the principles voluntarily in the first instance.
Third, regulators could adapt the five principles to the sectors they oversee, with support from a central coordinating body. There would therefore be no single enforcement authority.
The UK's regime is promising for three reasons. First, it promises to use evidence about AI in its correct context, rather than applying an example from one area to another inappropriately.
Second, it is designed so that rules can be easily tailored to the requirements of AI used in different areas of everyday life. Third, its decentralised approach avoids a single point of failure: were one regulatory organisation to underperform, AI use would not be affected across the board.
Let's look at how it would use evidence about AI. As AI's risks are yet to be fully understood, predicting future problems involves guesswork. To fill the gap, evidence with no relevance to a specific use of AI can be appropriated to justify drastic and inappropriate regulatory solutions.
For instance, research showing that some facial analysis systems misclassify gender has been cited in support of a ban on law enforcement use of facial recognition technology in the UK. However, the two applications are quite different, and problems with gender classification do not imply a similar issue with facial recognition in law enforcement.
Another advantage of the UK approach is its adaptability. Potential risks can be difficult to predict, particularly for AI that is repurposed beyond what its developers foresaw, and for machine learning systems whose behaviour changes as their performance improves over time.
The framework allows regulators to quickly address risks as they arise, avoiding lengthy debates in parliament. Responsibilities would be spread between different organisations. Centralising AI oversight under a single national regulator could lead to inefficient enforcement.
Regulators with expertise in specific areas such as transport, aviation, and financial markets are better suited to regulate the use of AI within their fields of interest.
This decentralised approach could limit the effects of corruption, of regulators becoming preoccupied with concerns other than the public interest, and of divergent approaches to enforcement. It also avoids a single point of enforcement failure.
Enforcement and coordination
Some businesses could resist voluntary standards, so, if and when regulators are granted enforcement powers, they should be able to issue fines. The public should also have the right to seek compensation for harms caused by AI systems.
Enforcement needn't undermine flexibility. Regulators can still tighten or loosen standards as required. However, the UK framework could encounter difficulties where AI systems fall under the jurisdiction of multiple regulators, resulting in overlaps. For example, transport, insurance, and data protection authorities could all issue conflicting guidelines for self-driving cars.
To tackle this, the white paper suggests establishing a central body, which would ensure the harmonious implementation of guidance. It's vital to compel the different regulators to consult this organisation rather than leaving the decision up to them.
The UK approach shows promise for fostering innovation and addressing risks. But to strengthen the country's position as a leader in the area, the framework must be aligned with regulation elsewhere, especially the EU.
Fine-tuning the framework can enhance legal certainty for businesses and bolster public trust. It will also foster international confidence in the UK's system of regulation for this transformative technology.
Author: Asress Adimi Gikay - Senior Lecturer in AI, Disruptive Innovation and Law, Brunel University London