Campaigners urge UK Government to scrap ‘disturbing’ use of AI


Campaign groups have urged the UK Government to scrap its ‘disturbing’ use of AI, claiming ministers are ‘behind the curve’ on regulating the technology’s use by public authorities.

The groups, including Public Law Project and Big Brother Watch, warned that UK public bodies have been using AI in ‘disturbing’ and ‘secretive’ ways for ‘years’.

It comes after Rishi Sunak announced plans this week to make Britain the home of ‘global AI safety regulation’ amid rising concerns over its use in the private sector.

But the technology is already used in opaque ways across Britain in areas such as policing, housing, welfare, and immigration, according to the groups.

In one case, South Wales Police’s ‘Orwellian’ use of AFR (Automated Facial Recognition) was found to have breached privacy rights, data protection laws and equality legislation.

It comes after Rishi Sunak announced plans this week to make Britain the home of ‘global AI safety regulation’ amid rising concerns over its use in the private sector. Pictured at London Tech Week on Monday

But the technology is already used in opaque ways across Britain in areas such as policing, housing, welfare, and immigration, according to the groups. Pictured: a facial recognition van at a protest in 2021

Shameem Ahmad, CEO of the Public Law Project (PLP), said: ‘AI is hugely powerful. ChatGPT has caught everyone’s attention, but public authorities have been using this tech for years and right now the Government is behind the curve on managing the risks.

‘We have seen public bodies using this tech in disturbing ways, from the use of discriminatory facial recognition by South Wales Police to the Department for Work and Pensions using AI to investigate benefits claimants.

‘Government use of AI can be secretive, it can backfire, and it can exacerbate existing inequalities, undermining fundamental rights.’

In a document addressed to cross-party MPs, thirty groups including Public Law Project and Big Brother Watch have now urged the Government to adopt stronger regulation and transparency around AI use by public authorities.

They call for ‘clear mechanisms for accountability’ and a ‘specialist regulator’, with a way for the public to seek redress when automated decision-making involving AI goes wrong.

Last year, the DWP was found to be trialling a machine-learning algorithm to predict whether universal credit claimants should receive benefits based on their perceived likelihood of committing fraud in the future.

The trial was ‘problematic’ and risked discriminating against people with genuine claims, according to PLP.

The group also said there was a lack of transparency around the DWP’s use of the technology, meaning it was not possible to tell if the system worked ‘reliably, lawfully or fairly’.

And in 2019, four out of five people identified by the Metropolitan Police’s AFR technology as potential suspects were innocent, according to a report.

Last year, the DWP was found to be trialling a machine algorithm to predict whether universal credit claimants should receive benefits based on their perceived likelihood of committing fraud in the future

And in 2019, four out of five people identified by the Metropolitan Police’s AFR technology as potential suspects were innocent, according to a report

Madeleine Stone, legal and policy officer at Big Brother Watch, told MailOnline: ‘The Government’s approach to AI is confused, vague and will do nothing to protect us from the growing threat the use of AI poses to our rights. 

‘Instead of taking the urgently needed action to protect us from out-of-control algorithms, the Government is seeking to scrap data rights and weaken safeguards.

‘The Government must outlaw the most dangerous uses of AI that have no place in a democracy, such as live facial recognition surveillance, and ensure the public have strong, legally enforceable rights when AI is being used to make decisions.’

In contrast, the European Union is moving to ban AFR in public spaces through its upcoming Artificial Intelligence Act.

It would ban public authorities from using AI to classify citizens’ behaviour and bring in strict limits on AI-powered facial recognition used by law enforcement.

Professor Fraser Sampson, the biometrics and surveillance camera commissioner, has also previously warned that the UK has the ‘scantest’ regulatory framework for public authorities using this kind of technology.

He said: ‘We may not be that far away from using an AI like ChatGPT to draft warrants or witness statements, yet we have the scantest regulatory framework under which those using the technology will be held to account.’

The Department for Science, Innovation and Technology was approached for comment.


