Akshaya Asokan (asokan_akshaya)
December 13, 2023
Michelle Donelan, secretary of state for science, innovation and technology, said on Wednesday that the UK government is in no hurry to legislate on artificial intelligence, warning that a strict regulatory approach to AI risks stifling innovation in this emerging economic sector.
Donelan testified Wednesday before the House of Commons Science, Innovation and Technology Committee.
In a letter in August, MPs on the committee expressed concern that the government was not prioritizing AI regulation, saying that the UK's slow response in regulating the technology was undermining its status "as a center for AI research" (see: UK MPs Call for Rapid Introduction of AI Policies).
The European Union's AI Act has heightened concerns among committee members, who warned that it could be "difficult" for the UK to diverge from European regulation, citing the global reach of the General Data Protection Regulation as an example.
At Wednesday's hearing, Stephen Metcalfe, MP for South Basildon and East Thurrock, raised concerns about whether EU regulations give the trading bloc any advantage over the UK.
Donelan, who heads the Department for Science, Innovation and Technology (DSIT), said the department is in no hurry to legislate on the technology. She said the government's goal is to first assess the risks posed by the technology before regulating it.
"The downside of legislation is that it takes too long, because the technology is evolving so fast," Donelan said. "We're not saying we'll never regulate AI. Rather, the point is that we don't want to rush and accidentally stifle innovation."
Unlike the EU's equivalent AI safety body, which is not likely to be operational for another two years, the UK AI Safety Institute, announced on the sidelines of the UK AI Safety Summit, has already started work and is currently in a position to evaluate models, Donelan said.
She also highlighted voluntary commitments by top AI companies to audit their algorithms before releasing them to the market, a move she said would help the UK oversee the technology in the absence of legislation.
The government's AI policy, published in March, directs the national data, competition, healthcare, media and financial regulators to monitor AI in their respective jurisdictions, resulting in independent oversight by each of these agencies.
Last week, the UK Competition and Markets Authority announced an investigation into Microsoft's stake in ChatGPT maker OpenAI, and the Information Commissioner's Office fined Clearview AI £7.5 million for privacy breaches.
Some experts have previously said that this approach risks creating regulatory fragmentation and duplication across different sectors (see: UK AI Leadership Targets 'Unrealistic,' Experts Warn).
When committee chair Greg Clark asked how the government intended to tackle potential policy fragmentation, Donelan said the department is working to establish a central function within DSIT to coordinate regulators' oversight of AI.
"One of its key features is horizon scanning, which will help regulators identify some of the gaps in policy implementation and support their operations," she said.
In November, the UK's National Cyber Security Centre warned that attackers are likely to use advances in artificial intelligence to disrupt the UK general election expected in 2025 (see: UK NCSC Highlights Risks to Critical Infrastructure).
Chris Clarkson, MP for Heywood and Middleton, asked what steps DSIT has taken to combat AI-generated deepfakes that threaten election security. Donelan said the government is working with allies and tech companies, including social media platforms, to develop guidelines for watermarking AI content and identifying AI-generated material.
Donelan said: "Are we hopeful that by the next general election we will have a robust mechanism in place to be able to tackle these issues? Yes, we are."