Who’s zooming? Fraudsters are using AI for everything from composing emails in better English to scraping company videos to generate “deepfake” video versions of staff. Some experts see a role for regulation in combating fraud and promoting broader adoption of the technology. Photo / Getty Images
A survey commissioned by InternetNZ found that 72% of New Zealanders are concerned that, without regulation, AI will be used for malicious purposes.
Malicious use is already happening. In November, Zuru boss Nick Mowbray raised the alarm after his chief financial officer received a Microsoft Teams video call from a “Mowbray” who looked exactly like the real person, down to the clothes he wore, along with a request to transfer funds. In fact, it was a deepfake video, with the fake “Mowbray” claiming audio issues and communicating by text instead, which was enough of a red flag for the CFO.
In the UK, banks are bracing for a “wave of deepfake fraud” involving voice cloning. A financial worker in Hong Kong was tricked into paying £20 million to fraudsters through a “deepfake” video call using an AI copy of a colleague, police have said.
On the regulatory front, the European Union agreed a comprehensive artificial intelligence law in December that requires risk-based assessments of new AI technologies, while in the United States President Joe Biden issued a wide-ranging executive order last October establishing new standards to maximise the benefits of AI while guarding against its risks. These include requirements for Big Tech companies to share the results of their AI safety tests, various privacy protections, and directives for federal agencies to create digital “watermarks” to verify the authenticity of government and private-sector content. Australia has launched a public consultation on how AI should be regulated.
The Herald asked Technology Minister and Attorney-General Judith Collins whether AI regulation is being considered here.
“This Government is committed to keeping New Zealand up to date on AI. We have a bipartisan AI caucus, which will meet soon. As a first step, the public will hear more in the coming months and be able to give feedback on the AI framework we are developing to support responsible and trustworthy AI innovation in New Zealand,” Collins said.
“There are no additional regulations at this stage.”
Don’t go it alone
“It’s natural for New Zealanders to be concerned about such powerful and rapidly evolving technology,” Brainbox Institute Director Tom Barraclough said.
“The regulatory settings we currently have in place will have a significant impact on whether AI does more good than harm.
“By working with close partners like the EU and participating in international forums, New Zealand can expand its collective influence, refine domestic priorities and learn from implementation in other regions.
“Everyone can respond more effectively when there is clarity and transparency about what actions government agencies are taking or planning.”
New Zealand has no AI law
“New Zealand currently has no AI-specific legislation,” Professor Juliet Gerrard, the prime minister’s chief scientific adviser, said in July last year.
“The only AI-specific policy is the Algorithm Charter, which most government agencies have signed up to.”
Although most government agencies have signed up to the charter, their day-to-day practical approaches vary widely, from the Ministry of Business, Innovation and Employment (MBIE) banning its staff from using ChatGPT and similar tools, to others actively experimenting with them under various combinations of internal guidelines.
Most use laws that already exist
Barraclough has been monitoring the development of AI since long before it became mainstream.
In 2019, he co-authored a New Zealand Law Foundation research paper on deepfakes, highlighting that New Zealand already has multiple laws and guidelines in place to address the risks. These include the Crimes Act, which mainly targets cases where deception is used for gain; the Harmful Digital Communications Act, which covers cases where content is used maliciously; and the Privacy Act, which covers situations where “even incorrect personal information is still personal information”.
He warned against ad hoc responses that endanger human rights and freedom of speech.
We need a deepfake porn bill
The exception is deepfake porn, which recently made global headlines after sexually explicit deepfake images of Taylor Swift spread online. With AI making deepfakes ever easier to create, the technology has even been used maliciously by high school students against their classmates.
“These extreme harms require a carefully planned and fit-for-purpose legal response, which New Zealand currently lacks. That response must include criminalisation,” Brainbox Institute fellow Vera Stewart wrote in February.
“Unfortunately, although New Zealand has several offences targeting image-based and communication-based harm, none of them adequately captures this emerging phenomenon.
“We cannot simply wait to see whether judges are willing to apply these ill-fitting existing offences in ways that are contrived and inconsistent with Parliament’s intent.
“To vindicate the interests of victims and deter the creation of this harmful content, the non-consensual distribution of deepfake pornography must be explicitly and comprehensively criminalised as a purpose-built offence.”
Toothless
Earlier, chief scientific adviser Gerrard gave a more general assessment, saying that although the 13 principles of the Privacy Act 2020 cover generative AI, “these may be difficult to enforce”: the maximum fine for breaching the act is $10,000, compared with a maximum fine in the EU of €20 million or 4% of an organisation’s revenue (the previous government also ignored then Privacy Commissioner John Edwards’ call for fines of up to $1 million).
Terminator
Gerrard also addressed fears of apocalyptic AI.
“There are also gaps created by new technology. For example, autonomous lethal weapons are not covered by current New Zealand legislation,” she said.
Australia 12th, New Zealand 42nd
InternetNZ chief executive Vivien Maidaborn said the internet was evolving at a pace that was difficult to keep up with, and would continue to present new challenges such as AI.
“Government needs to think about what guidelines, policies and laws are needed to keep us at the cutting edge.”
“My big concern is that we are not identifying how this will fundamentally change our society, and getting ahead of that change.”
Microsoft, Amazon, Google and other tech companies in the AI space have told the Herald they favour regulation that complements their own “guardrails”.
Maidaborn said New Zealand was ranked 42nd in the world in the 2023 Government AI Readiness Index, well behind Australia in 12th place, a position supported by its creation of an A$1 billion Critical Technology Fund to accelerate the adoption of artificial intelligence and other cutting-edge technologies.
Discussions under way
In MBIE’s 53-page BIM (briefing to the incoming minister) for Collins, there were two mentions of AI.
“MBIE is in discussions with US officials to develop strategic cooperation on Antarctica, artificial intelligence and quantum technology,” one said. “These discussions are tied to the US-New Zealand Strategic and Technical Dialogue, which focuses on national security and defence research and development.”
The other follows a paragraph on liberalising genetic engineering regulation, in which MBIE canvasses legislation to support “high-growth, high-productivity” sectors, including continued engagement with the aerospace and medical technology sectors. Artificial intelligence, it says, is another area where many other developed countries are working to put governance and regulatory regimes in place, to assure consumers and citizens that the technology is being used responsibly, and to give businesses a visible “permission space” and general guidance for expanding and developing the technology and its applications.
“Governments usually take their time with this kind of thing,” Mowbray told the Herald this morning.
Meanwhile, his company has taken several steps to guard against future deepfake scams, from the CFO giving a candid explanation of the fake video call to the entire staff, to new security measures.
The Brainbox Institute says there are some “feasible adjustments”, including security questions staff can use to authenticate themselves.
Chris Keall is an Auckland-based member of the Herald’s business team. He joined the Herald in 2018 and is the technology editor and a senior business writer.