The mass availability of generative AI technologies such as ChatGPT and Google Bard is a top concern for enterprise risk managers, according to a new study.
Generative AI became the second most cited risk in Gartner’s Q2 survey, making it into the top 10 for the first time.
Ran Xu, research director for Gartner’s risk and audit practice, said: “This reflects both the rapid growth in public awareness and use of generative AI tools and the breadth of potential use cases, and therefore potential risks, that these tools pose.”
Need for speed
As innovation in generative artificial intelligence continues at a breakneck pace, concerns about security and risk are becoming increasingly prominent.
Some lawmakers are calling for new rules and regulations for AI tools, and some technology companies and business leaders have proposed a moratorium on training AI systems while their safety is assessed.
However, many experts believe the genie is out of the bottle, so risk managers should focus on managing their exposure rather than hoping for a slowdown in technology adoption.
“Organizations need to act now to develop an enterprise-wide strategy for AI trust, risk, and security management (AI TRiSM).”
Avivah Litan, VP Analyst at Gartner, said: “Organizations need to act now to develop an enterprise-wide strategy for AI trust, risk, and security management (AI TRiSM).
“There is an urgent need for a new class of AI TRiSM tools to manage data and process flows between users and enterprises hosting generative AI-based models.”
Currently, no off-the-shelf tools on the market give users systematic privacy guarantees or effective content filtering for their interactions with these models, for example to filter out factual errors, hallucinations, copyrighted material, or confidential information.
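In the absence of such tools, some of this screening can be approximated in-house. The Python sketch below illustrates the idea under stated assumptions: a naive gate that checks prompts for obviously confidential material before they reach an externally hosted model. The function name and patterns are hypothetical, and a production filter would need far more than regular expressions.

```python
# Hypothetical sketch: a minimal screening layer between users and a hosted
# generative AI model. All names and patterns here are illustrative, not
# part of any real AI TRiSM product.
import re

# Naive patterns for material that should never leave the enterprise.
CONFIDENTIAL_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),           # US Social Security numbers
    re.compile(r"\b(?:\d[ -]*?){13,16}\b"),         # possible payment card numbers
    re.compile(r"(?i)\b(internal only|confidential|trade secret)\b"),
]

def screen_prompt(prompt: str) -> str:
    """Block or pass a prompt before it is sent to an external model."""
    for pattern in CONFIDENTIAL_PATTERNS:
        if pattern.search(prompt):
            raise ValueError("Prompt appears to contain confidential data; blocked.")
    return prompt

if __name__ == "__main__":
    print(screen_prompt("Summarize the public press release from Q2."))  # passes
    # screen_prompt("Draft a memo including SSN 123-45-6789")  # would raise
```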
Litan said AI developers must urgently work with policymakers, including possible new regulatory agencies, to establish policies and practices for the oversight and risk management of generative AI.
Four key risks to manage
When it comes to managing enterprise risk, there are four key themes that must be addressed. These are:
Intellectual property
It is important to educate company leaders about the need for care and transparency when using generative AI tools, so that intellectual property risks can be appropriately mitigated from both an input and an output perspective.
Xu explained: “The information that is input into a generative AI tool can become part of its training set, which means that sensitive or confidential information could end up in the outputs of other users.
“Furthermore, when you use the output from these tools, there is a real risk that you may inadvertently infringe the intellectual property rights of others.”
Data privacy
Generative AI tools may share your information with third parties, such as vendors and service providers, without prior notice.
This may violate privacy laws in many jurisdictions.
For example, regulations have already been introduced in China and the European Union, and proposed regulations have also been floated in the United States, Canada, India, the United Kingdom, and elsewhere.
Cyber security
Hackers are always testing new technology in search of ways to subvert it for their own purposes, and generative AI is no exception.
Xu said: “We have seen examples of malware and ransomware code that generative AI has been tricked into producing, as well as ‘prompt injection’ attacks that can trick these tools into giving away information they shouldn’t.
“This has led to the industrialization of sophisticated phishing attacks.”
“Hallucinations”, fabrications and deepfakes
Fabrications, including “hallucinations” and factual errors, are already emerging problems in generative AI chatbot solutions.
Training data can lead to biased, off-base, or outright wrong responses, which can be difficult to spot, particularly as these solutions become increasingly believable and relied upon.
Deepfakes, in which generative AI is used to create content with malicious intent, represent a significant risk.
These fake images, videos, and audio recordings have been used to attack celebrities and politicians, to create and spread misleading information, and even to create fake accounts or take over and break into existing legitimate accounts.
Litan said: “In a recent example, an AI-generated image of Pope Francis wearing a fashionable white down jacket went viral on social media.
“While this example seemed innocuous at first glance, it provided a glimpse into a future where deepfakes create significant reputational, counterfeit, fraud, and political risks for individuals, organizations, and governments.”
How risk experts manage generative AI risks
There are two general approaches to leveraging ChatGPT and similar applications.
The first is to use these models out of the box, as-is, without direct customization.
The second approach is prompt engineering, which uses tools to create, tune, and evaluate prompt inputs and outputs.
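As a rough illustration of what creating, tuning, and evaluating prompts can look like in practice, here is a minimal Python sketch. The `call_model` stub and the scoring function are assumptions standing in for a real API client and a real evaluation harness, not any particular vendor’s toolchain.

```python
# Hypothetical sketch of a prompt-engineering loop: build prompt variants,
# send them to a model, and score the outputs. `call_model` is a stub.
from string import Template

def call_model(prompt: str) -> str:
    """Placeholder for a real generative AI API call."""
    return f"[model output for: {prompt[:40]}...]"

def evaluate(output: str, required_terms: list[str]) -> float:
    """Crude score: fraction of required terms present in the output."""
    hits = sum(term.lower() in output.lower() for term in required_terms)
    return hits / len(required_terms)

template = Template("Summarize the $doc_type in a $tone tone, under $words words.")
variants = [
    template.substitute(doc_type="risk report", tone="neutral", words=150),
    template.substitute(doc_type="risk report", tone="formal", words=100),
]

for prompt in variants:
    output = call_model(prompt)
    score = evaluate(output, required_terms=["risk", "report"])
    print(f"{score:.2f}  {prompt}")
```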
“Establish a governance and compliance framework for enterprise use of these solutions.”
Litan said: “For out-of-the-box usage, organizations must manually review all model output to detect incorrect, misinformed, or biased results.
“Establish a governance and compliance framework for enterprise use of these solutions, including clear policies that prohibit employees from asking questions that expose sensitive organizational or personal data.”
Organizations should also monitor unauthorized use of ChatGPT and similar solutions using existing security controls and dashboards to discover policy violations.
“You must take steps to protect internal and other sensitive data used to create prompts on third-party infrastructure.”
For example, firewalls can block access for corporate users, security information and event management systems can monitor event logs for violations, and secure web gateways can monitor unauthorized API calls.
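One way to approximate that gateway monitoring is to scan existing proxy or gateway logs for calls to known generative AI endpoints. The Python sketch below assumes a simple space-delimited log format and an illustrative host list; in practice this logic would live in a SIEM rule or the gateway itself.

```python
# Hypothetical sketch: flag outbound requests to generative AI endpoints in a
# web-gateway log. The log format and host list are illustrative assumptions.
WATCHED_HOSTS = {"api.openai.com", "chat.openai.com", "bard.google.com"}

def flag_unsanctioned_calls(log_path: str) -> list[str]:
    """Return log lines whose destination host is on the watch list."""
    violations = []
    with open(log_path) as log:
        for line in log:
            # Assumed format: "<timestamp> <user> <dest_host> <url_path>"
            fields = line.split()
            if len(fields) >= 3 and fields[2] in WATCHED_HOSTS:
                violations.append(line.rstrip())
    return violations

if __name__ == "__main__":
    for hit in flag_unsanctioned_calls("gateway.log"):
        print("policy violation:", hit)
```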
Litan added: “All of these risk mitigation measures also apply to prompt engineering usage.
“Additionally, you should take steps to protect internal and other sensitive data used to engineer prompts on third-party infrastructure, and create and store engineered prompts as immutable assets.
“These assets can represent vetted engineered prompts that are safe to use. They can also represent a corpus of fine-tuned and highly developed prompts that can be more easily reused, shared, or sold.”
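To make the idea of prompts as immutable assets concrete, one plausible approach (an assumption, not a Gartner prescription) is a content-addressed store, where each vetted prompt is filed under the hash of its own text, so any tampering is detectable:

```python
# Hypothetical sketch: store vetted, engineered prompts as content-addressed
# (hence effectively immutable) assets. A real registry would add signing,
# access control, and versioned metadata.
import hashlib
import json

class PromptRegistry:
    def __init__(self):
        self._store: dict[str, str] = {}

    def register(self, prompt: str) -> str:
        """Store a vetted prompt under the SHA-256 hash of its content."""
        digest = hashlib.sha256(prompt.encode("utf-8")).hexdigest()
        self._store[digest] = prompt
        return digest

    def fetch(self, digest: str) -> str:
        """Retrieve a prompt and verify it still matches its hash."""
        prompt = self._store[digest]
        if hashlib.sha256(prompt.encode("utf-8")).hexdigest() != digest:
            raise ValueError("Prompt asset has been tampered with.")
        return prompt

registry = PromptRegistry()
asset_id = registry.register("Summarize this quarterly risk report in 200 words.")
print(json.dumps({"asset_id": asset_id, "prompt": registry.fetch(asset_id)}, indent=2))
```

Because the identifier is derived from the content, any modified prompt necessarily gets a new identifier, which is what makes the stored asset effectively immutable.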