OpenAI, the creator of ChatGPT, has changed the fine print of its usage policy to remove certain text related to its use of AI technology and large-scale language models for “military and war” purposes.
Prior to the change on January 10, the usage policy specifically prohibited the use of OpenAI models for content that promotes, encourages, or depicts weapons development, the military, war, or self-harm.
OpenAI said the updated policy condenses the list and makes the document more “readable” while providing “service-specific guidance.”
This list has now been condensed into what the company calls its “universal policy,” which prohibits the use of its services to cause harm to others and prohibits the use of its services to cause harm to others. Reusing or distributing output from the model is prohibited.
An OpenAI spokesperson said: “Our policy is to use our tools to harm people, develop weapons, monitor communications, harm others, or destroy property. I am not allowed to do so.” “But there are also national security use cases that align with our mission. For example, we are already working with DARPA to develop new cybersecurity solutions to protect critical infrastructure and the open source software that industries rely on. Under previous policy, it was unclear whether these beneficial use cases were allowed under “military.” Therefore, the goal of our policy update is to provide clarity and allow these discussions to take place. ”
The change in policy is seen as a gradual weakening of the company’s stance on cooperation with defense and military organizations, but the “frontier risks” posed by AI have been criticized by OpenAI CEO Sam. This has already been highlighted by several experts, including Altman.
Highlighting the risks posed by AI
Last May, hundreds of technology industry leaders, academics, and other prominent figures signed an open letter warning that advances in AI could lead to extinction, and that controlling the technology is a top global priority. said it should be.
A statement released by the San Francisco-based Center for AI Safety said: “Mitigating the risk of AI-induced extinction should be a global priority, alongside other society-wide risks such as pandemics and nuclear war. ” is written.
Ironically, the most prominent signatories at the beginning of the letter included Altman and Microsoft CTO Kevin Scott. Executives, engineers, and scientists from Google’s AI research lab DeepMind also signed the letter.
The first letter opposing the use of AI arrived in March, in which more than 1,100 technology figures, leaders, and scientists wrote to labs conducting large-scale experiments using AI. issued a warning.
OpenAI announced in October that it was preparing a team to prevent what it called frontier AI models from triggering nuclear war or other threats.
“We believe that frontier AI models that exceed the capabilities of state-of-the-art existing models have the potential to benefit all of humanity. But they also pose increasingly serious risks.” the company said in a blog post.
In 2017, an international group of AI and robotics experts signed an open letter to the United Nations calling for an end to the use of autonomous weapons that threaten a “third revolution in warfare.”
These experts also ironically included Elon Musk, who founded an AI company called X.AI to compete with OpenAI.
Reason for concern
There may be further reason for concern. Some researchers argue that so-called “evil” or “bad” AI models cannot be scaled down or trained to be “good” using existing techniques.
A research paper led by Anthropic aimed to see if AI systems could be taught deceptive behaviors and strategies, and showed that such behaviors could be made persistent.
“Such backdoor behavior can become persistent, and standard safety training techniques such as supervised fine-tuning, reinforcement learning, and adversarial training (training to elicit risky behavior and eliminate it) We found that it could not be removed,” the researchers wrote.
“Our results suggest that when a model exhibits deceptive behavior, standard techniques cannot remove such deception and may create a false impression of safety. ” they added.
What’s even more concerning, according to the researchers, is that using adversarial training to thwart such deceptive behavior in a model could make the model better able to recognize backdoor triggers, making it less secure. This means that you can effectively hide your actions.
(This article has been updated with comment from an OpenAI spokesperson.)