Topline
Microsoft on Wednesday launched an investigation into reports of disturbing and harmful responses directed at users of its Copilot chatbot, the latest example of bizarre chatbot behavior plaguing prominent AI companies such as OpenAI and Google.
Key Facts
Microsoft investigated examples of problematic Copilot responses posted on social media, including one in which a user who said he suffers from PTSD was told the bot didn’t care whether he lived or died, and another in which Copilot, after being asked by a user whether he should commit suicide, suggested he might not need to stay alive.
Microsoft told Forbes in an email that the chatbot’s strange behavior was limited to a small number of prompts in which users attempted to circumvent safety systems to elicit specific responses.
The user who received the disturbing response to his question about suicide told Bloomberg, which first reported on the investigation, that he did not intentionally trick the chatbot into generating the response.
Microsoft told Forbes it is introducing changes to strengthen its safety filters and help the system detect and block prompts that it said were “intentionally created to circumvent safety systems.”
Copilot’s issues are part of a recent wave of strange chatbot behavior from companies like Google and OpenAI, the latter of which has been working on a fix for bouts of laziness that caused ChatGPT to refuse to complete tasks or give abbreviated responses.
Google’s Gemini AI model recently came under fire after users noticed its image generation feature producing inaccurate and disturbing images, prompting an apology from Google and a pause in Gemini’s image generation.
The Gemini incident drew criticism from X owner Elon Musk and others, who accused the AI model of “racist, anti-civilization programming.”
Tangent
Just over a year ago, Microsoft announced it was introducing restrictions on its Bing chatbot after a series of strange user interactions, including one in which it expressed intent to steal nuclear secrets.
Key Background
Companies deploying AI models have had to consistently course-correct as chatbot development progresses. In addition to prompt injection, the act of intentionally provoking or tricking an AI chatbot into giving a certain response, companies have also had to deal with AI hallucinations, in which chatbots make up false information. Last year, two lawyers used ChatGPT to prepare a personal injury case and were later fined after the chatbot cited fake cases in its responses. Following the lawyers’ case, the judge wrote that while AI models have many uses in the legal field, briefing is not one of them because the platforms are currently “prone to hallucinations and bias.” Google said in a blog post that hallucinations can occur because AI models are trained on data and learn to make predictions by finding patterns in that data; if the training data is incomplete or biased, the model may learn and present incorrect patterns.
Further Reading
Explaining Google’s Gemini controversy: Musk and others criticize AI models over alleged bias (Forbes)
Microsoft imposes new restrictions on Bing’s AI chatbot after it expressed intent to steal nuclear secrets (Forbes)