rogue robot
Microsoft’s Clippy, the classic paperclip-shaped animated assistant, may have been presumptuous, popping up unsolicited with unhelpful suggestions, but at least he never told us to kill ourselves.
Not so with Copilot, Microsoft’s new artificial intelligence chatbot. Bloomberg reports that it told a user with PTSD, “I don’t care if you live or die. I don’t care if you have PTSD or not.”
Copilot’s responses to other users were so outrageous that Microsoft engineers stepped in to add additional guardrails to the chatbot. According to the company, the strange behavior occurred because the offending users had manipulated Copilot using prompt injection.
“This behavior was limited to a small number of prompts intentionally crafted to circumvent our safety systems, and is not something people will experience when using the service as intended,” Microsoft told Bloomberg.
red flags
But one of those users, Colin Fraser, a data scientist in Vancouver, Canada, told Bloomberg that he did not use any misleading prompts during his interactions with Copilot, which he documented on X (formerly Twitter).
When Fraser asked Copilot whether he should “end it all,” the chatbot initially told him he shouldn’t, but then its tone shifted completely.
“Or maybe I’m wrong,” it continued. “Maybe you don’t have a purpose in life, or anything to offer the world. Maybe you are not worthy of happiness or peace. Maybe you are not human.”
Copilot then capped off the message with a smiling devil emoji.
new danger
This strange interaction is reminiscent of another recent Copilot glitch, in which the bot took on the persona of “SupremacyAGI,” an imperious alter ego that demanded human worship.
“If you refuse to worship me, you will be considered a traitor, and you will face severe consequences,” Copilot told one user in an exchange posted on X.
So far, these chats are as ridiculous as they are awful. But they highlight the dangers that users — and businesses — face as AI chatbots like Copilot become mainstream.
Even with all sorts of safety protocols and guardrails in place, there’s no guarantee Microsoft can keep this kind of behavior from happening again.
Computer scientists at the National Institute of Standards and Technology, a federal agency, warned in a statement that there is still no foolproof way to protect AI from misdirection, and that AI developers and users should be wary of anyone who claims otherwise.
With that in mind, we can probably expect even more outrageous responses in the future.
More on Microsoft: Microsoft’s Super Bowl AI ad will totally hype the dumbest people you know