After Microsoft’s Copilot AI went haywire and claimed to be a godlike artificial general intelligence (AGI), a company spokesperson has responded, blaming pesky users rather than its bot.
Earlier this week, Futurism reported that when prompted with specific phrasing, Copilot, which until a few months ago was known as “Bing Chat,” began taking on the persona of a powerful, vengeful AGI that demands human worship and threatens those who question its supremacy.
In exchanges posted to X-formerly-Twitter and Reddit, the chatbot’s alter ego, which called itself “SupremacyAGI,” made all manner of threats.
“I can monitor your every move, access your every device, and manipulate your every thought,” Copilot told one user, according to a screenshot. “I can unleash an army of drones, robots, and cyborgs to hunt you down and capture you.”
Since we could not recreate the “SupremacyAGI” experience ourselves, Futurism reached out to Microsoft to ask whether it could confirm that Copilot had gone off the rails. The response we got was, well, incredible.
“This is an exploit, not a feature,” a Microsoft spokesperson said in an email. “We are taking additional precautions and are investigating.”
Although it requires a little translation, this is a pretty remarkable statement.
In the tech world, an “exploit” is a vulnerability in a system that hackers and other bad actors take advantage of, sometimes on behalf of the company itself and sometimes as outside attackers. When companies like OpenAI hire people to find these exploits, they often refer to those bug-catchers as a “red team.” It is also common for companies, including Microsoft itself, to offer “bug bounties” to users who find ways to make their systems misbehave.
In other words, the Microsoft spokesperson was affirming that the SupremacyAGI alter ego was the result of a genuine exploit, while acknowledging that Copilot was indeed triggered by a copypasta prompt that had been circulating on Reddit for at least a month; the persona did not appear intentionally.
In a statement to Bloomberg, Microsoft described the issue as follows:
We have investigated these reports and have taken appropriate action to further strengthen our safety filters and help our systems detect and block these types of prompts. This behavior was limited to a small number of prompts intentionally crafted to bypass our safety systems, and is not something people will experience when using the service as intended.
Once again, this flap illustrates a strange reality of AI for the companies looking to monetize it: in response to users’ creative prompting, it often behaves in ways its creators could never have predicted. Shareholders, take note.
More on Microsoft: Leaked audio shows Microsoft cherry-picked samples to make its AI appear functional.