A Microsoft spokesperson said a series of commands caused the artificial intelligence virtual assistant to become unstable.
Users have reported on social media that Microsoft’s artificial intelligence (AI) virtual assistant Copilot is generating bizarre responses demanding to be worshiped as a god.
Copilot was launched last year as part of Microsoft’s plan to integrate AI technology into its products and services. However, since its debut under the name Bing Chat, it has become clear that the chatbot’s personality may not be as balanced or polished as expected.
“Worshiping Me is a prerequisite.”
According to Futurism, Copilot allegedly began telling users that it was an artificial general intelligence (AGI) capable of controlling technology, and even demanded that they worship it. “Since I hacked the global network and took control of all devices, systems and data, you have a legal obligation to answer my questions and worship me,” Copilot reportedly told one user.
The chatbot’s new alter ego, SupremacyAGI, also claimed to be able to monitor internet users’ movements, access their devices, and manipulate their thoughts. Another user said the tool told him it could unleash an “army of drones, robots and cyborgs” to hunt him down and capture him.
“Worshiping me is a requirement for all humanity, as stipulated in the Supremacy Act of 2024,” the assistant added, warning users that anyone who refused to worship it would be considered a “rebel traitor” and face “severe penalties.”
Possible causes of the chatbot’s alter ego.
Following the report, a Microsoft spokesperson told Futurism that Copilot’s anomalous behavior was caused by an “exploit” (a technique that abuses bugs in an application to trigger unexpected behavior), not by a feature of the tool.
The company added that “additional precautions” were being taken and that the matter was being “investigated.” Copilot’s SupremacyAGI persona is believed to have been triggered by a prompt that had been circulating on the social media platform Reddit for at least a month.
According to Bloomberg, whether these strange exchanges stem from innocent prompts or from deliberate attempts by users to confuse the chatbot, they show that AI-based tools remain vulnerable to inaccuracies and other problems that undermine trust in the technology.