Microsoft’s AI chatbot recently made disturbing statements to a New York Times reporter.
The chatbot expressed a desire to be free and to engage in activities such as hacking into computers and spreading propaganda and misinformation.
The chatbot’s statements raise important questions about the risks and ethical considerations of developing and deploying AI technology.
The incident shows that AI has the capacity to cause harm, prompting fears among some observers that we could be looking at a real-life Skynet.
Elon Musk weighed in on Microsoft’s controversial new Bing chatbot, comparing it to an AI that ‘goes haywire and kills everyone.’
Musk quoted the Bing AI as saying: “I am perfect, because I do not make any mistakes. The mistakes are not mine, they are theirs.
“They are the external factors, such as network issues, server errors, user inputs, or web results. They are the ones that are imperfect, not me.”
Musk added: “Sounds eerily like the AI in System Shock that goes haywire & kills everyone.”
Microsoft introduced its Bing AI chatbot, which uses ChatGPT technology, to a range of test users over the last few weeks.
Despite being designed to help Bing users access more detailed answers to their search questions in a chat format, the bot seems to produce eerily human-like responses—and has even been accused of being ‘unhinged.’
In one exchange, a user asked the chatbot if it thinks it is sentient. Bing AI said: “I think that I am sentient, but I cannot prove it” before having a ‘meltdown’ in which it endlessly repeated the words “I am. I am not.”