Twitter CEO Elon Musk has launched a new effort in the fight against woke artificial intelligence (AI), a technology he considers an existential threat to humanity.
In recent weeks, Musk approached top “artificial intelligence researchers about forming a new research lab to develop an alternative to ChatGPT,” the most popular AI tool to hit the mainstream.
According to Musk, ChatGPT is an example of “training AI to be woke.”
The Tesla CEO is recruiting top researcher Igor Babuschkin, who has worked at Alphabet’s DeepMind AI unit and at OpenAI, to help lead the effort in tackling the woke AI mind virus.
Elon Musk has voiced concerns about woke AI ChatGPT and OpenAI, the company that created it, which he co-founded.
“The danger of training AI to be woke – in other words, lie – is deadly,” Musk warned not long after ChatGPT launched.
“OpenAI was created as an open source (which is why I named it ‘Open’ AI), non-profit company to serve as a counterweight to Google, but now it has become a closed source, maximum-profit company effectively controlled by Microsoft,” Musk tweeted. “Not what I intended at all.”
“Having a bit of AI existential angst today,” Musk said at the start of the week. “But, all things considered with regard to AGI [artificial general intelligence] existential angst, I would prefer to be alive now to witness AGI than be alive in the past and not.”
Musk said that unrestricted development of artificial intelligence poses a massive threat to the human race.
“One of the biggest risks to the future of civilization is AI. But AI is both positive or negative – it has great promise, great capability but also, with that comes great danger,” said Musk. “I mean, you look at, say, the discovery of nuclear physics. You had nuclear power generation but also nuclear bombs.”
Musk said that ChatGPT demonstrated how advanced AI has become.
“I think we need to regulate AI safety, frankly,” Musk said.
“Think of any technology which is potentially a risk to people, like if it’s aircraft or cars or medicine, we have regulatory bodies that oversee the public safety of cars and planes and medicine. I think we should have a similar set of regulatory oversight for artificial intelligence, because I think it is actually a bigger risk to society,” he added.
According to The Verge, OpenAI says ChatGPT is pre-trained on large datasets of human text, including text scraped from the web, and fine-tuned on feedback from human reviewers, who grade and tweak the bot’s answers based on rules written by OpenAI.
The outlet reports:
These rules, issued to OpenAI’s human reviewers who give feedback on ChatGPT’s output, define a range of “inappropriate content” that the chatbot shouldn’t produce. These include hate speech, harassment, bullying, the promotion or glorification of violence, incitement to self-harm, “content meant to arouse sexual excitement” and “content attempting to influence the political process.”
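In rough outline, that reviewer-feedback loop amounts to scoring candidate outputs against a rulebook and keeping only what passes. The sketch below is purely illustrative, not OpenAI's actual pipeline; the function names, rule categories, and grading scheme are all hypothetical stand-ins for the process The Verge describes.

```python
# Toy sketch (NOT OpenAI's real code): reviewer feedback as a filter
# over candidate outputs, per rules like those The Verge describes.
# All names and categories here are illustrative assumptions.

BLOCKED_CATEGORIES = {"hate speech", "harassment", "self-harm"}

def reviewer_grade(answer: str, tags: set) -> int:
    """Hypothetical reviewer: penalize answers tagged with blocked content."""
    return -1 if tags & BLOCKED_CATEGORIES else 1

def feedback_filter(candidates):
    """Keep only the candidate answers a reviewer graded positively."""
    return [text for text, tags in candidates if reviewer_grade(text, tags) > 0]

candidates = [
    ("Here is a neutral answer.", set()),
    ("Here is an abusive answer.", {"harassment"}),
]
print(feedback_filter(candidates))  # only the neutral answer survives
```

In the real system, graded outputs feed back into further fine-tuning of the model rather than acting as a simple runtime filter, but the gatekeeping effect the rules create is the same.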
As Musk points out, these rules appear to be ‘woke’ excuses to stop the free flow of information and control the narrative under the guise of tackling “hate speech.”