The World Economic Forum revealed its plan to combat “disinformation and hate speech” by using “human and artificial intelligence” in the group’s so-called battle against “the dark world of online harm.”
Inbal Goldberger, vice president of Trust and Safety at ActiveFence, published an op-ed on the global organization’s website outlining a solution to online abuse.
The proposal would combine AI with “subject matter experts” to “detect nuanced, novel online abuses at scale before they reach mainstream platforms.”
Goldberger adds that this approach to content moderation would allow human and AI teams to flag and remove items deemed high risk after feeding millions of sources into training sets.
“Supplementing this smarter automated detection with human expertise to review edge cases and identify false positives and negatives and then feeding those findings back into training sets will allow us to create AI with human intelligence baked in,” Goldberger wrote.
“This more intelligent AI gets more sophisticated with each moderation decision, eventually allowing near-perfect detection, at scale,” she adds.
Goldberger says that the public perception of events like viruses, wars, and recessions is altered by online access.
In other words, the WEF is planning to censor all counter-narratives by flagging them as “disinformation.”
“Before reaching mainstream platforms, threat actors congregate in the darkest corners of the web to define new keywords, share URLs to resources and discuss new dissemination tactics at length,” Goldberger said.
“These secret places where terrorists, hate groups, child predators and disinformation agents freely communicate can provide a trove of information for teams seeking to keep their users safe,” Goldberger continues.
According to the National Center for Missing and Exploited Children, over 29.3 million child sexual abuse material reports were made to the CyberTipline in 2021 – a 35% increase from 2020.
But while the removal of child sexual abuse material is extremely important, there is also the risk that other forms of content outside this realm, such as news and politics, would be caught in the same net.
Slippery slope to total authoritarianism
Many critics of the automated censorship pushed by the Davos-based elite group argue it would lead to a complete loss of free speech.
Dave Reaboi, a national security and political warfare consultant and senior fellow at the Claremont Institute, said that the content moderation approach would be “the most monstrous tyranny history has ever seen.”
Orwellian censorship tactics have been used heavily by big tech giants for years, but an AI-driven approach could put the final nail in the coffin of free speech.
This is what World Economic Forum founder, Klaus Schwab, said during the 2022 meeting in Davos earlier this year:
“The future is built by us, by a powerful community such as you here in this room.”