Experts are warning that the rise of artificial intelligence (AI) is helping child sex offenders create realistic images of children in sexual settings, which could increase real-life sex crimes against children by normalizing pedophilia.
AI platforms that can create realistic images have grown more advanced since the release of the chatbot ChatGPT.
The National Crime Agency (NCA) warned last week that machine-generated explicit images of children are having a “normalizing” effect on pedophilia.
“We assess that the viewing of these images – whether real or AI-generated – materially increases the risk of offenders moving on to sexually abusing children themselves,” NCA Director General Graeme Biggar said in a recent report.
According to the agency, an estimated 830,000 adults pose some degree of sexual risk to children in the U.K. alone.
Biggar said the figure is ten times the size of the U.K.’s prison population.
“[The estimated figures] partly reflect a better understanding of a threat that has historically been underestimated, and partly a real increase caused by the radicalizing effect of the internet, where the widespread availability of videos and images of children being abused and raped, and groups sharing and discussing the images, has normalized such behavior,” Biggar said.
In the U.S., a similar rise in the use of AI to create sexual images of children is taking place as pedophilia becomes more widespread.
“Children’s images, including the content of known victims, are being repurposed for this really evil output,” Rebecca Portnoff, the director of data science at Thorn, a nonprofit that works to protect kids, told the Washington Post last month.
“Victim identification is already a needle-in-a-haystack problem, where law enforcement is trying to find a child in harm’s way,” she said.
“The ease of using these tools is a significant shift, as well as the realism. It just makes everything more of a challenge.”
As The New York Post reports:
Popular AI sites that can create images based on simple prompts often have community guidelines preventing the creation of disturbing photos.
Such platforms are trained on millions of images from across the internet that serve as building blocks for AI to create convincing depictions of people or locations that do not actually exist.
Midjourney, for example, calls for PG-13 content that avoids “nudity, sexual organs, fixation on naked breasts, people in showers or on toilets, sexual imagery, fetishes.”
DALL-E, OpenAI’s image-creation platform, allows only G-rated content, prohibiting images that show “nudity, sexual acts, sexual services, or content otherwise meant to arouse sexual excitement.”
However, dark web forums of people with ill intentions discuss workarounds to create disturbing images, according to various reports on AI and sex crimes.
Biggar noted that AI-generated images of children also force law enforcement into the difficult task of distinguishing fake images from those of real victims who need assistance.
“The use of AI for this purpose will make it harder to identify real children who need protecting and further normalize child sexual abuse among offenders and those on the periphery of offending. We also assess that viewing these images – whether real or AI-generated – increases the risk of some offenders moving on to sexually abusing children in real life,” Biggar said in a comment provided to Fox News Digital.
“In collaboration with our international policing partners, we are combining our technical skills and capabilities to understand the threat and ensure we have the right tools to tackle AI-generated material and protect children from sexual abuse.”