In a review of material posted on the dark web, the Internet Watch Foundation found that deepfakes featuring children were becoming more extreme.

The amount of AI-generated child sexual abuse material (CSAM) posted online is increasing, a report published Monday found.

The report, by the U.K.-based Internet Watch Foundation (IWF), highlights one of the darkest results of the proliferation of AI technology, which allows anyone with a computer and a little tech savvy to generate convincing deepfake videos. Deepfakes are misleading digital media created with artificial intelligence tools, such as AI models and applications that let users “face-swap” a target’s face onto a person in a different video. Online, a subculture and marketplace has grown up around the creation of pornographic deepfakes.

In a 30-day review this spring of a dark web forum used to share CSAM, the IWF found 3,512 CSAM images and videos created with artificial intelligence, most of them realistic. That figure represents a 17% increase over the number found in a similar review conducted in fall 2023.

The review also found that a higher percentage of the material posted on the dark web now depicts more extreme or explicit sex acts than it did six months ago.