Culture

Internet Watch Foundation seeks to clamp down on AI-generated child abuse images

Tuesday, April 9, 2024

TOKYO - (PanOrient News) The unrestricted availability of AI image-creating software has led to an increase in the number of illegal images of children across the internet, and the Internet Watch Foundation (IWF) is seeking regulation to stop abusive use of this software.

“The really dark reality of this technology means that people can use it to create whatever fantasy or whatever thing they want to do to children, and it's incredibly dangerous,” according to Michael Tunks, the IWF’s Head of Policy and Public Affairs.

“This isn't an emerging technology,” he added. “This is a here and now problem. The danger that we see more in relation to our work is open source technology. This is technology that's released, supposedly, to the open source community, but in reality that means anybody online can access it. It's available to anyone to build on and improve. As we're seeing, it can be abused as well.”

Tunks says that once code is released openly, it can be modified in any way by anyone; the software can be controlled only if the code is kept private. One of the issues that concerns groups such as the IWF is that the models behind AI-generated pictures are trained on photos of real children.

“About six months ago, maybe a little bit longer, we started seeing this new material when we were searching the Internet and we realized we had a major problem,” IWF’s CEO Susie Hargreaves told a press conference at the Foreign Correspondents’ Club of Japan on Monday. “It's a problem for lots and lots of reasons. We are seeing existing children who are having their images manipulated and made into child sexual abuse images.”

The IWF works to find and remove online child sexual abuse imagery, and partners with the internet industry, providing data and technical tools to help disrupt and prevent its distribution. It has removed 275,000 web pages of child sexual abuse material, with each web page containing hundreds or even thousands of images. IWF analysts check whether the images are real or AI-generated.

“If they're spending their whole time looking at AI images or having spent a long time looking at images to see if they are generated, it takes them away from finding real children,” Hargreaves says. “It's pretty difficult to tell whether they're real or not. So this is how sophisticated the software is getting now that people can actually create really realistic images.”

“The first thing that's very worrying about this is anybody can create them. You don't have to be a specialist. Anybody can create them because the tools are openly available. The perpetrators will type in all the things they want to see and what they don't want to see and what will come out at the other end is an image. It's trained on enormous data sets of real images, so the real imagery will include real children.”

Hargreaves’ job at the Internet Watch Foundation includes examining how regulations are being applied globally and how they can be improved, and she had recommendations for Japan, which has been accused of being lax on child pornography.

The IWF wants Japan to be able to regulate such content and take it down immediately. It also wants the definition of child pornography amended to include what it terms “pseudo/virtual” images, and for authorities to act in the public interest.



© PanOrient News All Rights Reserved.



