

I think such matters should be kept strictly out of corporate hands, or be completed with total oversight.


Why though? If it does reduce consumption of real CSAM and/or real-life child abuse (which is an “if”, as the stigma around the topic greatly hinders research), it’s a net win.
Or is it simply a matter of spite?
Pedophiles don’t choose to be attracted to children, and many have trouble keeping everything at bay. Traditionally, those of them looking for the least harmful release went for real CSAM, but it’s obviously extremely harmful in its own right - just a bit less so than going out and raping someone. Now that AI materials are appearing, they may offer the safest of the highly graphic outlets we know of, with the least harm done to children. Without them, many pedophiles will revert to traditional CSAM, increasing the number of victims needed to cover the demand.
As with many other things, the best we can hope for here is harm reduction. Hardline policies do not seem to be effective enough, as people continuously find ways to propagate CSAM, and pedophiles continuously find ways to access it and leave no trace. So we need to think of ways to give them something that will make them choose AI over real materials. This means making AI better, more realistic, and at the same time more diverse. Not for their enjoyment, but to make them switch to something better and safer than what they currently use.
I know it’s a very uncomfortable kind of discussion, but we don’t have the magic pill to eliminate it all, and so must act reasonably to prevent what we can prevent.


Victims’ consent is paramount. I thought I made it clear enough when I gave the example of police investigations asking for that, but apparently not.
In any case, this is exactly why I’d like to see more research being done on the topic. First things first, we need to know whether it even works. For now, it’s little more than an educated guess that it probably does.


That would be true if children were abused specifically to obtain the training data. But what I’m talking about is using data that already exists, taken from police investigations and other sources. Of course, it also requires the victims’ consent (once they are old enough), as not everyone will agree to have materials of their abuse proliferate in any way.
The police have already used CSAM, with victims’ consent, to more convincingly impersonate CSAM platform admins in investigative operations, leading to arrests of more child abusers and of those sharing the materials around. While controversial, this was a net benefit, as it reduced both the number of avenues for sharing CSAM and the number of people able to do so.
The case with AI is milder, as it requires minimal human interaction, so no one would need to re-view the materials as long as the victims are already identified. It’s enough for the police to contact the victims, get their agreement, and feed the data into the AI without releasing the source. With enough data, AI could improve image and video generation, drawing more viewers away from real CSAM and reducing rates of abuse.
That is, if it works this way. There’s a glaring research hole in this area, and I believe it is paramount to figure out whether it helps. Then we could decide whether to include already-produced CSAM in the training data, or whether adult data alone is enough to make the output good enough for the intended audience to switch.


Honestly, I’d love to see more research on how AI CSAM consumption affects consumption of real CSAM and rates of sexual abuse.
Because if it does reduce them, it might make sense to intentionally use datasets already involved in previous police investigations as training data. But only if there’s a clear reduction effect with AI materials.
(The police have already used some materials, with victims’ consent, to crack down on CSAM-sharing platforms in the past.)


I feel like our cultural relationship to it is also quite messed up.
AI doesn’t actually undress people; it just draws a naked body. It’s an artistic representation, not an X-ray. You’re not getting actual nudes in this process, and the AI has no clue what the person looks like naked.
Now, such images can be used to blackmail people, because, again, our culture hasn’t quite caught up with the fact that any nude image can absolutely be an AI-generated fake. Once it does, however, I fully expect creators of such things to be seen as odd creeps spreading their fantasies around, and any nude imagery to be treated as fake by default.
Loss meme reference


Microsoft’s Windows chief Pavan Davuluri had already hinted at such plans earlier, saying the next evolution of the OS will make it capable enough to “semantically understand you” as Windows becomes “more ambient, more pervasive, more multi-modal”. Using features like Copilot Vision, it will be able to “look at your screen” and do more.
Since when did corpos try to reframe the word “pervasive” as something positive?
To me, it’s more like the Netherlands handing out free syringes and needles so that drug users at least don’t contract something from used ones.
To be clear: granting any and all pedophiles access to therapy would be of tremendous help. I think it must be done. But there are two issues remaining: