Me too. Even after a tragedy caused by Google’s awful AI. My situation was that I uploaded old family photos in 2022, looking for photos of my best friend who had passed away from cancer. A week later Google disabled my decade-old account without warning in the middle of the night. No explanation. They just said “harmful content” was found and linked to a page of the most heinous accusations you could imagine. My appeal was rejected within hours. The message changed several months later to say “child abuse”, after an NYT article on the issue. Some of my family have changed their habits, but not by much; most still use Google services heavily. I am still in therapy because of the damage they caused. Some people blame me for not having a backup. I usually did, but I was in such a sad state when I lost my friend that I wasn’t thinking clearly, and I certainly had no reason to expect anything like what they did.
And it isn’t only real abuse imagery Google reports; they flag cartoon images and family photos as well. Forbes reported on it (https://www.forbes.com/sites/thomasbrewster/2021/12/20/google-scans-gmail-and-drive-for-cartoons-of-child-sexual-abuse/). They closed a fairly high-profile YouTube channel’s account over cartoons too (https://en.wikipedia.org/wiki/Naoki_Saito). Same for family and medical photos, of which there are plenty of reports from various news outlets, the most prominent probably being the three from the NYT (https://www.nytimes.com/2022/08/21/technology/google-surveillance-toddler-photo.html, https://www.nytimes.com/2022/12/30/technology/google-appeals-change.html, and https://www.nytimes.com/2023/11/27/technology/google-youtube-abuse-mistake.html). Plus more from El País (https://english.elpais.com/science-tech/2022-09-19/google-closed-my-account-over-sexual-content-but-theyre-not-telling-me-what-it-is-and-ive-lost-everything.html), Business Insider (https://www.businessinsider.com/google-users-locked-out-after-years-2020-10?op=1), Android Police (https://www.androidpolice.com/2021/03/08/when-google-locks-you-out-of-your-account-begging-the-internet-for-help-is-your-first-and-last-resort/), etc. And tons of self-reporting (https://piunikaweb.com/2026/02/03/google-photos-false-csam-flags-users-locked-out/).
My point is that it is not black and white, or as simple as “don’t download it.” There are plenty of cases in which a person would not know, such as downloading an AI training set (https://www.404media.co/a-developer-accidentally-found-csam-in-ai-data-google-banned-him-for-it/). If they truly wanted to follow the law, only knowing possession should end with a person’s account being terminated; in all other cases, at most the file should be reported and deleted. But their system is deeply flawed and most appeals are denied, which is nonsense when less than 1% of these reports end in an arrest and even fewer lead to convictions (https://stacks.stanford.edu/file/druid:pr592kc5483/cybertipline-paper-2024-04-22.pdf).
To be honest, I don’t think this is all a failure of Google or Meta or Microsoft, but of NCMEC and Thorn. They are the real threat to child safety: they use their platforms to claim to want to save children while pursuing other agendas (https://www.techdirt.com/2024/08/08/the-many-reasons-why-ncmecs-board-is-failing-its-mission-from-a-ncmec-insider/ and https://www.jezebel.com/ashton-kutcher-thorn-sex-workers-1850852760). Plus, Thorn at least has been found to exaggerate its numbers of children rescued (https://www.snopes.com/fact-check/kutcher-software-child-trafficking/).
All Google and the others are doing is over-reporting and making it harder to find actual criminals. It’s hardly worth celebrating when one is caught while thousands of innocent people are being harmed. There need to be penalties for false reports, or at least a way for people to reclaim their data and accounts when cleared of wrongdoing. The number of false positives is absurd; researchers at both Facebook and LinkedIn have found the reports to be highly erroneous (https://www.eff.org/deeplinks/2022/08/googles-scans-private-photos-led-false-accusations-child-abuse?language=en).
I think we desperately need data privacy and data protection laws. And the “think of the children” and “I have nothing to hide” arguments against them are just trickle-down ideas from the data brokers who profit heavily from mining our personal data.