
Google Flags Photos of Father’s Sick Son as Child Abuse, Informs Police


Google flagged photos that a concerned father took of his sick child as child sexual abuse material (CSAM) and sparked a police investigation after it reported him.

A report from the New York Times outlines the case of a man known only as Mark, who was investigated over photos he took of his son to send to a doctor after the boy developed an infection in an intimate area in February 2021.

After Mark’s son fell ill, an online consultation was scheduled because pandemic restrictions limited in-person appointments. A nurse asked the boy’s parents to send photos to the doctor in advance, and the images were uploaded to Google’s cloud.

The medical episode was quickly cleared up, but Mark was left with a far bigger issue when a couple of days later he was notified that his Google account had been disabled because of “harmful content.”

After the photos were flagged by Google’s artificial intelligence software, a human content moderator would have reviewed them to confirm that they met the federal definition of child sexual abuse material.

He quickly appealed to Google, but by then he had already lost access to his emails, contacts, documents, and phone contract, and the San Francisco Police Department had started its investigation.

No Crime Committed

Several months later, Mark was cleared by the police. He was told that authorities had accessed his internet searches, his location history, his messages, and every photo and video he had ever stored on the cloud.

After reviewing Mark’s personal information, the police concluded that no crime had occurred.

However, Mark did not get his Google account back, and law professor Kate Klonick tells the Times that the parents could potentially have lost custody of their child.

How Google Flags Images

In 2021, Google filed over 600,000 reports of CSAM, disabling the accounts of over 270,000 users.

The tech giant uses its Content Safety API AI toolkit to scan photos for CSAM. When it flags an exploitative image, Google is required by federal law to report the potential offender to the National Center for Missing and Exploited Children (NCMEC).

“Child sexual abuse material (CSAM) is abhorrent and we’re committed to preventing the spread of it on our platforms,” Google spokesperson Christa Muldoon tells The Verge.

“We follow U.S. law in defining what constitutes CSAM and use a combination of hash matching technology and artificial intelligence to identify it and remove it from our platforms,” she adds.

“Additionally, our team of child safety experts reviews flagged content for accuracy and consults with pediatricians to help ensure we’re able to identify instances where users may be seeking medical advice.”
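Hash matching, in broad terms, works by comparing a fingerprint of an uploaded image against a database of fingerprints of previously identified abuse imagery. The sketch below is purely illustrative and is not Google’s implementation: it uses an exact SHA-256 digest and a hypothetical `KNOWN_HASHES` set, whereas production systems rely on perceptual hashes that also catch edited or re-encoded copies, plus machine-learning classifiers for never-before-seen images (which is reportedly what flagged Mark’s photos).

```python
import hashlib
from pathlib import Path

# Hypothetical set of fingerprints of previously identified images.
# Real systems use perceptual hashing (PhotoDNA-style) so that resized
# or re-encoded copies still match; this sketch uses exact SHA-256
# matching of the raw file bytes purely for illustration.
KNOWN_HASHES: set[str] = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",  # placeholder
}

def hash_image(path: Path) -> str:
    """Return the SHA-256 hex digest of the file's raw bytes."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def is_known_image(path: Path) -> bool:
    """True if the file's fingerprint appears in the known-hash set."""
    return hash_image(path) in KNOWN_HASHES
```

The key limitation the sketch highlights is that exact hashing only catches byte-identical copies; images that are new, like a parent’s own photos, are instead evaluated by classifiers, which is where false positives such as Mark’s case can arise.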

Jon Callas, of the Electronic Frontier Foundation, tells the Times that the scanning of photos is “intrusive” and that a family photo album on someone’s personal device should be a “private sphere.”

“This is precisely the nightmare that we are all concerned about,” Callas says. “They’re going to scan my family album, and then I’m going to get into trouble.”


Image credits: Header photo licensed via Depositphotos.


Author: Matt Growcoot
Source: Petapixel

