Google’s AI Mistakes Medical Pictures for Child Abuse, Exposes Silicon Valley’s Broken Review System

Silicon Valley companies make a big show of user privacy and safety, but most of them don’t go much beyond meeting a few baseline regulatory standards. Recently, The New York Times reported on a case in which a toddler’s parents were banned by Google for life over photos of their own child that had been backed up to the cloud.

Mark, who works remotely from his San Francisco home, noticed an infection in his son’s groin region. His wife used his phone to take a few high-quality pictures and uploaded them to their healthcare provider’s messaging system ahead of a consultation with a doctor. Thankfully, it was a routine infection that cleared up after a course of prescribed antibiotics.

Unfortunately for Mark, Google flagged the images on his phone as potential child sexual abuse material (CSAM) and locked his account. This happened just two days after the pictures were taken, with Google stating that the account was locked because of harmful content that might be illegal under Google’s policies.

Once an account has been flagged for CSAM, there is little recourse for anyone. Such incidents are immediately reported to the National Center for Missing and Exploited Children (NCMEC), which then reviews the material. Meanwhile, Mark was having a hard time: he had lost access to all of his Google services, including Gmail, his Google Fi mobile service and more.

These days, a person’s online profile is a big part of their identity, and most people keep their data, including contacts and photos, backed up to the cloud. Banning an account is routine work for these companies, but it can upend an individual’s life.

Mark hoped that a human would be part of the review process somewhere, and that his account would be reinstated once someone read the form he submitted requesting a review of Google’s decision, in which he explained the whole ordeal.

Big Tech’s Broken Review System

Apple’s CSAM system overview

Below is an excerpt from Apple’s technical summary of its CSAM detection system, which relies on a database of known CSAM hashes provided by NCMEC; any match is reported. Detection systems like these can also use AI to flag potentially abusive pictures that are not in the database, and such material is usually escalated because it can indicate a new victim who has not yet been identified.

Apple’s method of detecting known CSAM is designed with user privacy in mind. Instead of scanning images in the cloud, the system performs on-device matching using a database of known CSAM image hashes provided by NCMEC and other child-safety organizations. Apple further transforms this database into an unreadable set of hashes, which is securely stored on users’ devices.

The hashing technology, called NeuralHash, analyzes an image and converts it to a unique number specific to that image. Only another image that appears nearly identical can produce the same number; for example, images that differ in size or transcoded quality will still have the same NeuralHash value.

Apple CSAM Technical Summary 
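To make the hash-and-match idea concrete, here is a minimal, purely illustrative Python sketch. It uses a toy average hash rather than Apple’s actual NeuralHash, and the `known_hashes` set, the 8x8 pixel grid, and the distance threshold are all hypothetical stand-ins for the NCMEC-provided database and Apple’s on-device matching, which additionally runs through privacy-preserving protocols not shown here.

```python
# Illustrative sketch only: a toy "perceptual hash" matched against a database
# of known hashes. Apple's real NeuralHash is a neural-network-based hash and
# its matching uses cryptographic techniques; this only shows the general
# shape of "hash the image, then compare against known hashes".

from typing import List, Set


def average_hash(gray_pixels: List[List[int]]) -> int:
    """Compute a 64-bit average hash from an 8x8 grayscale grid (values 0-255).

    Each bit is 1 if the pixel is brighter than the grid's mean, else 0.
    Resizing or recompressing an image barely changes pixels relative to the
    mean, so near-identical images tend to produce the same hash.
    """
    flat = [p for row in gray_pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits


def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")


def matches_known_database(image_hash: int, known_hashes: Set[int],
                           max_distance: int = 4) -> bool:
    """Flag an image if its hash is within max_distance bits of any known hash."""
    return any(hamming_distance(image_hash, h) <= max_distance for h in known_hashes)


if __name__ == "__main__":
    # Hypothetical usage: known_hashes stands in for the NCMEC-provided database.
    known_hashes = {0xF0F0F0F0F0F0F0F0}  # placeholder entry
    grid = [[200 if (r + c) % 2 == 0 else 30 for c in range(8)] for r in range(8)]
    h = average_hash(grid)
    print(f"hash={h:016x} flagged={matches_known_database(h, known_hashes)}")
```

The key design point this sketch mirrors is that matching happens against hashes of already-known material; anything outside that database has to be caught by separate AI classifiers, which is where ambiguous cases like medical photos become a problem.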

But as the NYT report rightly states, “not all photos of naked children are pornographic, exploitative or abusive.” There is a lot of context involved, and ban-trigger-happy tech companies don’t help with the problem. That said, CSAM detection is a powerful tool and has likely saved thousands of children from potential abuse. The problem lies with the Silicon Valley giants, whose review processes are highly automated and dysfunctional; even after proving your innocence, it’s hard to get your account restored.

As for Mark, he faced exactly that. In December 2021 he received an envelope from the San Francisco Police Department containing the details of the investigation, including all the data Google had provided to law enforcement.

The investigator, after reviewing all the data, exonerated Mark of any wrongdoing, and the case was closed. But that wasn’t enough for Google: Mark still couldn’t access any of his Google accounts. He even considered suing Google at one point, but decided it wasn’t worth the $7,000 in legal fees.

ABOUT THE AUTHOR

Indranil Chowdhury


Indranil is a Med school student and an avid gamer. He puts his absolute faith in Lord Gaben and loves to write. Crazy about the Witcher lore, he plays soccer too. When not playing games or writing, you can find him on 9gag spreading the Pcmasterrace propaganda.