Google flagged a dad as a criminal because he took photos of his toddler for the doctor

After photos of his son were uploaded to Google Drive, Mark’s phone buzzed with a notification: his account had been disabled because of “harmful content” that might have violated Google’s policies or even been illegal.

Kashmir Hill of The New York Times reported on why Google flagged him as a criminal.

For reasons we will get into later, his wife scheduled an emergency consultation with the doctor for the next morning. When she asked the nurse how they should send photos, she was told to simply send them through email.

Mark’s wife grabbed her husband’s phone and sent closeups of their son’s groin area to her iPhone so she could upload them.

When Mark was snared in an algorithmic net designed to catch people exchanging child sexual abuse material, he lost more than 10 years’ worth of contacts, emails and photos. He also became the target of a police investigation.

Technology companies have been pressured to examine what passes through their servers in order to combat sexual abuse imagery online. They are cooperating, but that scanning can reach into private archives, and in at least two cases uncovered by The New York Times it has cast innocent behavior in a sinister light.

Jon Callas, a technologist at the Electronic Frontier Foundation, said he saw these cases as canaries in the coal mine.

After signing up for a Gmail account in the mid-aughts, Mark, who is in his 40s, came to rely on Google heavily. His Android smartphone backed up his photos and videos to Google’s cloud, and he had a phone plan with Google Fi.

Mark’s account was disabled two days after he took the photos of his son, which had been flagged as child sexual abuse material.

Mark then discovered the domino effect of Google’s rejection: he lost access to his emails, contacts and documents, and when Google Fi shut down his plan, he lost his phone number too, which meant he could no longer receive the security codes needed to sign in to his other internet accounts.

Google has said it is committed to preventing the spread of child sexual abuse material on its platforms.

The company did not give a reason for its decision in Mark’s case.

Sharing sensitive images had once been difficult for Mark, who had to go through the time-consuming process of asking his doctor to take pictures and then email them to him. By contrast, Cassio, a father in Texas caught up in a similar case, could use Google Photos on his Android to take photos, back them up automatically, and send them to his wife via chat.

How Image Recognition Is Creating Unintended Risks

PhotoDNA, a Microsoft tool that converts images into unique digital fingerprints so they can be matched against databases of known illegal material, was the first widespread solution to online child pornography, and it has been used to identify users circulating such content. The system has proven effective: over 200,000 unique PhotoDNA codes have been used in the screening of some 150 billion images.
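PhotoDNA’s actual algorithm is proprietary, but the general idea of matching images against a database of known-bad fingerprints can be sketched. The Python below is purely illustrative: it substitutes a simple “average hash” for PhotoDNA’s real method, and the `KNOWN_HASHES` blocklist is a hypothetical stand-in.

```python
# Illustrative sketch only: PhotoDNA is proprietary, so a simple
# "average hash" stands in to show perceptual hashing plus blocklist
# matching. Requires Pillow (pip install Pillow).
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Shrink to a size x size grayscale thumbnail, then set one bit
    per pixel depending on whether it is brighter than the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

# Hypothetical blocklist of fingerprints of known illegal images.
KNOWN_HASHES = {0x8F3C00FF00FF3C8F}

def is_known_match(path: str, max_distance: int = 5) -> bool:
    """Flag an image if its fingerprint is close to any known one."""
    h = average_hash(path)
    return any(hamming(h, k) <= max_distance for k in KNOWN_HASHES)
```

The key property is that, unlike a cryptographic hash, small edits to an image flip only a few bits of the fingerprint, so resized or re-encoded copies of a known image still match.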

In 2018, Google developed an AI tool that could recognize previously unseen exploitative images of children. That meant it could flag not just well-known images, but images of unknown victims who could potentially be rescued by authorities. Google made this technology available to other companies, including Facebook.

When a user backs up photos from their phone to Google’s cloud, the company scans them, and any image flagged by the AI is passed to one of Google’s human content moderators. If a moderator confirms that a photo meets the federal definition of child sexual abuse material, Google locks the user’s account and, as required by law, files a report with the CyberTipline at the National Center for Missing and Exploited Children.
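Based only on the sequence described above, the flow might be wired together roughly as follows. This is a minimal sketch under stated assumptions: every class and function name here is invented for illustration, and none corresponds to a real Google API.

```python
# Hypothetical sketch of the pipeline described above: scan on backup,
# AI flag, human review, then account lock and mandatory reporting.
from dataclasses import dataclass
from enum import Enum, auto

class Verdict(Enum):
    CLEARED = auto()
    CONFIRMED_CSAM = auto()

@dataclass
class Photo:
    user_id: str
    blob: bytes

def classifier_flags(photo: Photo) -> bool:
    """Stand-in for the AI model that scores images as they back up."""
    return False  # placeholder; a real system would run a trained model

def human_review(photo: Photo) -> Verdict:
    """Stand-in for the human moderation queue."""
    return Verdict.CLEARED  # placeholder decision

def lock_account(user_id: str) -> None:
    print(f"account {user_id} disabled")

def report_to_cybertipline(photo: Photo) -> None:
    print(f"CyberTipline report filed for user {photo.user_id}")

def handle_backup(photo: Photo) -> None:
    if not classifier_flags(photo):
        return  # nothing flagged; the photo simply backs up
    if human_review(photo) is Verdict.CONFIRMED_CSAM:
        lock_account(photo.user_id)        # account locked
        report_to_cybertipline(photo)      # report required by law
```

Notably, even in this flow the human review happens before any enforcement, yet as Mark’s case shows, a reviewer can still confirm a flag without the medical context that would explain the image.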

In 2021, the CyberTipline reported that it had alerted authorities to “over 4,260 potential new child victims.”

Mark received a letter from the San Francisco Police Department in December informing him that he had been investigated. He got copies of the search warrants served on Google and his internet service provider, along with a list of what investigators had requested: all of the data in his Google account, including internet searches, location history, messages, and anything else stored on the company’s servers.

The pictures of his son had been taken in February, and the search, which related to child exploitation videos, took place that same week.

The case was closed when the investigator, Nicholas Hillard, could not get in touch with Mark, whose Google phone number and email no longer worked.

“I determined that the incident did not meet the elements of a crime and that no crime occurred,” Hillard wrote in his report.

Mark appealed his case with Google again, but the decision did not change.

Cassio was cleared of wrongdoing after showing the detective his communications with the pediatrician, but he could not get his old Google account back, even though he had been an active user for years and had paid for some of Google’s services.

I can see why they flagged the explicit photos of his son, but the context matters: the photos were taken by a parent to document a child’s medical condition for a doctor.

“In an age of telemedicine and COVID it has been necessary for parents to take photos of their children in order to get a diagnosis,” said Claire Lilley, Google’s head of child safety operations. The company has consulted pediatricians so that its human reviewers understand possible conditions that might appear in photographs taken for medical reasons.

Google has said that Cassio’s use of Hangouts to send the pictures to his wife was itself a violation of its policies.

In its review of Mark’s account, Google had also flagged an earlier video as harmful content.

Mark did not remember this video and no longer had access to it, but he said that it sounded like a moment worth capturing.

“I can imagine it,” Mark said. “It was a beautiful morning and I wanted to capture the moment. It all could have been avoided if only we slept with our pajamas on.”

A Google spokesperson said the company stands by its decisions, even though law enforcement cleared the two men.
