Google scans personal photos to aid in the investigation of child abuse.

Internet companies have scanned user content for some time, but we are now being told that every one of our online habits should be monitored, on the theory that this will let law enforcement more effectively investigate crimes involving child sexual abuse material (CSAM).

The public is given little information about how this watching by tech companies and government agencies actually works. What we do know is that it produces mistakes, and those mistakes can lead to false accusations.

Technology companies should work on better methods to combat CSAM. We have proposed some: building better reporting tools, privacy-respecting warning messages, and metadata analysis.

An article published yesterday in the New York Times reports on two of Google's false accusations and the police follow-up they triggered. It also highlights Google's refusal to correct any of the damage done by its erroneous scans, and the failure of its human review process. This type of scanning is increasingly ubiquitous in the products we all use, and governments around the world want to extend its reach even further, into our most private, encrypted conversations. The article is especially disturbing not just for the harm it describes to the two users Google falsely accused, but as a warning of the many more such mistakes likely to come.

Google’s AI failed to spot what was and wasn’t child abuse.

Google's algorithms falsely flagged parents' photos of their children's genital infections, taken at the request of medical professionals, as child abuse imagery.

Without informing either parent, Google reported them to the authorities, prompting local police departments to investigate.

Google's process in these situations is for human reviewers to examine the flagged image, along with other photos in the account of the person who was flagged.

Google erroneously suspended Mark's account. The SFPD determined that no crime had been committed, and Mark contacted Google for help getting his account back; Google refused to hear his appeal or reinstate the account, even after he presented evidence from the SFPD.

How a Google mistake led to false accusations of child abuse

Google has the right to decide whom it hosts on its platforms. In these cases, however, its algorithms led to innocent people being investigated by the police, and Google destroyed users' email accounts without warning or any semblance of due process. The repercussions of such errors cannot be ignored.

Google has almost certainly wrongly accused many more people of child abuse than these two; there are likely hundreds, or even thousands, of others. Given the massive scope of the content being scanned, these mistakes may keep happening for a long time to come.

The two fathers were wrongly accused by Google within one day of each other in February 2021. The mistake may have come from Google's AI, or from the humans in its review process; either way, two known errors so close together suggest the problem was especially prevalent in that period.

Google's scans can hurt innocent people in other ways as well. Once police are handed someone's data, they may find evidence of some unrelated offense and arrest them for that instead. And Google punished Mark and Cassio even though investigators found no wrongdoing.

Scanning like this puts people who are already vulnerable, such as low-income parents who have reason to fear government agencies, at risk of having their children taken away over a false accusation.

No Comment from Google and Governments on the Scans

Social media sites such as LinkedIn use two different methods to identify this content. One is hash matching, in which tools like Microsoft's PhotoDNA compare uploaded images against a database of known, previously verified abuse images. The other is machine-learning classifiers, which try to judge whether a never-before-seen image is abusive, and which can be wrong in exactly the way these cases show.
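
To make the distinction concrete, here is a minimal sketch of the hash-matching approach in Python. PhotoDNA itself is proprietary and uses a perceptual hash that survives resizing and re-encoding; the exact SHA-256 comparison and the blocklist below are illustrative stand-ins, not the real system.

```python
# Minimal sketch of hash-based matching, for illustration only.
# Real tools like PhotoDNA use a proprietary *perceptual* hash that
# tolerates resizing and re-encoding; exact SHA-256 is a stand-in here.
import hashlib
from pathlib import Path

# Hypothetical blocklist: hashes of known, human-verified abuse images,
# as distributed to companies by a clearinghouse.
KNOWN_HASHES: set[str] = set()  # real lists contain millions of entries

def file_hash(path: Path) -> str:
    """Hash the raw bytes of an uploaded file."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def matches_known_image(path: Path) -> bool:
    """True only for an exact match against previously verified images."""
    return file_hash(path) in KNOWN_HASHES
```

The crucial property is that hash matching can only recognize images humans have already verified. A classifier, by contrast, makes a statistical guess about brand-new photos, and that guess is what failed in Mark's and Cassio's cases.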

Public outcry led to the scuttling of Apple's proposed on-device scanner, which would have searched users' photos for matches to known CSAM. This year, Congress has advanced a bill that would open the door for states to compel companies to use CSAM scanners. The EU is considering a CSAM detection law that goes further still, analyzing text messages for "grooming" in order to judge the potential for future abuse.

The EU commissioner championing the law claims the proposed scanners will be more than 90% accurate, and even that they will be able to spot "grooming" before any human review.

A claimed accuracy of 90% means that as many as one in ten results is wrong. Applied to the billions of messages sent in the EU, that would produce millions of incorrectly flagged messages. This avalanche of false flags would be an intolerable disaster even in wealthy democracies with the rule of law, and free speech rights would be the first casualty in any country where such systems are installed. Proponents argue that these systems save lives and are worth the collateral damage, but they are wrongly assuming the number of errors will be low.
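
Some rough arithmetic shows the scale of the problem. The message volume, prevalence, and error rates below are illustrative assumptions rather than official figures; the point is the order of magnitude, and it is the classic base-rate problem.

```python
# Back-of-the-envelope: what a "90% accurate" scanner means at scale.
# All inputs are illustrative assumptions, not official figures.
messages_scanned = 1_000_000_000   # assumed messages scanned per day
prevalence = 1e-6                  # assumed fraction that is truly abusive
true_positive_rate = 0.90          # the claimed accuracy on abusive content
false_positive_rate = 0.10         # assumed error rate on innocent content

abusive = messages_scanned * prevalence
innocent = messages_scanned - abusive

correct_flags = abusive * true_positive_rate    # ~900 per day
false_flags = innocent * false_positive_rate    # ~100 million per day

print(f"Correct flags per day: {correct_flags:,.0f}")
print(f"False flags per day:   {false_flags:,.0f}")
print(f"Share of flags that are wrong: "
      f"{false_flags / (false_flags + correct_flags):.4%}")
```

Because innocent messages outnumber abusive ones so overwhelmingly, even a much lower false positive rate would still make the vast majority of flags false.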

Companies like Google and Apple already scan our digital spaces constantly, and governments want them to reach into the most private areas of our phones as well. That puts these companies in the position of second-guessing not only our family lives, but even the conclusions of police investigations. And when in doubt, as these cases show, they rule against their own users.

The Solution: Restrict the Scanning of Our Private Photos

The Electronic Frontier Foundation has fought for over 30 years against spying on people's digital lives. We believe that when police want to look at private messages or files, they should follow the Fourth Amendment and get a warrant.

As for private companies, they should work to limit their need, and their ability, to trawl through our private content. Private conversations with friends, family, or medical professionals should be protected by end-to-end encryption. In an end-to-end encrypted system, the service provider cannot look at a message even if it wants to. Companies should also commit to encrypted backups, something EFF has long called for.
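
To show what that guarantee looks like in practice, here is a minimal end-to-end encryption sketch using the PyNaCl library (Python bindings to libsodium). It is a toy, not a messenger; real systems add key verification, forward secrecy, and much more. The structural point is that the relay in the middle holds only ciphertext and no keys.

```python
# Minimal end-to-end encryption sketch using PyNaCl (libsodium bindings).
# Toy example: real messengers add key verification, ratcheting, etc.
from nacl.public import Box, PrivateKey

# Each party generates a keypair on their own device; private keys
# never leave those devices.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts with her private key and Bob's public key.
sender_box = Box(alice_key, bob_key.public_key)
ciphertext = sender_box.encrypt(b"the doctor says it is just an infection")

# The service provider relays and stores only `ciphertext`: bytes it
# has no key to open, and therefore nothing it can scan.

# Bob decrypts on his device with his private key and Alice's public key.
receiver_box = Box(bob_key, alice_key.public_key)
assert receiver_box.decrypt(ciphertext) == b"the doctor says it is just an infection"
```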

The answer to a better internet is not ever-more-powerful scanning software, but strong encryption. Systems that monitor messages, on social media and everywhere else, cannot coexist with human rights. Law enforcement and elected leaders should protect privacy by supporting encryption rather than breaking it down.
