
Meta (the owners of Facebook) have outlined plans to encrypt all Facebook Messenger conversations by default next year. This has not gone down well with some governments around the world, including the UK. In this analysis blog, we take a look at some of the challenges in this highly complex issue.

There are a few high-profile examples on this theme at the moment. To cover the specific case here first, Meta plan to encrypt all Messenger conversations on Facebook so that only the devices at either end of a conversation can read them. While the conversation data is stored on Meta's servers, it's stored in a form that cannot be read without the decryption key. Only the devices at either end hold that key, so Meta (and other third parties, including governments) cannot read those messages even if they wanted to. This is how other messaging platforms (including the Meta-owned WhatsApp) already work.
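
To make that idea concrete, here is a minimal sketch of the end-to-end principle in Python, assuming the PyNaCl library. The names and message are invented for illustration, and this is not Meta's actual implementation; the point it demonstrates is simply that the private keys live only on the two devices, so the service in the middle only ever handles ciphertext it cannot decrypt.

```python
# A minimal sketch of end-to-end encryption, assuming the PyNaCl library
# (pip install pynacl). Alice, Bob and the message are invented for
# illustration; this is not Meta's actual implementation.
from nacl.public import PrivateKey, Box

# Each device generates its own keypair; the private half never leaves the device.
alice_private = PrivateKey.generate()
bob_private = PrivateKey.generate()

# Only the public halves are shared (for example via the service's key directory).
alice_public = alice_private.public_key
bob_public = bob_private.public_key

# Alice encrypts for Bob using her private key and Bob's public key.
sending_box = Box(alice_private, bob_public)
ciphertext = sending_box.encrypt(b"See you at 7?")

# The server can store and forward `ciphertext`, but without either private key
# it has no way to recover the plaintext.

# Bob decrypts with his private key and Alice's public key.
receiving_box = Box(bob_private, alice_public)
print(receiving_box.decrypt(ciphertext))  # b'See you at 7?'
```

The specific library doesn't matter here; what matters is that the decryption keys never pass through the provider's servers at all.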

So what's the problem? Well, according to the UK Home Secretary, it would be a “grotesque betrayal” if the company didn't consider issues of child safety while introducing end-to-end encryption (E2EE). In other words, it will make it much harder for the authorities to catch anyone using this platform for the dissemination of child sexual abuse material (CSAM), as their messages will now be unreadable to those authorities.

Clearly the solution, then, is to design some sort of back door into the encryption so that trusted entities can gain access to the data in exceptional circumstances, right? Unfortunately, this isn't really an option. Encryption is either total or effectively useless. If there's a way for a third party you trust to access it, then there's a way for a third party you don't trust to access it.

The potential to share clearly harmful and illegal content, as mentioned above, is often cited as the reason not to increase privacy for the average end user. Certainly, were it just cases like this and other very obvious issues like terrorism that were in question here, there would be no discussion to be had. Everyone would agree that if there were a way to somehow flag just those things and alert the authorities, that should be done.

However, the questions here become where the line is drawn and who has the authority to search your messages. The examples above are clear cut, but there are plenty of authoritarian regimes around the world that would persecute people for discussing all sorts of things that most of the world would disagree with them on. Who gets to decide which keywords, images and videos local authorities are allowed to scan for if they have access to your messages?

Even when measures are put in place to automatically scan for clearly objectionable material, things are not as simple as those introducing these protections make it sound. A year ago, we wrote another blog relating to Apple's plans to start proactively scanning users' phones for CSAM images. The plan was actually put on ice shortly after for some of the reasons touched on in that article.

While these sorts of protections seem fine in theory, real life is often more nuanced than an algorithm can handle, as was demonstrated recently.

Google recently flagged a user as a criminal due to what it deemed to be CSAM images automatically uploaded from his phone to his Google cloud storage. However, the photos of his infant son had been taken for medical reasons to send to a doctor. Google's automated software scanned these images, and his details were ultimately handed to the police to investigate. His whole Google account was suspended, and the suspension was upheld by Google on review. This meant he lost access to all his emails, contacts, and even his phone number. Even once the police had investigated and found no crime had been committed, Google still refused to restore his account.

So, while campaigners on both sides of this argument often suggest the solutions are simple, unfortunately, this is rarely the case. If you make one platform untenable for people breaking the law, they'll just move to another one. If you automate a system to scan for photos of child abuse, innocent people will get caught in the same net. Of course, some may well argue that if these automated processes catch a large enough number of real examples of the material they're scanning for, a couple of innocent people getting accidentally implicated in the process is a price worth paying. The battle for where to draw the lines here will continue to rage for some time.