Facebook on Friday said it's extending end-to-end encryption (E2EE) for voice and video calls in Messenger, along with testing a new opt-in setting that will turn on end-to-end encryption for Instagram DMs.

"The content of your messages and calls in an end-to-end encrypted conversation is protected from the moment it leaves your device to the moment it reaches the receiver's device," Messenger's Ruth Kricheli said in a post. "This means that nobody else, including Facebook, can see or listen to what's sent or said. Keep in mind, you can report an end-to-end encrypted message to us if something's wrong."

The social media behemoth said E2EE is becoming the industry standard for improved privacy and security.
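At its core, end-to-end encryption means the cryptographic keys live only on the endpoints, so any server relaying the conversation sees nothing but ciphertext. The sketch below illustrates that idea, assuming Python's "cryptography" package, with an X25519 key exchange and ChaCha20-Poly1305 encryption. It is a minimal illustration only, not Messenger's actual protocol, which builds on the Signal Protocol and adds ratcheting for forward secrecy.

```python
# Minimal E2EE sketch: keys are generated on the endpoints, and only
# public keys and ciphertext ever cross the network. Requires the
# "cryptography" package (pip install cryptography). This is NOT
# Messenger's real protocol; it only demonstrates the core property
# that a relaying server cannot read the content.
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each device generates its own key pair; private keys never leave it.
alice_priv = X25519PrivateKey.generate()
bob_priv = X25519PrivateKey.generate()

def derive_key(own_private, peer_public) -> bytes:
    """Derive a shared 256-bit symmetric key from the X25519 secret."""
    shared = own_private.exchange(peer_public)
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"e2ee-demo").derive(shared)

# Both sides derive the same key without ever sending it to a server.
alice_key = derive_key(alice_priv, bob_priv.public_key())
bob_key = derive_key(bob_priv, alice_priv.public_key())
assert alice_key == bob_key

# Alice encrypts on her device; only ciphertext transits the network.
nonce = os.urandom(12)
ciphertext = ChaCha20Poly1305(alice_key).encrypt(nonce, b"hello, bob", None)

# Bob decrypts on his device. Anyone in the middle sees only ciphertext.
assert ChaCha20Poly1305(bob_key).decrypt(nonce, ciphertext, None) == b"hello, bob"
```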

It's worth noting that the company's flagship messaging service gained support for E2EE in text chats in 2016, when it added a "secret conversation" option to its app. Communications on its sister platform WhatsApp became fully encrypted the same year, following the integration of the Signal Protocol into the application.

The company is also expected to kick off a limited test in certain countries that lets users opt in to end-to-end encrypted messages and calls for one-on-one conversations on Instagram.

The moves are part of Facebook's pivot to a privacy-focused communications platform, announced in March 2019, with CEO Mark Zuckerberg stating that the "future of communication will increasingly shift to private, encrypted services where people can be confident what they say to each other stays secure and their messages and content won't stick around forever."

The changes have since set off concerns that full encryption could create digital hiding places for perpetrators, particularly since Facebook accounts for over 90% of the illicit and child sexual abuse material (CSAM) flagged by tech companies. Encryption also poses a significant challenge for the company as it tries to balance preventing its platforms from being used for criminal or abusive activity with upholding user privacy.

The development also comes a week after Apple announced plans to scan users' photo libraries for CSAM as part of a sweeping child safety initiative. The proposal has been subject to ample pushback from users, security researchers, the Electronic Frontier Foundation (EFF), and even Apple employees, over concerns that it could be ripe for further abuse or create new risks, and that "even a thoroughly documented, carefully thought-out, and narrowly-scoped backdoor is still a backdoor."

The iPhone maker, however, has defended its system, saying it intends to incorporate further protections, including "multiple levels of auditability," to safeguard the technology from being abused by governments or other third parties, and that it will reject any government demands to repurpose the technology for surveillance.

"If and only if you meet a threshold of something on the order of 30 known child pornographic images matching, only then does Apple know anything about your account and know anything about those images, and at that point, only knows about those images, not about any of your other images," Apple's senior vice president of software engineering, Craig Federighi, said in an interview with the Wall Street Journal.

"This isn't doing some analysis for did you have a picture of your child in the bathtub? Or, for that matter, did you have a picture of some pornography of any other sort? This is literally only matching on the exact fingerprints of specific known child pornographic images," Federighi explained.
