Although initial investigation suggests that no encryption was used to plan the terrible attacks in Paris, the attacks have given new life to the policy conversation around encryption in commercial products. Many security officials would like to see backdoors built into commercial products that would allow security agencies to monitor communication. Technology companies and privacy advocates disagree.
Many security officials argue that encryption makes it difficult, if not impossible, to monitor the online conversations that could indicate an attack was about to take place. For instance, as reported by Ars Technica, immediately following the attacks former CIA Deputy Director Michael Morell said:
I think what we’re going to learn is that these guys are communicating via these encrypted apps, this commercial encryption, which is very difficult or nearly impossible for governments to break, and the producers of which don’t produce the keys necessary for law enforcement to read the encrypted messages.
Businesses, such as Microsoft, disagree. Along with many other companies, Microsoft signed a letter arguing for reform in government surveillance methods following the Snowden revelations. Apple also signed that letter, and in October 2015, its CEO Tim Cook argued against creating backdoors into products that would allow the U.S. government to monitor communication.
The issue of encryption and terrorism is complex. Like the debates over the 2015 Cybersecurity Information Sharing Act, it highlights the inherent tension between security and privacy at the heart of cybersecurity. Indeed, privacy advocates in the U.S. point out that the wholesale gathering of data that often goes hand in hand with backdoors is hard to reconcile with the Fourth Amendment to the U.S. Constitution.
Encryption does not just protect the privacy of ordinary people; it also protects activists, including those organizing in repressive contexts. Even setting aside the privacy concerns of ordinary citizens, creating backdoors into products for the U.S. government leaves vulnerable those individuals who have valid reasons to hide their identity.
PlayStation 4 as Scapegoat
Immediately in the wake of the Paris attacks, some British news outlets (The Daily Mail, The Mirror, The Telegraph, and The Express) reported that the attackers had used the PlayStation 4's (PS4) communication tools to plan the attack. The claim was based on an International Business Times article speculating that PS4s could have been used for communication, as one was found in the law enforcement raids following the attacks. PS4s then became an example of encryption's potential to prevent security agencies from discovering terrorist plans in advance.
Many of these early articles claimed that conversations using PS4s were impossible to monitor because the communication tools are encrypted. Unfortunately, this is not true. Conversations on the PS4 network, both text and voice, may be difficult to monitor because of their ephemeral nature, but they are not encrypted. As reported by Ars Technica and Eurogamer, the claim seems to trace back to unsubstantiated remarks by Belgium's deputy prime minister Jan Jambon, made before the Paris attacks, about the potential for PlayStations to be used to plan attacks.
The idea that PS4s could have been used to plan the attack made for a sexy headline and was likely given legs by the Snowden-era revelations of NSA surveillance in World of Warcraft, a popular online video game, and on Xbox Live, one of the most used video game platforms in the world. When this information was released, both Microsoft and Blizzard stressed that any surveillance would have been done without their knowledge or permission.
However, if PS4s were not used, what was? European officials have said that some form of encrypted communication was likely involved in planning the attack. Security expert Peter Sommer argues that ISIS is unlikely to be using any mainstream communication tools. In line with this, Ars Technica reports that ISIS is known to use Telegram, a messaging service that lets users encrypt messages and set self-destruct timers on them. Ars Technica has also reported in the past on Al-Qaeda's use of steganography to hide unencrypted secret documents inside a pornographic video.
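To make the steganography mention concrete: the general idea is to hide data in the low-order bits of an innocuous carrier file, where small changes are imperceptible. The sketch below is a toy illustration of least-significant-bit (LSB) embedding in raw bytes; the actual files and methods in the Al-Qaeda case are not public, and real tools operate on image or video formats rather than bare byte strings.

```python
# Toy LSB steganography: hide a secret message in the lowest bit of each
# carrier byte. Illustrative only, not any group's actual method.

def embed(cover: bytes, secret: bytes) -> bytes:
    """Hide `secret` in the least-significant bits of `cover`."""
    bits = [(byte >> i) & 1 for byte in secret for i in range(7, -1, -1)]
    if len(bits) > len(cover):
        raise ValueError("cover too small for secret")
    out = bytearray(cover)
    for idx, bit in enumerate(bits):
        out[idx] = (out[idx] & 0xFE) | bit  # overwrite only the lowest bit
    return bytes(out)

def extract(stego: bytes, length: int) -> bytes:
    """Recover `length` hidden bytes from the LSBs of `stego`."""
    bits = [b & 1 for b in stego[: length * 8]]
    return bytes(
        sum(bit << (7 - i) for i, bit in enumerate(bits[n * 8 : n * 8 + 8]))
        for n in range(length)
    )

cover = bytes(range(256))            # stand-in for image/video pixel data
stego = embed(cover, b"meet at 9")
assert extract(stego, len(b"meet at 9")) == b"meet at 9"
# The carrier barely changes: each byte differs by at most 1.
assert all(abs(a - b) <= 1 for a, b in zip(cover, stego))
```

Note that the hidden message here is itself unencrypted, as in the reported case: steganography conceals that a message exists at all, rather than scrambling its contents.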
In the case of the Paris attacks, the attackers seem to have been using unencrypted SMS on their smartphones. If this proves true, then existing surveillance methods could have detected the planning.
The Problems with Backdoors
Cybersecurity experts, such as Kim Zetter, have outlined many of the problems with security agencies casting encryption as the obstacle to tracking terrorists. Zetter argues that mandatory backdoors do nothing against home-brewed encryption or products created outside the U.S., and that there are other ways to get information. She points out that encryption does not obscure the metadata that let security agencies know who is communicating with whom, and that building backdoors makes everyone vulnerable. She also notes that the sticking point for security organizations has not been a lack of access to the communications of potential terrorists, but rather not knowing who was a potential threat in the first place; in the Paris case, Turkish authorities warned French authorities twice about one of the attackers.
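Zetter's metadata point can be sketched in a few lines: even when a message body is end-to-end encrypted, the routing envelope (who is talking to whom, and when) typically stays readable in transit. The example below uses a toy XOR "cipher" purely as a stand-in for real encryption, and the field names are illustrative, not any particular protocol's.

```python
# Sketch: encrypted payload, plaintext envelope. The XOR cipher is a toy
# placeholder for real cryptography; the Envelope fields are hypothetical.

from dataclasses import dataclass

def toy_encrypt(data: bytes, key: bytes) -> bytes:
    # Repeating-key XOR: applying it twice with the same key decrypts.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

@dataclass
class Envelope:
    sender: str      # metadata: visible to the network
    recipient: str   # metadata: visible to the network
    timestamp: int   # metadata: visible to the network
    body: bytes      # ciphertext: opaque without the key

key = b"shared-secret"
msg = Envelope("alice@example.org", "bob@example.org", 1447459200,
               toy_encrypt(b"the plan", key))

# An observer on the wire learns the social graph without breaking any crypto:
assert (msg.sender, msg.recipient) == ("alice@example.org", "bob@example.org")
# ...but not the content:
assert msg.body != b"the plan"
assert toy_encrypt(msg.body, key) == b"the plan"  # XOR is its own inverse
```

This is why traffic analysis remains possible against encrypted services: the network still has to know where to deliver each message.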
There are two additional issues to consider when weighing backdoors. First, the tools that terrorists may use to communicate are the same tools that ordinary people use. Whether ISIS is using Facebook or Telegram, the major issue for security officials is that there are so many communication tools that monitoring all of them is practically impossible. To combat this reality, security agencies would like to be able to listen in on any conversation. The major issue for everyday citizens, then, is that attempts to monitor all communication for the purposes of security entail a significant loss of privacy. For some, this trade-off between privacy and security may be acceptable; for others, it is not.
Second, the tools that terrorists use to communicate and hide that communication may also be tools that activists use in repressive regimes—or even in places that aren’t repressive, where activists are exercising their legal rights. Therefore, backdoors into communication tools create the potential for vulnerable activists to be revealed.
It is unsurprising that terrorists would learn to effectively use digital tools. Ethan Zuckerman’s “Cute Cat Theory” posits that every social media tool that works well will end up being used by activists. He argues that once activists are using these tools they become the target of regimes and actors wanting to stop the activists. He asserts that shutting down social media—as the Tunisian government did when it blocked the popular video site DailyMotion in 2007—didn’t just impact activists, but had widespread impact on normal users, who then learned to circumvent the ban.
In line with Zuckerman’s theory, Sony responded to the claims that PS4s were used in planning the attacks by framing the issue as one based on the double-edged sword of modern day information and communication technologies:
PlayStation 4 allows for communication amongst friends and fellow gamers and, in common with all modern connected devices, this has the potential to be abused. However, we take our responsibilities to protect our users extremely seriously and we urge our users and partners to report activities that may be offensive, suspicious or illegal. When we identify or are notified of such conduct, we are committed to taking appropriate actions in conjunction with the appropriate authorities and will continue to do so.
It stands to reason that if activists and ordinary people figure out how to use common commercial products to communicate out of the sight of governments, terrorists and criminals will as well. This leaves us collectively with the same dilemma that characterizes many cybersecurity issues—what privacy are we willing to give up for a theoretical gain in security and who are we willing to put at risk for that same security?