SSL/TLS is (probably) the world’s most-used secure protocol, used almost universally for securing web connections and much more besides. Yet SSL/TLS has been coming under sustained pressure from attacks over the last few years.

In this post, Professor Kenny Paterson of Royal Holloway, University of London reports on the increasing focus on SSL/TLS in the research community, and introduces some new research re-evaluating the security of the RC4 cipher in SSL/TLS.



SSL/TLS is a key part of the security infrastructure for the Internet. It’s embedded in every web browser and encountered by most web users as the mysterious browser lock symbol. SSL/TLS is formed from a suite of protocols, including (most importantly) the Handshake Protocol and the Record Protocol. The former uses primarily asymmetric cryptographic methods to perform authentication and to set up secure session keys between a client and a server; the latter then uses efficient symmetric cryptographic methods to encrypt and integrity-protect all the data transferred between the client and server in the ensuing session.

However, SSL/TLS includes a myriad of options that can, at first, seem impenetrable to newcomers: there are different versions of SSL and TLS, all with subtle differences; there are over 300 different cipher suites defining exactly which cryptography gets used in the protocol; there are options for renegotiating the ciphersuite and for quickly resuming existing sessions; and there are dozens of extensions to the basic protocol suite (including, for example, the now-notorious Heartbeat protocol that was implicated in the Heartbleed vulnerability). While the SSL/TLS protocol is defined in a sequence of IETF specifications, there is also a rich collection of different implementations, some buggier than others. And then there is the whole CA and PKI eco-system that exists to enable SSL/TLS authentication to be bootstrapped.
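For a concrete feel for this configurability, here is a minimal sketch (not from the original post; it assumes Python's standard ssl module and a reasonably recent OpenSSL) that lists some of the cipher suites a default client context enables locally:

```python
import ssl

# List some of the TLS cipher suites the local OpenSSL build enables by
# default in a client context. Each suite name packs the key exchange,
# authentication, cipher, and MAC/PRF choices into a single string.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
for suite in ctx.get_ciphers()[:5]:
    print(suite["name"], "-", suite["protocol"])
print(len(ctx.get_ciphers()), "suites enabled locally")
```

The exact list depends on the OpenSSL version and its compile-time options, which is part of why real-world SSL/TLS deployments vary so much.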

BEAST opens the floodgates for SSL/TLS attacks

For around a decade, up until 2011, research on SSL/TLS was sporadic – from time to time, there would be a research paper providing a formal analysis, or a bug found in an implementation. In 2011, things started to change. Duong and Rizzo announced the BEAST attack. This took a known theoretical weakness in the way in which SSL 3.0 and TLS 1.0 handled initialisation vectors for CBC-mode encryption and turned it into a practically demonstrable attack targeting HTTP secure cookies. Cookies are a basic security mechanism used in managing secure sessions between web browsers and servers; that an attack could recover them was big news. Fortunately, TLS 1.1 and 1.2 already existed and had countermeasures against BEAST. Unfortunately, no mainstream browser and fewer than 5% of servers supported TLS 1.1 or 1.2 at the point when BEAST was unveiled.
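To see why predictable IVs matter, here is a minimal sketch of the chosen-plaintext trick underlying BEAST, with a toy hash-based block function standing in for AES (all keys and values are illustrative). In SSL 3.0 and TLS 1.0, the IV for the next record is the last ciphertext block of the previous one, so an attacker who can inject chosen plaintext can confirm a guess for an earlier secret block:

```python
import hashlib

def block_encrypt(key, block):
    # Toy one-way 16-byte block function standing in for AES encryption;
    # CBC encryption only needs the forward direction, and the IV logic
    # under attack is identical.
    return hashlib.sha256(key + block).digest()[:16]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def cbc_encrypt_block(key, iv, block):
    return block_encrypt(key, xor(iv, block))

key = b"server-side-key!"                 # illustrative values throughout
secret = b"secret cookie!!!"              # one 16-byte block to recover

# Record 1: the victim encrypts the secret. In SSL 3.0 / TLS 1.0 the IV for
# the next record is the last ciphertext block of this one -- predictable.
iv1 = bytes(16)
c1 = cbc_encrypt_block(key, iv1, secret)
iv2 = c1                                  # attacker sees this before record 2

# The attacker asks the victim to encrypt guess XOR iv1 XOR iv2; the
# resulting ciphertext block equals c1 exactly when the guess is right.
candidates = (b"wrong guess 0000", b"secret cookie!!!")
confirmed = [g for g in candidates
             if cbc_encrypt_block(key, iv2, xor(xor(g, iv1), iv2)) == c1]
print(confirmed)                          # only the correct guess survives
```

BEAST's real contribution was turning this block-at-a-time guess confirmation into byte-at-a-time cookie recovery inside a browser; TLS 1.1 closed the hole by giving each record a fresh random IV.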

Duong and Rizzo would be the first to admit they were not cryptographers – rather, they came from the hacking community. But this gave them a unique edge over standard-issue academic cryptographers: they understood how SSL/TLS gets used in browsers and how browsers handle cookies and JavaScript. They put all that knowledge to use in their attack, notably producing a slick YouTube video showing the step-by-step execution of their attack against PayPal.

The hits keep coming

BEAST opened the floodgates for SSL/TLS attacks. Duong and Rizzo themselves followed up with CRIME in 2012, an attack against the optional use of data compression in SSL/TLS. The academic cryptographers (like me) had been paying attention, learning from BEAST and CRIME, and honing their own weapons. In 2013, SSL/TLS had its annus horribilis: this was the year of Lucky 13 and the RC4 attacks.

Lucky 13 showed that an old padding oracle attack due to Vaudenay had not been properly fixed in subsequent patches to the protocol specifications, leaving all CBC-mode cipher suites still vulnerable to a timing attack. While the new attack could be patched against, the difficulty in developing a rock-solid fix meant that, with just one exception, only the RC4 stream cipher was left standing amongst all the encryption options for SSL/TLS. (The exception was AES-GCM, only usable in TLS 1.2. And TLS 1.2 was barely deployed.)
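Vaudenay's padding oracle idea, which Lucky 13 resurrected via timing, can be sketched in a few lines with a toy invertible block cipher (all keys and values below are illustrative, and the one-byte-pad check is a simplification of TLS's real padding rules): a server that merely reveals whether CBC-decrypted padding is valid lets an attacker recover plaintext bytes by tweaking the preceding ciphertext block.

```python
# Toy invertible per-byte block cipher (illustrative; the real attack
# targets AES-CBC in TLS, but the padding-oracle logic is the same).
BLOCK = 8
KEY = [7, 13, 42, 99, 150, 201, 33, 61]

def enc_block(b):
    return bytes((x + k) % 256 for x, k in zip(b, KEY))

def dec_block(b):
    return bytes((x - k) % 256 for x, k in zip(b, KEY))

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def padding_oracle(c1, c2):
    # A leaky server: reveals (in Lucky 13, via a timing difference)
    # whether the decrypted block ends in the valid one-byte pad 0x01.
    return xor(dec_block(c2), c1)[-1] == 0x01

p2 = b"cookie=Z"                         # secret plaintext block
c1 = bytes(range(BLOCK))                 # preceding ciphertext block / IV
c2 = enc_block(xor(c1, p2))

# Recover the last plaintext byte: flip the last byte of c1 so that the
# oracle says "valid pad" exactly when our guess g equals p2[-1].
recovered = None
for g in range(256):
    c1_mod = c1[:-1] + bytes([c1[-1] ^ g ^ 0x01])
    if padding_oracle(c1_mod, c2):
        recovered = g
print(chr(recovered))                    # prints 'Z'
```

Lucky 13's subtlety was that TLS had tried to close this oracle, but a small MAC-timing difference reopened it; hence the "difficulty in developing a rock-solid fix" noted above.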

RC4 was not a safe alternative

In view of the BEAST and Lucky 13 attacks, many commentators started to recommend the use of RC4. So that’s where the academic community turned next. A group of us showed that RC4 was not safe either: given a very large number of encryptions of a secure cookie, generated using the same mechanism as used in BEAST and CRIME, that cookie could be inferred from a statistical analysis of all the ciphertexts. Here, very large really means huge: 2^34 or so encryptions – taking roughly 2000 hours of online work by attack JavaScript running in the victim’s browser and creating many terabytes of network traffic.
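The statistical analysis rests on well-known single-byte biases in early RC4 keystream bytes – for instance, the Mantin–Shamir bias: the second keystream byte is 0 with probability about 1/128, double the 1/256 a uniform byte would give. A small self-contained sketch (sample sizes are illustrative) makes the bias visible empirically:

```python
import random

def rc4_keystream(key, n):
    # Standard RC4: key-scheduling algorithm (KSA), then the PRGA,
    # yielding n keystream bytes.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    i = j = 0
    out = []
    for _ in range(n):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return out

# Measure the Mantin-Shamir bias over many random 128-bit keys: the second
# keystream byte should be 0 roughly twice as often as the uniform 1/256.
rng = random.Random(1)
trials = 20000
zeros = sum(
    rc4_keystream(bytes(rng.randrange(256) for _ in range(16)), 2)[1] == 0
    for _ in range(trials))
print(zeros / trials)   # roughly 0.008, versus 1/256 = 0.0039 for uniform
```

With the same plaintext byte encrypted under many keys, each such bias leaks a little information about that byte; the attack aggregates biases across positions until the plaintext stands out.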

Fast forward to Spring 2015 (skipping over 2014, another excruciatingly bad year for SSL/TLS, with Heartbleed and POODLE as the lowlights). As a result of BEAST, Lucky 13 and the RC4 attacks: TLS 1.2 is now available in all major browsers; AES-GCM usage is on the rise; and the IETF has finally issued RFC 7465, prohibiting RC4 cipher suites.

So what’s not to like?

Even now, roughly 30% of all SSL/TLS traffic is still protected by RC4, according to the ICSI Certificate Notary project. And, from SSL Pulse, we learn that 74.5% of sites still allow negotiation of RC4. Even worse, a January 2015 survey of about 400,000 of the Alexa top 1 million sites shows that 8.75% of sites force the use of RC4 in TLS 1.1 and 1.2, where better options are available.

So, RC4 is still alive and kicking on the Internet. Why? We may speculate that the IETF prohibition has yet to kick in. Admins of large websites are loath to change their SSL/TLS configurations – no one wants to lose any site visitors. And the headline numbers for the existing attacks on RC4 are just not that threatening – 2^34 is a pretty big number, after all.

Tackling the RC4 problem – our big announcement

With this context in mind, we decided to see how much further we could push the earlier RC4 attacks, to bring down the time and data complexities to less gargantuan proportions. Our aim was to make the continued use of RC4 indefensible and provide another illustration – should one really be needed – that attacks can only get better. This is the content of the new work that we are announcing today. We threw everything we had at the problem:

– we used a more sophisticated statistical analysis;

– we focused on passwords as the target, enabling us to exploit information about password distributions to boost the attack;

– we used more powerful and detailed information about early biases in RC4 keystreams;

– we exploited the fact that sites typically allow many trials before locking out users;

– we found applications that automatically retransmit passwords over SSL/TLS, and do so sufficiently early in the RC4 keystream; and

– we built two proofs of concept, one for BasicAuth, the other for IMAP, showing the new attacks are on the verge of feasibility.
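In miniature, the statistical heart of such an attack is maximum-likelihood ranking: each ciphertext byte, XORed with a candidate plaintext byte, implies a keystream byte, and candidates are scored by how well those implied keystream bytes fit the known biased keystream distribution. The sketch below uses a synthetic single-position bias rather than real measured RC4 estimates; all names and numbers are illustrative:

```python
import math
import random
from collections import Counter

# Synthetic per-position keystream distribution: byte 0 twice as likely as
# the rest (real attacks use measured RC4 bias tables; this is illustrative).
bias = [1.0] * 256
bias[0] = 2.0
total = sum(bias)
bias = [b / total for b in bias]

rng = random.Random(7)
secret = 0x41                        # plaintext byte to recover ('A')
# Many encryptions of the same byte: ciphertext = secret XOR keystream byte.
keystream = rng.choices(range(256), weights=bias, k=50000)
counts = Counter(secret ^ z for z in keystream)

def log_likelihood(cand):
    # Score a candidate by how well the implied keystream bytes (c XOR cand)
    # fit the known biased distribution.
    return sum(n * math.log(bias[c ^ cand]) for c, n in counts.items())

best = max(range(256), key=log_likelihood)
print(hex(best))                     # recovers 0x41
```

Replacing the uniform prior over candidates with a realistic password distribution, and combining evidence across positions and retransmissions, is what buys the large reduction in the number of encryptions needed.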

The end result of all this? We can recover RC4-protected passwords with a 50% success rate using 2^26 encryptions, instead of the 2^34 needed previously to recover a secure cookie. That’s a reduction of two orders of magnitude over the previous attack. (In more detail, this is for BasicAuth, with a length-6 password, for the Chrome browser, and with the attacker being allowed 100 guesses before lock-out; for the complete details, see our technical paper.)

RC4 was already looking nervously towards the cliff-edge. Our work pushes RC4 a significant step closer, leaving it teetering on the brink of oblivion for SSL/TLS. After all, attacks can only get better…