SSL/TLS is a key part of the security infrastructure for the Internet. It’s embedded in every web browser and encountered by most web users as the mysterious browser lock symbol. SSL/TLS is formed from a suite of protocols, including (most importantly) the Handshake Protocol and the Record Protocol. The former uses primarily asymmetric cryptographic methods to perform authentication and to set up secure session keys between a client and a server; the latter then uses efficient symmetric cryptographic methods to encrypt and integrity-protect all the data transferred between the client and server in the ensuing session.
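This division of labour is visible in any TLS library: the Handshake Protocol negotiates one of a set of cipher suites, each naming the key exchange, symmetric cipher and MAC/AEAD algorithm that the Record Protocol will then use. As a minimal sketch, Python's standard `ssl` module can list the suites a default client would offer (the exact list depends on the underlying OpenSSL build):

```python
import ssl

# Client context with the library's default, fairly restrictive settings.
ctx = ssl.create_default_context()

# get_ciphers() lists the suites this client would offer in its ClientHello;
# each name encodes the key exchange, symmetric cipher and MAC/AEAD choice.
suites = [c["name"] for c in ctx.get_ciphers()]
print(len(suites), "suites offered, e.g.", suites[0])
```

On any recent OpenSSL build, none of the default suites use RC4 – a point this article will return to.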
However, SSL/TLS includes a myriad of options that can, at first, seem impenetrable to newcomers: there are different versions of SSL and TLS, all with subtle differences; there are over 300 different cipher suites defining exactly which cryptography gets used in the protocol; there are options for renegotiating the cipher suite and for quickly resuming existing sessions; and there are dozens of extensions to the basic protocol suite (including, for example, the now-notorious Heartbeat protocol that was implicated in the Heartbleed vulnerability). While the SSL/TLS protocol is defined in a sequence of IETF specifications, there is also a rich collection of different implementations, some buggier than others. And then there is the whole CA and PKI eco-system that exists to enable SSL/TLS authentication to be bootstrapped.
BEAST opens the floodgates for SSL/TLS attacks
For around a decade, up until 2011, research on SSL/TLS was sporadic – from time to time, there would be a research paper providing a formal analysis, or a bug found in an implementation. In 2011, things started to change. Duong and Rizzo announced the BEAST attack. This took a known theoretical weakness in the way in which SSL 3.0 and TLS 1.0 handled initialisation vectors for CBC-mode encryption and turned it into a practically demonstrable attack targeting HTTP secure cookies. Cookies are a basic security mechanism used in managing secure sessions between web browsers and servers; that an attack could recover them was big news. Fortunately, TLS 1.1 and 1.2 already existed and had countermeasures against BEAST. Unfortunately, no mainstream browser and fewer than 5% of servers supported TLS 1.1 or 1.2 at the point when BEAST was unveiled.
The hits keep coming
BEAST opened the floodgates for SSL/TLS attacks. Duong and Rizzo themselves followed up with CRIME in 2012, an attack against the optional use of data compression in SSL/TLS. The academic cryptographers (like me) had been paying attention, learning from BEAST and CRIME, and honing their own weapons. In 2013, SSL/TLS had its annus horribilis: this was the year of Lucky 13 and the RC4 attacks.
Lucky 13 showed that an old padding oracle attack due to Vaudenay had not been properly fixed in subsequent patches to the protocol specifications, leaving all CBC-mode cipher suites still vulnerable to a timing attack. While the new attack could be patched against, the difficulty in developing a rock-solid fix meant that, with just one exception, only the RC4 stream cipher was left standing amongst all the encryption options for SSL/TLS. (The exception was AES-GCM, only usable in TLS 1.2. And TLS 1.2 was barely deployed.)
RC4 was not a safe alternative
Fast forward to Spring 2015 (skipping over 2014, another excruciatingly bad year for SSL/TLS, with Heartbleed and POODLE as the lowlights). As a result of BEAST, Lucky 13 and the RC4 attacks: TLS 1.2 is now available in all major browsers; AES-GCM usage is on the rise; and the IETF has finally issued RFC 7465, prohibiting RC4 cipher suites.
So what’s not to like?
Even now, roughly 30% of all SSL/TLS traffic is still protected by RC4, according to the ICSI Certificate Notary project. And, from SSL Pulse, we learn that 74.5% of sites still allow negotiation of RC4. Even worse, a January 2015 survey of about 400,000 of the Alexa top 1 million sites shows that 8.75% of sites force the use of RC4 in TLS 1.1 and 1.2, where better options are available.
So, RC4 is still alive and kicking on the Internet. Why? We may speculate that the IETF prohibition has yet to kick in. Admins of large websites are loath to change their SSL/TLS configurations – no one wants to lose any site visitors. And the headline numbers for the existing attacks on RC4 are just not that threatening – 2^34 is a pretty big number, after all.
Tackling the RC4 problem – our big announcement
With this context in mind, we decided to see how much further we could push the earlier RC4 attacks, to bring down the time and data complexities to less gargantuan proportions. Our aim was to make the continued use of RC4 indefensible and provide another illustration – should one really be needed – that attacks can only get better. This is the content of the new work that we are announcing today. We threw everything we had at the problem:
– we used a more sophisticated statistical analysis;
– we focused on passwords as the target, enabling us to exploit information about password distributions to boost the attack;
– we used more powerful and detailed information about early biases in RC4 keystreams;
– we exploited the fact that sites typically allow many trials before locking out users;
– we found applications that automatically retransmit passwords over SSL/TLS, and do so sufficiently early in the RC4 keystream; and
– we built two proofs of concept, one for BasicAuth, the other for IMAP, showing the new attacks are on the verge of feasibility.
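The "early biases" exploited above are easy to observe for yourself. The best-known single-byte bias, due to Mantin and Shamir, is that the second RC4 keystream byte equals 0 with probability about 2/256 rather than the 1/256 a perfect stream cipher would give. A minimal sketch that estimates this empirically over random 16-byte keys:

```python
import os

def rc4_keystream(key, n):
    # Standard RC4: key scheduling (KSA) followed by n PRGA output bytes.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    i = j = 0
    out = []
    for _ in range(n):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return out

# Count how often keystream byte Z_2 is zero across many random keys.
TRIALS = 20000
zeros = sum(rc4_keystream(os.urandom(16), 2)[1] == 0 for _ in range(TRIALS))
rate = zeros / TRIALS
# Uniform would give ~1/256 ≈ 0.0039; RC4 gives ~2/256 ≈ 0.0078.
```

Our attacks use much more detailed information than this single bias – full measured distributions of the early keystream bytes – but this is the kind of statistical leak they are built on.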
The end result of all this? We can recover RC4-protected passwords with a 50% success rate using 2^26 encryptions, instead of the 2^34 needed previously to recover a secure cookie. That's a reduction of more than two orders of magnitude on the previous attack. (In more detail, this is for BasicAuth, with a length 6 password, for the Chrome browser, and with the attacker being allowed 100 guesses before lock-out; for the complete details, see our technical paper.)
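To see in miniature why repeated encryptions of the same secret leak it, consider a toy simulation: when the same plaintext byte is encrypted many times at a keystream position biased towards 0, the plaintext value becomes the most frequent ciphertext value. The bias below is a simulated stand-in for the measured RC4 distributions, and the real attack combines many positions with a prior over likely passwords, but the core statistical idea is this:

```python
import random
from collections import Counter

random.seed(7)  # deterministic demo

def biased_keystream_byte():
    # Simulated bias: return 0 with extra probability ~2/256 on top of a
    # uniform byte, mimicking the Mantin-Shamir bias at position 2.
    if random.random() < 2 / 256:
        return 0
    return random.randrange(256)

P = ord("s")  # the fixed plaintext byte the attacker wants to recover

# The attacker observes the same byte encrypted under many fresh keys:
# each ciphertext byte is P XOR Z for an independent keystream byte Z.
samples = Counter(P ^ biased_keystream_byte() for _ in range(1 << 16))

# Because the keystream leans towards 0, the most frequent ciphertext
# value is the plaintext byte itself.
recovered, _ = samples.most_common(1)[0]
```

With 2^16 samples the correct value stands well clear of the noise; shrinking the number of ciphertexts needed for this kind of estimate to succeed – from 2^34 down to 2^26 – is exactly what our new techniques achieve.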
RC4 was already looking nervously towards the cliff-edge. Our work pushes RC4 a significant step closer, leaving it teetering on the brink of oblivion for SSL/TLS. After all, attacks can only get better…