Breaking https:// with Bar Mitzvah


Shamir (the S of RSA) once predicted that the use of stream ciphers like RC4 would inevitably decline over time. Stream ciphers are faster and simpler to implement than block ciphers, but they are generally considered less secure. With the increasing processing power of computers and the growing understanding and adoption of block ciphers, stream ciphers are in rapid decline, and the known weaknesses of ciphers like RC4 have left administrators rushing to replace them with block ciphers.

1. Implications of Bar Mitzvah

A recently discovered attack on the RC4 implementation of TLS, dubbed Bar Mitzvah, may be the final nail in the coffin for RC4, and perhaps for TLS versions 1.0 and 1.1 as well. Whilst the RC4 weaknesses exploited in this attack were first published over a decade ago, a practical attack wasn't feasible until security researcher Itsik Mantin from Imperva presented Bar Mitzvah.

2. RC4 basics

The RC4 algorithm is a stream cipher that encrypts each plaintext byte with a pseudorandom stream of bits known as the keystream. To generate the keystream, a secret key is combined with an initial array of values to form a permuted state, which is then permuted further to produce the bytes that are XOR'ed with the plaintext to create the ciphertext. To increase the randomness of the initial permuted state, the key and the initial values are combined one byte at a time, with each result feeding into the combination of the next value, and so on. As the length of the internal array is fixed (256 bytes) and the key length is variable (e.g. 16 bytes for a 128-bit key), the key is repeated as needed to cover every byte of the array.

The RC4 algorithm (as described on Wikipedia) is summarised below.
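A minimal Python sketch of the two stages described above, the key-scheduling algorithm (KSA) and the keystream generation (PRGA), following that standard published description:

```python
def rc4_keystream(key: bytes, length: int) -> bytes:
    # KSA: mix the (repeated) key into the 256-byte state array
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256  # key repeats if shorter than 256
        S[i], S[j] = S[j], S[i]

    # PRGA: keep permuting the state and emit one keystream byte per step
    out, i, j = bytearray(), 0, 0
    for _ in range(length):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

def rc4_encrypt(key: bytes, plaintext: bytes) -> bytes:
    ks = rc4_keystream(key, len(plaintext))
    return bytes(p ^ k for p, k in zip(plaintext, ks))  # XOR keystream with plaintext
```

Because encryption is just an XOR with the keystream, applying rc4_encrypt a second time with the same key decrypts the data.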

3. RC4 weaknesses

The RC4 algorithm has a few known weaknesses. Due to the way RC4 is initialised, there are single-byte biases in the first few bytes of the keystream, where certain values occur with a probability that is statistically higher than a uniform distribution would allow. Another problem is the invariance weakness, a pattern that appears in the keystream when weak RC4 keys are used. This makes it possible to detect specific patterns in the ciphertext when weak keys are used (e.g. for weak 16-byte keys, the least significant bits of the early keystream bytes show statistical biases, with values such as even numbers or multiples of four occurring with higher probability).
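As a rough illustration of the single-byte biases (reusing the rc4_keystream function from the sketch above), the following experiment shows the best-known example: the second keystream byte comes out as zero roughly twice as often as the uniform 1/256 would suggest:

```python
import os
from collections import Counter

TRIALS = 100_000
counts = Counter()
for _ in range(TRIALS):
    ks = rc4_keystream(os.urandom(16), 2)  # random 128-bit key, first two keystream bytes
    counts[ks[1]] += 1

print("observed P[second byte == 0]:", counts[0] / TRIALS)  # roughly 1/128
print("uniform expectation:         ", 1 / 256)
```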

4. RC4 attacks

RC4 weaknesses have been used to break encryption before. WEP (Wired Equivalent Privacy) used small, fixed-length (40-bit) keys that were shared across internal networks and never changed. It also concatenated a small (24-bit) Initialisation Vector (IV) in front of the key to form the per-packet RC4 key, as sketched below. This made WEP a prime candidate for brute-force attacks, and it is one reason why newer wireless standards enforce the use of the AES block cipher.
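As a rough sketch of why WEP was so fragile (the key value here is purely illustrative, not an actual WEP implementation), the per-packet RC4 key is simply the cleartext 24-bit IV stuck in front of the fixed 40-bit shared key:

```python
import os

SHARED_WEP_KEY = bytes.fromhex("0badc0ffee")  # fixed 40-bit key, never rotated

def per_packet_rc4_key(iv: bytes) -> bytes:
    assert len(iv) == 3          # 24-bit IV, transmitted in the clear with each packet
    return iv + SHARED_WEP_KEY   # IV || key becomes the RC4 key for that packet

print(per_packet_rc4_key(os.urandom(3)).hex())
```

With only 2^24 possible IVs and a shared key that never changes, IVs repeat quickly and the key itself is small enough to attack by brute force.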

5. Bar Mitzvah

The recent Bar Mitzvah attack exploits the fact that in TLS, the first encrypted bytes of a connection include predictable information related to the SSL handshake. An attacker can therefore XOR that known plaintext against the ciphertext to recover the start of the keystream and compare it with the patterns produced by known weak keys, checking whether a weak key was used. This requires observing a large number of encrypted sessions, which becomes feasible if, for example, JavaScript malware running in the victim's browser repeatedly sends requests to the server.
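A very rough sketch of that idea is below; the weak-key test here is only a placeholder, not Mantin's actual statistical test, and the function names are illustrative:

```python
def recover_keystream_prefix(ciphertext: bytes, known_plaintext: bytes) -> bytes:
    # Known plaintext XOR ciphertext gives back the keystream bytes that encrypted it
    return bytes(c ^ p for c, p in zip(ciphertext, known_plaintext))

def looks_like_weak_key(keystream_prefix: bytes) -> bool:
    # Placeholder heuristic: invariance-style patterns show up in the least
    # significant bits of the early keystream bytes when a weak key was used
    lsbs = [b & 1 for b in keystream_prefix]
    return len(set(lsbs)) == 1
```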

For more information

https://www.youtube.com/watch?v=KM-xZYZXElk
https://www.blackhat.com/docs/asia-15/materials/asia-15-Mantin-Bar-Mitzvah-Attack-Breaking-SSL-With-13-Year-Old-RC4-Weakness-wp.pdf


Breaking https:// with POODLE. How does it work?


This is an introduction to some basic concepts around how POODLE (Padding Oracle On Downgraded Legacy Encryption) works. There are plenty of other blogs and videos that go into greater detail, but the basics can provide a framework for navigating through that detail.

Put simply, POODLE showed that it is possible to decrypt parts of encrypted SSL sessions via a man-in-the-middle. A victim can be vulnerable when using public wifi or if they have some nasty malware on their computer.

1. Basics of Cipher Block Chaining 

During the SSL handshake, symmetric keys are exchanged and used to encrypt the session. Sessions encrypted with the Cipher Block Chaining (CBC) mode are susceptible to what is known as a padding oracle attack. CBC is a mode of operation for symmetric block ciphers. In CBC, a message is broken into blocks of equal size (e.g. 8 or 16 bytes) and each plaintext block is encrypted sequentially, producing one ciphertext block per plaintext block. Before each block is encrypted, it is XOR'ed with the previous ciphertext block (the first block is XOR'ed with a block of random bytes known as the IV). To decrypt, the operation is reversed: each ciphertext block is decrypted and then XOR'ed with the previous ciphertext block. The result is that the ciphertext looks different every time, even for identical messages, and on its own it is almost impossible to break. However…
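A minimal sketch of CBC built on top of a raw (ECB) block cipher, assuming the third-party pycryptodome package for the AES primitive, may make the chaining clearer:

```python
import os
from Crypto.Cipher import AES

BLOCK = 16  # AES block size in bytes

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def cbc_encrypt(key: bytes, iv: bytes, plaintext: bytes) -> bytes:
    # plaintext is assumed to already be padded to a multiple of BLOCK
    ecb = AES.new(key, AES.MODE_ECB)
    prev, out = iv, b""
    for i in range(0, len(plaintext), BLOCK):
        block = ecb.encrypt(xor(plaintext[i:i + BLOCK], prev))  # XOR with previous ciphertext, then encrypt
        out += block
        prev = block
    return out

def cbc_decrypt(key: bytes, iv: bytes, ciphertext: bytes) -> bytes:
    ecb = AES.new(key, AES.MODE_ECB)
    prev, out = iv, b""
    for i in range(0, len(ciphertext), BLOCK):
        block = ciphertext[i:i + BLOCK]
        out += xor(ecb.decrypt(block), prev)  # decrypt, then XOR with previous ciphertext
        prev = block
    return out

key, iv = os.urandom(16), os.urandom(16)
msg = b"exactly thirty-two bytes long!!!"  # two full blocks
assert cbc_decrypt(key, iv, cbc_encrypt(key, iv, msg)) == msg
```

The XOR step is what ties each ciphertext block to the one before it, which is exactly the relationship the attack below abuses.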

2. Padding in CBC

Most messages do not fit neatly into blocks of exactly x bytes; they can be of any length. As block ciphers require blocks of exactly x bytes, extra padding (a string of arbitrary bytes) is added to fill up any unused bytes, and the length of that padding is stored in the last byte of the final block.
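A small sketch of what SSLv3-style padding might look like in code (an assumption based on the description above, not a full record-layer implementation):

```python
import os

BLOCK = 16

def ssl3_pad(data: bytes) -> bytes:
    # Fill with arbitrary bytes and store the pad length (excluding the
    # length byte itself) in the final byte of the last block
    pad_len = (-(len(data) + 1)) % BLOCK
    return data + os.urandom(pad_len) + bytes([pad_len])

def ssl3_unpad(data: bytes) -> bytes:
    pad_len = data[-1]
    if pad_len >= BLOCK:
        raise ValueError("bad padding")
    # Only the length byte is checked; the padding bytes themselves are
    # arbitrary and unverified, which is part of what POODLE exploits
    return data[:-(pad_len + 1)]
```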

3. How padding checks can weaken security

Once the message is decrypted, the server performs two checks. The first validates the padding: if the padding length stored in the last byte doesn't match the actual padding, the record is rejected with an error. The second check is a MAC over the data, which verifies that it hasn't been altered in transit.

This poses a problem. Because the padding is checked before the MAC, a man-in-the-middle can intercept the message and keep guessing at the padding length. In at most 256 guesses, he or she will be able to decrypt the last byte.

4. Chosen ciphertext attack

If you recall, during decryption each ciphertext block is decrypted and then XOR'ed with the previous ciphertext block. An attacker can therefore keep substituting the last byte of the previous ciphertext block and resubmitting until the padding check passes. Once that happens, the attacker knows which value produced a valid padding length and, from the XOR relationship, can recover the last plaintext byte. The same method can then be repeated to decrypt further bytes.
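The sketch below shows the textbook version of this last-byte recovery. It assumes PKCS#7-style padding (a one-byte pad has the value 0x01) rather than SSLv3's exact scheme, and oracle() is a hypothetical callback that returns True whenever the server accepts the padding:

```python
def recover_last_byte(prev_block: bytes, target_block: bytes, oracle) -> int:
    for guess in range(256):
        # Substitute the last byte of the preceding ciphertext block and resubmit
        tampered = prev_block[:-1] + bytes([guess])
        if oracle(tampered + target_block):
            # The server saw a valid 1-byte pad, so D(target)[-1] XOR guess == 0x01;
            # the original plaintext byte is therefore guess XOR 0x01 XOR prev[-1]
            return guess ^ 0x01 ^ prev_block[-1]
    raise RuntimeError("padding never accepted")
```

In practice an attacker would confirm a hit with a second, slightly different query to rule out an accidental longer pad, then shift one byte to the left and repeat.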

5. What next? 

To stop padding oracle attacks, the server can apply stricter validation to the padding and ensure that error messages don't reveal whether a failed session was caused by bad padding or a bad MAC. Unfortunately, SSLv3 implementations don't do this, which is why users should disable SSLv3. TLS does, but on certain TLS implementations (e.g. TLSv1.0 and TLSv1.1) a padding oracle may still be possible if there is a significant timing difference between sessions failing due to bad padding and those failing due to a bad MAC. This can occur in certain server set-ups (e.g. when load balancing is used).
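As an illustration of the mitigation, here is how the minimum protocol version can be pinned using Python's standard ssl module (Python 3.7+); the certificate paths are hypothetical:

```python
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2     # SSLv3, TLS 1.0 and 1.1 are refused
ctx.load_cert_chain("server.crt", "server.key")  # hypothetical certificate paths
```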

Want more information?

https://blog.skullsecurity.org/2013/padding-oracle-attacks-in-depth

http://www.limited-entropy.com/padding-oracle-attacks/

5 Lessons from the SHA-1 deprecation

When Microsoft announced that they would no longer accept SHA-1 certificates from 1 January 2017, and Google said that they would start showing warnings as early as 2015, a cold sweat ran down the backs of IT operators across the world. This was a ticking time bomb, one that would require many wires to be carefully cut before services dropped dead come 2017. For those working in environments infested with hundreds of SHA-1 instances (possibly hidden in legacy servers, clients and applications), this was going to be one messy clean-up exercise.

Even as you are busily working away at all your SHA-1 dramas, know that you are not alone! We can get through this together. In fact, there is plenty of support out there. So let us grab a drink (non-alcoholic if you are on call) and recap what we have learnt over the past couple of months.

1. Cuz Microsoft hurts too…

The fact that the active deprecation of SHA-1 is Microsoft-led, and that even the Certification Authorities were ill-prepared for the change, brought a lot of questions to mind. Was this a joke just to show us how powerful they are? Will Microsoft take it back in time? Unfortunately, this isn't a joke and Microsoft are deadly serious.

What may have contributed to this is the Flame virus, discovered by Russian antivirus firm Kaspersky in 2012. Attackers performed a hash collision against a weak MD5 certificate to create a forged certificate. In doing so, they were able to impersonate Microsoft and distribute malware through the Windows Update service. This was used for spying and espionage on infected, targeted systems in Iran, Lebanon, Syria, Sudan and the Israeli-occupied territories for an unknown period (potentially two to five years). Although this was a rare and highly sophisticated attack requiring massive amounts of computing power, and one that is difficult for the standard attacker to replicate, it is fair to say that it is something Microsoft doesn't want repeated.

2. Microsoft and Google ARE almighty.

When Google announced that Chrome would begin showing warnings as early as 2015, ranging from "Secure, but with minor errors" to a flat-out "Insecure", many of us wanted to boycott Google Chrome and tell our users to use another browser. However, after additional thought (about 30 seconds), this was replaced by a sigh of resignation. After all, Chrome owns a huge slice of the pie when it comes to browser market share. In Australia it holds the majority of the market, and Google ARE trying to do the right thing.

Their view is that, as long as SHA-1 continues to be supported, little will be done to deprecate it. Even though the CA/Browser Forum's Baseline Requirements recommended an upgrade to SHA-2 back in 2011, CAs were reluctant to stop issuing SHA-1 certificates due to market pressure. The transition from MD5 to SHA-1 took ages and caused many headaches for Google when they finally removed support for the algorithm. Therefore, the only way to give this the push it requires is a browser-led initiative.

3. When it rains, it storms

As if SHA-1 deprecation wasn’t enough for IT operators to deal with, some versions of OpenSSL were bleeding with Heartbleed while POODLE killed SSLv3. Then after some reprieve, FREAK came along to remind us that the rain never really stops. It was like being in the middle of a heart transplant, when fluid starts leaking into the lungs and then the liver fails. I will explore some of these attacks in more detail in my next post.

4. Migrating to SHA-2 is painful

In complex environments, it may be difficult to discover all the SHA-1 certificates out there, especially if certificates have been issued by multiple external and internal Certification Authorities. It can also take a long time to identify the support teams and businesses that own the domains. There are certificate discovery tools that can be purchased from your CA (e.g. Symantec and DigiCert both offer them), which scan the network for SSL certificates issued by any CA. A good discovery tool should be fast to implement and easy to set up, and may also be able to detect misconfigured certificates or other vulnerabilities (e.g. BEAST).
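For a quick do-it-yourself check of a single host, something along these lines works (assuming a recent version of the third-party cryptography package); a commercial discovery tool essentially does the same thing at scale:

```python
import ssl
from cryptography import x509

def signature_hash(host: str, port: int = 443) -> str:
    # Fetch the certificate the server presents and report its signature hash
    pem = ssl.get_server_certificate((host, port))
    cert = x509.load_pem_x509_certificate(pem.encode())
    return cert.signature_hash_algorithm.name  # e.g. "sha1" or "sha256"

print(signature_hash("example.com"))
```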

While most modern and commonly used clients, devices and servers support SHA-2, there are legacy clients, devices, applications and servers that do not, and they may require additional patching before the migration can occur. For example, Windows Server 2003 requires further patching, and Windows XP running anything less than Service Pack 3 requires an upgrade (even though XP should no longer be used at all). Some applications running on supported systems may not be able to validate SHA-2 certificates (e.g. Outlook 2003). The true impact will not be known until you begin testing.

5. Getting support and prioritization from the business is hard

Let's be honest here: nobody really cares about the insecurities of SHA-1, not really, especially since the attacks are still practically infeasible and would take huge amounts of computing power to achieve. The CA/Browser community didn't care enough to do anything about it until Microsoft and Google posed their challenge, so why would businesses care? In a large organization, where change is slow and budgets are sliced thin, coordinating an effort as big as this one on a short timeline is, suffice to say, difficult. Success requires IT support, customer support, business application owners, managers and security to collaborate effectively. To get all these teams on board and motivated to take action, there needs to be strong buy-in; it is all in or nothing. Therefore, Microsoft scheduling a review sometime in July this year to "assess" whether to go ahead or not leaves us in limbo and makes it hard to gather appropriate prioritization and support. There can be no "yes this may happen but maybe it won't" scenarios. Teams are busy enough. What helps is a clear deadline; what doesn't help is ambiguity. Meanwhile, time is ticking…