Finding security issues in a website (or: How to get paid by Google)


I received a payment of over $2,500 from Google today. Now the conspiracy theorists among you can go off and rant in all forums that Adblock Plus is sponsored by Google and can no longer be trusted. For those of you who are still with me: the money came through Google’s Vulnerability Reward Program. Recently Google extended the scope of the program to web applications. I took up the challenge and sure enough, in a few hours I found four vulnerabilities in various corners of

Now to make this clear: Google has a very capable security team with great response times (yes, Yahoo!, I am looking at you). They have proper security review processes in place and generally the security of their web applications is pretty good. If you go after their popular applications like search or Gmail or YouTube you will pretty soon discover that you need to invest more time than the bug bounty justifies. However, if you look around on you will notice that it is home to many more web applications, most of which are rarely looked at. And guess what: finding vulnerabilities in these moldy corners is a lot easier. It probably won’t stay this way but right now Google seems to be overpaying for the vulnerabilities found.

And that is the first lesson of web security: you cannot invest in securing one application while ignoring others. If you know that one application is less secure, at least move it to a different domain where it cannot be used to compromise other applications (at least as far as XSS is concerned). Even that might still turn out badly if a security vulnerability in that application allows an attacker to compromise the server.

It so happened that each of the four vulnerabilities I found is different, but each is typical in some way. I’ll describe them here as examples of what can go wrong in web development. Who knows, maybe it will help somebody avoid making the same mistakes.

The classic XSS: search field

Imagine for a moment that you opened up a website and want to get an idea about whether it was built with security in mind. How do you check that? Right, you go for the search field and type something like test<>"' into it. Check the source code of the result page, did the website turn your input into test&lt;&gt;&quot;&#39; or did it leave the “dangerous” characters unchanged? Chances are pretty good that it will be the latter and then your next input might be something like <script>alert("I am evil")</script>. Yes, that webmaster is a noob and XSS’ing the site is trivial.
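As an illustration (Python here and in the following sketches), the standard library’s html.escape performs exactly this transformation; note that it encodes the single quote numerically as &#x27; rather than &#39;, which is equivalent in HTML:

```python
import html

# A properly built result page renders the probe string inertly:
probe = 'test<>"\''
escaped = html.escape(probe, quote=True)
print(escaped)  # test&lt;&gt;&quot;&#x27;
```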

Now Google has been in the search business long enough and I would have imagined that they learned securing (and testing) search fields as the very first thing. So when I entered a search string into YouTube Help I didn’t really expect an XSS vulnerability, I was rather interested in seeing how the application works. But — the “Results in YouTube Help for …” message quoted my input unescaped. I don’t know how this passed security reviews; the only explanation I have is that somehow this didn’t affect the English variant of the page, so maybe only the English variant was tested. Anyway, that issue was too obvious and someone managed to find it before me — this is the only report I didn’t get any money for.

The usual advice to deal with XSS is to escape dangerous characters (<, >, ", ') every time you insert user input into HTML code. However, doing this “manually” isn’t a good idea; it is much too easy to forget calling the escaping function somewhere. Which is why some template frameworks (I would even say: every good template framework) allow you to do the escaping automatically every time a variable is inserted into the template. Jinja2 for example allows you to turn on autoescaping by default and then turn it off only for variables which contain HTML code but have been verified as safe.
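A minimal sketch of that approach, assuming the Jinja2 API:

```python
from jinja2 import Environment

# With autoescape=True every substituted variable is escaped automatically.
env = Environment(autoescape=True)
template = env.from_string("Results in YouTube Help for {{ query }}")
print(template.render(query='test<>"\''))
# Results in YouTube Help for test&lt;&gt;&#34;&#39;
```

A variable verified as safe HTML can opt out via the |safe filter (or by wrapping the value in markupsafe.Markup) — but that should be the loudly visible exception, not the default.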

More complicated XSS: JavaScript attributes

The second vulnerability was also found in help search, though in a different area. When searching for privacy<>"' some links on the resulting page had an attribute like onclick="RecordResultClick('privacy&lt;&gt;&quot;&#39;')". At first glance this looks correct. All dangerous characters have been escaped, so you cannot use your input to break out of the attribute or the JavaScript string. That is, until you remember that in HTML the entities are evaluated first to get the attribute value, and that attribute value is the JavaScript code that will run. In the case above the attribute value is RecordResultClick('privacy<>"'') so clicking the link will result in a syntax error — the string has two single quotation marks at the end. And with input like '+alert(/evil/)+' you can actually run your JavaScript code when the link is clicked.
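The decoding order is easy to demonstrate; a small sketch using the standard library’s html module shows how the browser’s entity decoding turns the seemingly safe attribute back into broken JavaScript:

```python
import html

attr = "RecordResultClick('privacy&lt;&gt;&quot;&#39;')"
# The browser decodes entities first; what remains is the code run on click.
print(html.unescape(attr))
# RecordResultClick('privacy<>"'') -- a stray quote, hence a syntax error
```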

How does one protect against something like this? In this case you insert text into a JavaScript string that is inside an HTML attribute, so you first have to escape the characters that are dangerous inside a JavaScript string (', backslash and newline need to be replaced by \', \\ and \n respectively) and then additionally HTML-escape the result. Which is complicated enough that people often get it wrong. So I would generally advise against generating JavaScript attributes dynamically, just don’t do it. Here is an alternative that is much easier to secure: _query="privacy&lt;&gt;&quot;&#39;" onclick="RecordResultClick(this.getAttribute('_query'))".
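If you absolutely have to generate such an attribute, the two escaping steps must happen in the right order. A minimal Python sketch (the function names are mine, not from any framework):

```python
import html

def escape_js_string(value):
    # Step 1: neutralize characters that break out of a JS string literal.
    # The backslash must be replaced first, or later escapes get mangled.
    for char, repl in (("\\", "\\\\"), ("'", "\\'"), ('"', '\\"'),
                       ("\n", "\\n"), ("\r", "\\r")):
        value = value.replace(char, repl)
    return value

def js_in_attribute(value):
    # Step 2: escape the result again as an HTML attribute value.
    return html.escape(escape_js_string(value), quote=True)

print('onclick="RecordResultClick(\'%s\')"' % js_in_attribute('privacy<>"\''))
```

Getting either step wrong, or swapping their order, reopens the hole — which is exactly why skipping dynamic JavaScript attributes altogether is the safer design.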

HTTP Response Splitting

HTTP Response Splitting is a very common and frequently underestimated issue. Max Kanat-Alexander blogged about it recently so I don’t need to explain it all over again. However, he failed to mention that this issue is most commonly found in scripts doing HTTP redirects. So when I noticed that a survey script on was redirecting back to the survey page the first question was: where does it get the redirect target from?

Turns out there were two POST parameters determining the redirect target: one was the actual URL, the other an additional parameter that would be added to it. There was some code validating the URL; anything that used unusual characters or non-Google URLs would simply be ignored, so far so good (it did think that is a Google URL however). But that additional parameter would not be escaped, just appended to the URL. So one could use "foo\nSet-Cookie: MyCookie=Hello, Google\n" as the parameter value, which would produce the output:

HTTP/1.1 302 Found
Set-Cookie: MyCookie=Hello, Google

Yes, that would set a cookie on the Google domain. And no, I didn’t bother to check whether Google is vulnerable to session fixation. It is more interesting to turn this into an XSS vulnerability, but this wasn’t as trivial as in Max’s example — this is a redirect after all, and the browser would normally not display the content following the headers. It is possible to exploit persistent connections by specifying a Content-Length header that is too small; this tricks the browser into believing that the following content is the response to the next request on the same connection. But this option was prevented by some header validation mechanism on the server.

The other option is to make the redirect fail; the browser will then display the HTML content following the headers. This is usually done by inserting a second Location header with an invalid value like javascript:. The browser will only consider the last header found and won’t redirect. But this was prevented by the same header validation mechanism, which “helpfully” concatenated the values of all Location headers. And then I noticed one more feature of that header validation — if the header contained invalid characters (like a tab character) it would remove the entire header. Which was very convenient in this case because removing the Location header was exactly what I wanted. So the parameter value "foo\t\n\n<script>alert("Got you!")</script>" produced the output:

HTTP/1.1 302 Found

<script>alert("Got you!")</script>

And that resulted in JavaScript executed in almost all browsers (with Internet Explorer being the only exception, it displayed a generic error page instead).

So what can you do to prevent this from happening in your own web applications? It is actually quite easy: always send headers through a framework that won’t let you send more than one header at once (typically by checking for CR and LF characters, like PHP’s header() function starting with PHP 4.4.2/5.1.2). If your framework doesn’t do it, write your own helper function — it is easy. And I think that the best policy for potentially dangerous headers is not removing them but failing. Yes, really. Throw an exception and don’t show anything but a 500 Internal Server Error response; this way you cannot do anything wrong.
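Such a fail-hard helper fits in a few lines; a Python sketch (the HeaderError name is made up for illustration):

```python
class HeaderError(Exception):
    """Raised instead of silently 'repairing' a suspicious header."""

def add_header(headers, name, value):
    # CR or LF in a header name or value means someone is trying to smuggle
    # in an extra header or response body -- refuse to send anything at all.
    if any(c in name or c in value for c in ("\r", "\n")):
        raise HeaderError("CR/LF in HTTP header: %r: %r" % (name, value))
    headers.append((name, value))
```

The caller catches HeaderError at the top level and turns it into a bare 500 response, so the attacker learns nothing useful.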

The hidden XSS: error response

People who do security reviews often concentrate on the main functionality of the application; things like error handling are typically less tested. Which is why somebody looking for security vulnerabilities will always try to make the application show an error, to see whether the message is somehow exploitable. For example, I noticed that the site optimizer tooltip script will simply show a link if the answer parameter is missing. What struck me about this link was the empty hl parameter. Could it be taking over the hl parameter from the query string? Turns out it did — and it forgot to escape dangerous characters while doing so. Yep, another XSS vulnerability.

To state the obvious lesson here: unexpected conditions do happen, especially if your web application is being attacked. Very often a website is vulnerable simply because it quotes the requested URL in error messages. But you already know that you should always escape user input when it is inserted into HTML, and it is better to escape too much than too little. On that note I’ll conclude my already too long blog post.



  1. Bill Gianopoulos

    In the HTTP Response Splitting case, you mentioned that your trick did not work in IE. This might lead people to believe that IE is somehow less vulnerable to this type of attack.

    I don’t believe that is the case. As far as I know, IE attempts to determine whether the server returned a site-customized error page or the generic error message built into the server strictly on the basis of the length of the returned content, and only displays the IE error page for shorter server responses.

    Therefore, I think doing your same trick, but generating a much larger response message would have produced the same results under IE as you obtained on other browsers.

    Reply from Wladimir Palant:

    What you are saying is true for regular error pages like 404. However, in case of a bad redirect IE won’t show the content no matter what – I tried it with a large block of data just to be sure. Which doesn’t really mean that IE is immune to HTTP response splitting – it is only immune to this particular exploitation technique but there are many others.

  2. Jannick

    Excellent post. I wondered if you could recommend any comprehensive sources for learning about securing web applications. The internet is full of little nuggets, but it seems hard to find books or other sources that are both up-to-date and cover “everything” one needs to consider.

    Reply from Wladimir Palant:

    I had to piece these little nuggets together myself, with (and its forum) being the most useful source of information. That site lost much of its glory since then but Robert Hansen, the guy running it, published a book on attacks against web applications:

  3. >>>

    > < <<>..: “

  4. Colby Russell

    @>>>: Funny.

  5. Philip Tellis

    I’m curious to know what response times you see from Google v/s those from Yahoo! (based on your comment in paragraph 2).

    Reply from Wladimir Palant:

    Time to fix for Google: typically a few days, with instant confirmation. As to Yahoo, I didn’t bother contacting them recently so my information might be outdated. The first hurdle is finding a contact address for security issues. Then your mail is forwarded a bunch of times. With some luck somebody will eventually take a look at the issue.

  6. Damon Haidary

    Am I doing it right? ;]

    Reply from Wladimir Palant:

    Thank you, I will forward that issue to the Anwiki developer. And I should probably do a thorough security review for it – that’s the second XSS issue (I found the first one myself).

  7. memo

    I guess the getting paid part is quite interesting.

  8. NJ SEO Guy

    All this from Google, WOW! Mod_security perhaps?

  9. Damon Haidary

    Just an update on the XSS I posted.

    I see that the vulnerability is not your fault but rather anwiki. They just patched an XSS hole but the one I posted is still live. Also, I’ve found quite a few other XSS and CSRF vulnerabilities in their app. It looks like they have no idea what they’re doing as far as security is concerned. Maybe think about switching CMS’s? No simple feat I know but worth considering.

    Reply from Wladimir Palant:

    No, Anwiki has a security concept – it just fails to cover some areas of the application, I’ll have to look into that.

    If you want to test security of my web apps you are welcome to go through It is always good to have somebody else double-check the code.

  10. Philip Tellis

    You can use to contact Yahoo! security. I’m pretty certain we look at the issue as soon as we receive it ;) Yes, it may be forwarded to the team responsible for fixing the issue.

  11. lovelywcm

    I also found a vulnerability in our WIP project, thanks for nice writing!
