Observations on managed bug bounty programs

I’ve been increasingly using Bugcrowd lately, a platform that manages security bug bounty programs for its clients and allows security researchers to contribute to a number of such programs easily. Previously, I’ve mostly reported security issues in Mozilla and Google products. Both companies manage their bug bounty programs themselves and are very invested in security, so Bugcrowd came as a considerable culture shock.

First of all, it appears that many companies consider bug bounty programs an alternative to building solid in-house security expertise. They will patch whatever bugs are reported, but they don’t seem to draw any conclusions about the deficiencies in their security architecture. Eventually, even the most insecure application will have enough patches applied that finding new issues takes too much effort for the monetary rewards offered. At that point, almost no new reports will be coming in, and for the management it’s “mission accomplished” I guess. Sadly, with security being an afterthought, the product remains inherently insecure; even the smallest change could potentially open new security holes.

Actually, Bugcrowd makes it very easy to take that route. The majority of the bug bounty programs are private, meaning that not only are security researchers forbidden to discuss the issues they find, they aren’t even allowed to discuss the existence of the bug bounty program. So the vendors don’t have to fear publicity when their product (which is sometimes supposed to be a security product) turns out to be full of critical bugs.

Communication with security researchers is also remarkable. Recently, I reported a major vulnerability that allowed websites to inject code into a browser extension. I noted all the things that this code could potentially do, such as reading cookies from any website. But my proof of concept was limited to retrieving the user’s data and showing their user name. Here is a reply that I received:

I was able to verify the issue. Can you create another poc, that reads some sensitive information(like passwords for example), so we can make a case for a higher priority? For now, this seems equivalent to a reflected XSS/ P3 to me.

I’m used to providing a minimal proof of concept, and this isn’t the first time that I was asked to demonstrate the issue “properly” on Bugcrowd. This comment finally made me realize the problem: with Mozilla and Google I used to communicate with developers. On Bugcrowd however, triaging is often done by people who cannot analyze a proof of concept and merely try to categorize an issue in terms of the vulnerability rating taxonomy.

With out-of-the-box vulnerabilities, and particularly vulnerabilities involving browser extensions, that categorization becomes a very non-trivial task. It doesn’t help that many companies have apparently outsourced the first contact to Bugcrowd’s “experts.” These might indeed have great knowledge of security issues, but occasionally they will have even less product knowledge than I do. As a consequence, the important information in your report isn’t the line of code causing the issue but rather the proof of concept, which shows exactly what harm one could do with it. There is a clear monetary incentive for that: you won’t be paid more if you can pinpoint the issue in the application or provide good recommendations, yet you will definitely get paid less if you underestimate or fail to communicate the scope of the issue you discovered.

The consequence for me (and others before me as well, it seems) is that participating on Bugcrowd requires an attitude change. Normally, I report security issues because I want a product to be secure. So I will report all issues I notice, no matter how minor, and I will occasionally provide recommendations on addressing an entire class of problems. With Bugcrowd, this approach doesn’t work. For example, the few clickjacking issues I reported didn’t go anywhere because the proof of concept wasn’t reliable enough. I could probably produce a reliable proof of concept, but for clickjacking that is a lot of work. Yet clickjacking is a P4 issue that will typically be rewarded with $100. Investing so much time in minor issues just doesn’t pay off, and neither does writing recommendations that most likely won’t make their way to the developers. Worse yet, too many reports of minor issues will degrade my rating on Bugcrowd and prevent me from being invited to private bug bounty programs.
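For what it’s worth, the recommendation for the clickjacking class of issues is usually simple: disallow framing of sensitive pages altogether. As an illustration (not taken from any specific report), these are the standard response headers involved, the legacy X-Frame-Options header and the modern Content-Security-Policy directive:

```http
X-Frame-Options: DENY
Content-Security-Policy: frame-ancestors 'none'
```

With either header in place, browsers refuse to render the page inside a frame, which defeats the invisible-overlay trick that clickjacking relies on.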

Now Bugcrowd isn’t the only platform in this field, HackerOne being the other big player. I don’t have enough experience with HackerOne yet, so I cannot tell whether the same issues are present there. If somebody knows, I’d love to hear about it.

Comments

  • Sam Houston @ Bugcrowd

    Hi Wladimir, thanks for the feedback on Bugcrowd. We’ve been sharing this blog post internally and discussing it :)

    On your first point of some companies not investigating deeper security issues, I don’t think that’s a bug bounty thing. I think that’s an SDLC/internal security culture thing. A bug bounty will only exist within the current security culture and practices at a company; it doesn’t automatically create an internal system that perpetuates behavior. If a company isn’t properly investing in their SDLC, proper training of developers, or their security team isn’t resourced or empowered enough to address systemic issues, a bug bounty won’t necessarily be able to change that situation. It’s important that all companies work hard to create a healthy and functioning internal SDLC and vuln patching/handling process.

    Our internal security team that validates/triages bugs (Application Security Engineers, or ASEs) is made up of folks who have quite a bit of pentesting and security experience. What they’re asking you for is information that they can use to help explain to the customer, and for them to explain to others in their organization, the impact of the bug. Your POCs and write-ups help hasten the process of rewarding your bug, prioritizing it internally, and ultimately getting it fixed.

    It’s not necessarily or always the case that we don’t understand the bug (though that happens!), it’s more so that we want to make sure that everyone along the chain understands the impact of your finding.

    Counter to what you think, your write-ups and POCs can often be shared with the customer’s development team. Bugcrowd integrates with a team’s JIRA, Slack, etc – which makes it so your submissions can be distributed through their internal processes and systems. Or, of course, those folks can just log in to Bugcrowd to see that information.

    I realize that the managed bug bounty is a little bit of a different experience than what you’re used to with Mozilla or Google, but I’m hopeful that you’ll find it to your liking eventually. Without this management service, most companies wouldn’t have a bug bounty to begin with, or their self-managed bounty would provide a less-than-stellar experience for all involved. That’s why Bugcrowd primarily provides managed bounties to our customers.

    Thanks for your time – always open to hear feedback and enjoy the discussion!

    Sam Houston
    Senior Community Manager @ Bugcrowd

  • Wladimir Palant

    If my post sounds like I am criticizing Bugcrowd, that’s not really the intention. The main issue is indeed companies who decide to run a bug bounty program without having a healthy security culture. While that is enough to get the obvious issues fixed, it won’t necessarily make the overall product more secure.

    As to Bugcrowd staff handling submissions, I don’t really have enough information to judge their competence. My first issue here was that I didn’t recognize them as Bugcrowd employees. Sure, they have a “_bugcrowd” suffix in their names, but that didn’t make me realize that they weren’t employed by the product vendor. So I was stunned when they only seemed to be familiar with the impact of web server vulnerabilities but not browser extension vulnerabilities, for example. Also, it was too obvious that they didn’t have product developers to talk to when something was unclear. So the burden of explaining the impact of “unusual” issues lies on me here, and with the level of detail expected it’s not worth doing for issues that will likely be (mis-)prioritized as P4.

    Mind you, it took Mozilla a lot of time to make their bug bounty program run smoothly. While they would always involve developers early on and judge the impact of an issue properly without my help, there were other issues. For example, the steps required to receive a bounty for a submitted report used to be less than straightforward, and the delays could be significant. And reports affecting web services would be assigned to the wrong group by default, so that the relevant people couldn’t see them.