
April bounty stats

For those who haven't read the crap I write before: I tend to only log reports that are high-impact and above. The reasoning is that I can't be arsed to create a PoC, write up a report, and argue the toss with triage for $100. But even so, something like 80% of my reports leave me feeling messed around anyway, mostly through being descoped or randomly downgraded.

I had a theory that the high-impact reports were getting messed around disproportionately compared to the low-impact ones, so for April I decided to log everything I found.

Some are still in triage (platform or programme), but the results so far are:

3x high-impact

  • 1x accepted but downgraded (stored XSS downgraded to medium)
  • 1x descoped by programme ("no longer accepting submissions for this host")
  • 1x rejected by platform (triage error: commented, and will resubmit if no response)

6x medium-impact

  • 1x accepted and already paid out as per scope
  • 3x still in triage
  • 1x descoped by programme ("no longer accepting this type of bug")
  • 1x rejected by platform (triage error: commented, and will resubmit if no response)

It's a limited data set, and the final outcome for the majority has yet to be decided, but the general feel is that pretty much all the reports get messed around just the same ;)

u/6W99ocQnb8Zy17 — 21 hours ago

TL;DR the platforms are not independent, but direct competitors of the researchers

There may have been a time when the main platforms operated independently, but since they took PE funding (and it became all about the profit), they have found various ways to leverage the reports they get from researchers, such as selling the techniques and data to WAF vendors.

However, since some of them introduced a pentest-as-a-service (PTaaS) product, they have become even more overt about this.

As I have mentioned before, I tend to do a lot of my own custom research, and for a collection of the bugs that interest me, the discovery is waaaaay more complicated than the PoC (which is always a one-click script).

For example, desync or request header injection: for both, I have fully automated workflows that hunt out the raw vector, then permute the possible attacks to find workable payloads. From there, I manually finesse them into a clean PoC script, which goes into the report.
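
To make the permutation step concrete, here's a rough sketch of the shape of that workflow for header injection. Everything in it is illustrative (the separator list, the header names, and the injection points are placeholders, not my actual tooling), and a real run would fire these over raw sockets and diff the responses rather than just print them:

```python
# Illustrative sketch only: permute injection points x separators x
# smuggled headers into candidate payloads. All lists are placeholders.
from itertools import product

# Candidate line terminators a parser might honour where it shouldn't.
SEPARATORS = ["\r\n", "\n", "\r", "%0d%0a", "%0a"]

# Headers worth smuggling if a terminator lands.
INJECT_HEADERS = [
    "X-Forwarded-Host: evil.example",
    "Set-Cookie: canary=1",
]

# Request fields where the raw vector was found during the hunt phase.
INJECTION_POINTS = ["User-Agent", "Referer", "X-Request-Id"]

def candidates():
    """Yield (injection_point, payload) for every permutation."""
    for point, sep, hdr in product(INJECTION_POINTS, SEPARATORS, INJECT_HEADERS):
        yield point, f"probe{sep}{hdr}"

for point, payload in candidates():
    print(f"{point}: {payload!r}")
```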

But because it is often difficult to see from the PoC how to detect the underlying bug, it isn't unusual for the platform triage to ask questions about the detection approach (which I decline to answer).

In the last year, H1 in particular have become noticeably bolder about this, and the worst example so far was on a desync I logged earlier this year. The H1 platform triage literally refused to escalate the bug to the programme until I explained to their "internal team" how to scan for it. And it wasn't until I lolled and said no that they back-pedalled.

u/6W99ocQnb8Zy17 — 5 days ago

TL;DR the sooner you start your own research, the sooner you'll find bugs

As I have mentioned before, I tend to do a lot of custom research for techniques, which I use for red team, pentest and BB.

For BB specifically, I find that the most effective techniques aren't the ground-up new stuff (which quickly ends up in a WAF, or being reused/sold by the platform), but the edge cases of existing, well-known techniques.

I'll take a class that has already plateaued, and then extend it empirically in some way. For example, cache deception: when the original paper came out, the BB feeds were full of examples, but over the next 18 months or so they pretty much dried up.

So, my approach to the research was to build a grid of all the caches, mapping the extensions each one treated as static and the separators/encodings each one treated as path characters. Then I did the same for the common app stacks. The intersection is where I found the highest probability of something getting stuck in a cache unintentionally. Easy as pie.
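
As a toy illustration of the grid idea (the vendor behaviour below is entirely made up; the real grid comes from measuring each cache and each stack yourself), the intersection step boils down to something like:

```python
# Toy sketch of the grid intersection for cache deception.
# All behaviours below are placeholder data, not measured results.

CACHES = {
    # cache -> (extensions cached as static, delimiters passed through un-normalised)
    "cache_A": ({".css", ".js", ".avif"}, {";", "%00", "%0a"}),
    "cache_B": ({".css", ".ico"}, {";"}),
}

ORIGINS = {
    # app stack -> delimiters the origin treats as ordinary path characters,
    # so /account<delim>x.css still returns the dynamic /account response
    "stack_X": {";", "%00"},
    "stack_Y": {"%00", "%0a"},
}

def promising_urls(path="/account"):
    """Yield (cache, origin, url) combos where the cache keys on a static
    extension but the origin still serves the dynamic page."""
    for cache, (static_exts, passthrough) in CACHES.items():
        for origin, path_chars in ORIGINS.items():
            for delim in passthrough & path_chars:
                for ext in sorted(static_exts):
                    yield cache, origin, f"{path}{delim}x{ext}"

for combo in promising_urls():
    print(combo)
```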

It is also worth noting that research isn't a one-off process, but a constantly moving target. New tech is being released all the time (there are multiple new image file extensions being adopted right now), and config defaults change too. So, to make sure I stay up to date, I tend to re-run the above cycle periodically and adapt my workflow to match.

Nothing magical. Just some diligence and fun experimentation ;)

u/6W99ocQnb8Zy17 — 7 days ago