I discovered a browser bug (jakearchibald.com)
574 points by cgtyoder 4 months ago | 130 comments



I can echo his experience reporting browser bugs and provide my own reviews:

Firefox - By far the best. Quick response, usually from engineers. If it's important the fix will be quick.

Edge - No reply for months or years. When I have gotten replies, it's been to ask me to try with the current version. When I do and the bug still exists, it seems to go back to the bottom of the queue.

Chrome - Somewhat of a mixed bag. Sometimes responses are quick, sometimes they are from engineers. But most often I get replies that convey the person I'm speaking to is a very green QA type. I've gotten replies that the test case I provided doesn't reproduce the bug, because they had attempted loading it over the file:// protocol (of course, hardly anything works over the file protocol). I'm not sure; do they expect me to include a web server for them?

Safari - Only tried a couple of times, never gotten a whisper back.

I would rate my experiences as:

Firefox - A+

Chrome - C

Edge - D

Safari - F


I reported a very serious one to Edge recently and the team told me they would "backlog it and might be on a future release".

Chrome had already patched it, and Firefox never had it. (It was related to an incorrect implementation of the DOM spec's document inheritance, allowing cookie access from anywhere.)

I'll do a full write-up and blog it when I'm back from vacation, but to summarize: I was really unimpressed by the team at Edge and Microsoft Security.


Yeah that Chrome experience doesn't sound great. Fwiw I tend to put my test cases on jsbin or Glitch, but yeah, a Chrome engineer should know to put the page on a basic web server.

If anyone runs into problems like this, feel free to bug one of the Chrome dev rel folks, such as me.


Like I said, my impression is that often the people replying are not engineers. Here's one with that problem: https://bugs.chromium.org/p/chromium/issues/detail?id=674096...


Yeah, that's not great, but the commenter in #12 is a senior engineer on devtools, so at least the right person saw it in the end.


<soapbox> looking at that ticket, the response from google is catastrophic. the guy or gal handling the ticket has no idea about web browsers. the reporter shows up with a complete reproducer and the best google manages is throw clueless screenshots (for a brief fully textual error message no less!) at them? they've already done a bunch of your work for free. get your ass off the heap of money, ffs! </soapbox>

edit: oh, and i take my hat off to you for coming here and taking the heat. appreciated, having a real person to vent at is so special in the case of google...


To be fair, the screenshot does provide additional info, like the fact that they are using file://. It's often better to include more in a screenshot than less.

I would imagine that the Chromium team probably gets a lot of tickets. That means they need a lot of bodies to triage them. It's not like any Sr. Dev is going to be jumping at that job.


Yeah, it seems they need to take this issue seriously. It could be an avenue for attackers as well.


https://bugs.chromium.org/p/chromium/issues/detail?id=674096... was assigned 6 months ago. It's still open after 2.5 years. I would not call this a good outcome.


Sure, but priorities are a thing. I don't think it's fair to compare this to an origin model exploit.


Yeah, this is a UI bug in devtools for a feature not a lot of people use. Plus web worker bugs always get pushed to the bottom of the queue for every browser. So I don't have any problem with how long my weird issues take to get fixed.

It's more about comparing to Firefox where you usually get a quick response which at least conveys that the person on the other end understands what is being reported. I'm fine with the eventual outcome here. A comment acknowledging that it's a real bug (or not) would have been better, but being assigned to someone sort of acknowledges that, I guess.


> Plus web worker bugs always get pushed to the bottom of the queue for every browser.

Why is that?


It's pretty rare to use in user code. It might be well used in libraries, but I guess the standard developer experience is using libraries, not writing them.


Of course it’s rare because it’s unreliable and buggy as shit across browsers atm (tho a lot better than it was).

So... somewhat self-fulfilling


They should take these kinds of bugs seriously.


What would you rather happen? Browser vendors hire more people such that they can fix every valid bug ever filed?


Maybe.

A first good step would be if the billion dollar company would try to come up to the standard of the non profit.


*trillion dollar company.



Whoa, that page crashed my (long-running) Nightly (presumably exhausted RAM completely), and a freshly started one got to 4 GB for the tab process (and survived). Nice.


> If anyone runs into problems like this, feel free to bug one of the Chrome dev rel folks, such as me.

Hi, and thanks for being part of creating Chrome && reaching out here.

Now that I have a Chrome guy here, maybe you can help with some other annoying issues:

- QC internally at Google seems to be blissfully unaware of the fact that other browsers exist.

Examples:

- one glaring issue I'm running into in core Angular on a daily basis.

- Google Calendar performance is sometimes unreasonably slow in Firefox

- For weeks or months, Google search results would drive one CPU core to 100% twice a minute.


none of those are something a chrome dev rel person can help you with.

People like you are the reason we have to deal with three layers of low-level tech support before we can actually talk to somebody who knows what they're doing - please don't just spray your random complaints at anybody who happens to be related to google in any way.


And it's not like the Chrome dev rel team doesn't fight for these things, but ultimately the Chrome team doesn't have any control of prioritisation of bugs in other products (like Calendar).

The only way we're going to see a significant change is when those high up in the Google org chart start caring about products working well in other browsers; as long as it's up to each individual product we'll continue to see products prioritising time-to-market and targeting only the browser with the majority of the market.


That's the second reason to post it here: because if you see one googler there are hundreds nearby, and some of them might be on the relevant team.

(And if I point out often enough what happened last time a mega corp abused their position to market their browser, then maybe some of them will start talking about it internally?)


> none of those are something a chrome dev rel person can help you with.

Chrome dev rel persons are Google employees. I've been surprised by how much can be fixed once somebody on the inside is aware of it.

Also, it seems a lot of googlers hang around here, so it might get picked up based on that too.

> People like you are the reason we have to deal with three layers of low-level tech support before we can actually talk to somebody who knows what they're doing -

Blame Google, not me. Where there is a bug tracker I often use it.

Unfortunately there isn't a public one for Search, Calendar etc.

When they choose to only engage on HN, and only with one guy from dev rel, that's where I'll post.


well thanks for doing your best to ensure they stop engaging on HN, i guess...


Tell me a better way to get Google to fix things, I'm all ears.

Complaining on social media, bugging people you know inside the walls, and calling them out here works.


You have the wrong attitude here. Twice, actually.

The fact that you haven't found a good way to get Google to fix things for you isn't an excuse to just charge forward with the best way that you have found.

Suppose you're stuck in traffic. The most effective way to move forward might really be to just lay on your horn. The people in front of you will probably eventually yield (most just eager to put distance between themselves & you, others generously assuming that you have some kind of emergency). That doesn't make it okay for you to do that.

The second time you had the wrong attitude is when you demanded that people teach you a better way. Maybe somebody on HN will generously extend that favor to you, but if they do then realize that they've done you a favor -- they didn't owe it to you.


Stop using Google products then. Get a new career.


And this is why you don't get immediate access to dev folks!


I have direct access to Mozilla devs. Look how much I bother them.


Did you report the Safari bug to Apple or to WebKit.org? I've had good luck getting responses and fixes from the WebKit guys.


They seem to get the things done professionally and effectively.


Personally over many bug reports and many years:

A+ experience with the Chrome team

E with Edge

F with Safari

The Chrome devs have been astonishingly good. My experience has repeatedly been that if they are given a good bug report, they get it fixed, and often really fast. Or they reply with excellent technical reasons why not. Also, multiple times I have seen extreme corner-case regressions get magically fixed without me reporting anything!

Edge - always waffle and misdirection in reply to bug reports. I only bother to give them very well documented, repeatable bugs, plus an attached repro file, for things I really need fixed. However: I did have one good experience recently, because I reported the bug against the Edge beta, on a feature where they were actively catching up to the other browsers (though the interaction was still poor).

Safari: hopeless. Report a critical bug with an excellent repro file, and get no response and no fix.

No rating for Firefox because I haven't bothered reporting any bugs to them in the last few years.

Edit: this is for non-security related bugs.


Apple hears ya, Apple don't care.


Yeah, apparently from people familiar with the deep black hole that is Radar, you're supposed to keep submitting bugs; dupes count as a vote or something.

It sounds absolutely horrendous.


I once reported a Chrome issue. It got no attention until I found a different issue I thought was related and commented there, which got the attention of someone competent, who realized it was a hardware bug they'd fixed before but had regressed.

Not a security bug, just that a very large number of websites didn't load.


In other words, the experience gets progressively worse the bigger the company. I'm not surprised; having worked in companies of that scale, the amount of bureaucracy and general red tape to do the smallest of things is absolutely irritating.


Did you include a 90 day public disclosure window?


and Samsung's browser?


The Microsoft experience reminded me of the time when mail to security@apple.com went to the building security office, who just quietly deleted bug reports. Poor processes and communication are one of the worst classes of security problem.


Microsoft used to have a group, Trustworthy Computing (TWC), that was where all the security expertise lived. TWC was destroyed in 2014. From the outside, it seemed like that was the point where the reporter/outside security engagement story stopped, because the people that held responsibility for it Microsoft wide were either fired or re-orged into a role where they didn't have broad authority any more. Now, you get stuff like this, where people on the "security" group (which is squirreled away in a totally different part of Microsoft) don't have visibility into the Edge bug tracker any more.

Internally, Microsoft is organized a lot like the Federal government, only worse.


One can see a rationale in not having a central security group - every team should have a security focus (e.g. by having expertise & champions within each group). You can't tack on security; you have to build it in.


I disagree - the motivations of the security group and the product group are different. If the security team is under the product team leadership, the security team is disincentivized to interrupt a product launch due to a security issue, because they're rewarded by shipping a product, not making it secure. Really, you should have both: you should have a security team that sits with the product team and works with them through the lifecycle of the product, and has continuity (i.e. it's the same security people with your product team through the life of the product, mostly), but that security team reports to different management and has their promotions, bonuses, etc. handled by a different leadership chain than the product team.


I think you need both. Same as I think you need i10n and a8y at both layers.


A8y? That can't be a thing... if it is, we should put a stop to it. axb maybe?


Uhhh...what are i10n and a8y?


Uh, do you mean a11y?


Last time I reported a security issue to Microsoft, I got a reply the same day and a confirmation that it was in fact an issue some days later. And then a few days after that they notified me that my report was eligible for a bounty (I didn't have to ask).

This was the opposite of my previous experience, where the bug was acknowledged 9 months later and then fixed another 3 months after that.

I wonder if it just depends on whether your report ends up in an escalation path with lots of busy people.


I wonder how many reports they got this time, and whether that's what's taking them so long to respond.


product-security@apple.com is the real one these days.

Of course, this gets me thinking, what kind of super powers do these addresses have that allows people to send potentially malicious things there to be disassembled and analyzed? I suspect they are quarantined in some way, but it would be interesting to hear from the ops sec crowd how this gets handled.


Often badly. I remember Tavis Ormandy crashing Symantec's email server by sending an exploit zipped with a standard password combo; the server unzipped it and crashed itself.

https://twitter.com/RyPeck/status/732405198644228096


Yes – I even got a nice email from someone apologizing about that and explaining that they were trying to get the security@apple.com people to at least forward messages when I did a full disclosure release after not receiving a response.


> they were trying to get the security@apple.com people to at least forward messages when I did a full disclosure release after not receiving a response

That sounds a bit dysfunctional on Apple's part that they can't exert that kind of control over their own employees for an issue with potentially enormously negative consequences.


> That sounds a bit dysfunctional on Apple's part that they can't exert that kind of control over their own employees for an issue with potentially enormously negative consequences.

I'm not saying it isn't dysfunctional, but it sounds like every single large company I've ever worked with or for.

Especially when "security" is provided by a third-party security company.


This was a while back, in the unverified TLS certificate era, so I'm assuming they've gotten more serious about it.


Great to hear they are paying close attention to the concerns sent lately.


Our IT people were apparently incapable of creating a way to receive emails that didn't flag zip or tar files as security threats and block them. We've sometimes had to ask people to stick things in Dropbox and share them with us that way.

We've had similar issues with people submitting code for remote interviews.


That seems like a better approach to me. With email, you're at best exposing a disk-filling service to the internet and most likely looking at exploits running on servers with a fair amount of interesting data. There's also a fun race condition between the time a file is scanned and when someone opens it, which isn't as solved a problem as it should be.

With requesting that people send you a URL, you're in control of when and where it's accessed and things like Safe Browsing are visible to the recipient.


It's quite incredible how the web managed to get along with such a janky sandbox model.

It's very important that users trust their browser and won't hesitate a second to enter an unknown URL. They see "going to a webpage" as the equivalent of looking at a poster in the street, not eating candy provided by a random stranger.

Eroding this trust would ruin it for everyone, even well behaved static websites without javascript.

Maybe it's time to reconsider giving the same execution rights to Gmail and to unknown web pages?


Do you want to reinforce established monopolies? Because I can't think of a better way of doing that than having a technical difference between "trusted" and "untrusted" sites.


Well, I personally would be fine with the fair policy of disabling JS everywhere, but I'm sure most would not agree, so what's the alternative?

If anything, Spectre class attacks really showed how hard it is to properly sandbox arbitrary programs.

Yes, the CPUs are complex, but the attacks happen at a high conceptual level, a level at which the CPU is fairly simple. It's not like they rely on an obscure detail or bug.

No one (publicly) figured those out for 2 decades when the involved ideas (speculation, cache timings) are well known, common and did not change.

This indicates that for something with such a large surface as the various web standards, where both the spec and the implementations are changing all the time, there is very little hope.


I now use Brave browser exclusively, with JavaScript and other things turned off by default.

Turning it on for trusted websites is one click away, once per domain, and it could save me in the future.


What about differentiating applications and web sites? The line between the two is blurry, I know, but I would be happy if the document metaphor were divorced from the application one.


Absolutely, there should be a different port for web pages than for applications. Even if we started by just disabling JS on port 80.


But what about applications that link to web pages or web pages that link to applications?

What valid reason is there to have an "application" and any documentation or related HTML material from the same site on different ports? Or, as some have pointed out elsewhere when this has come up, to have "applications" and "documents" use completely different protocols, languages and native clients, when both are often used together?


'Application' can be backward compatible with documents just fine. That does not mean a new category 'Document' that has reduced capability is useless.

Banking websites for example don't need to be Applications and added protection for cross-site scripting etc. would be beneficial. Restrict things further and you default to supporting screen readers etc.


What definitions of "application" and "document" are being used here? Banking websites are applications in terms of their functionality - they're certainly not documents. At least not the parts where I can access and modify my account.


Client side code including third party media players etc.

IMO it's a simple question: "can you do the same thing with a sheet of printed paper?" I can fill out paper forms and hand them to someone just fine. Don't forget a check is really just a piece of paper with a form on it.


So a spreadsheet running in the client with javascript or WASM would be an application, but a spreadsheet running on the backend wouldn't?

I'm not trying to be overly pedantic or combative here but making a distinction between client-side and server-side code seems arbitrary. I understand it in terms of managing privilege - you can't control what someone does on a remote server, and that code isn't running on your machine, but it seems like the meaningful distinction here isn't between documents and applications but between local and remote applications.


The entire point is managing privileges.

Just because I type in yourdomain.com does not mean I want you to be able to start playing death metal from my speakers. What about typing yourdomain.com means I want you to break my back button? Show a popup rather than close the browser? Churn CPU cycles crypto mining or do just about anything beyond hand me a document? Display a flashing GIF?

The current model is basically handing complete control over my machine to a third party that may be compromised by anyone any time I click a random link.

The single greatest web innovation in the last 30 years was readability mode.


>Just because I type in yourdomain.com does not mean I want you to be able to start playing death metal from my speakers.

Given the way HTTP works, I think it kind of does. It means you want the server at yourdomain.com to send you whatever content that URL points to, if anything. Which, granted, given the complexity of the web now, does seem fraught with danger, but what alternative would there be? Profiling each site for embedded content, size and complexity and whitelisting the elements before rendering? Browsers already let you block scripts, disable images and autoplay, overwrite or disable stylesheets and mute tabs, that would seem to be sufficient.

>The current model is basically handing complete control over my machine to a third party that may be compromised by anyone any time I click a random link.

That's a good point, but separating "applications" from "documents" wouldn't solve that problem, since that's presumably the model the applications would still be using. Sure, static pages that aren't running client side code would be safe, but those pages already are safe.

Anything that could be done to make a separate application space run safely could be done to make them run safely on the existing web, couldn't it?


>>Just because I type in yourdomain.com does not mean I want you to be able to start playing death metal from my speakers.

>Given the way HTTP works, I think it kind of does. It means you want the server at yourdomain.com to send you whatever content that URL points to, if anything.

I think the issue is really that the client (web browser) shouldn't try to interpret whatever was sent back if it isn't considered safe. In other words, there should be limits on what the browser will do with whatever is sent back over HTTP.


> Anything that could be done to make a separate application space run safely could be done to make them run safely on the existing web, couldn't it?

Yes, but you could tighten things down and treat the modern web as a transient application delivery system. Where users have to explicitly grant access to each application, and grant it whichever specific permissions it needs. I also wouldn't be against having two browsers, one for running applications and one to browse the web.


> The single greatest web innovation in the last 30 years was readability mode.

For you. From what I see, everyone else seems to be enjoying the dancing and singing monkeys online.


the monkeys[1] are enjoying themselves, dancing and singing, at the expense of everybody else.

[1] web application pushers

edit: hn "syntax"


By monkeys I meant literal monkeys, animations and videos and games and generally “frivolous” features.


The problem is that the dancing and singing monkeys enabled advertising to be increasingly lucrative to the point where the monkeys are running the show.


I agree, but it would be much easier if we had a new document-only port instead, since it wouldn't break most of the Internet.


how about 80 vs 443? :)


Well, one would like to have their document safe from snooping and modification too.


That's a complete non-starter as long as advertising pays for the web pages. Or even longer, if the replacement compensation methods require JS as well.


Maybe there would be less money in it, but it is possible to do advertising completely on the backend as well. You send the advertiser some targeting info, e.g. what the article(s) on the page talk about, what your site is about, and an estimated profile of your readers, and the ad network gives you the ad images and text in some nice format.

Yes, it does require confidence that the publisher will actually show your ad, but it was the same for print journals. On the web you can still track outgoing links and see the referer, so you can know the actual impact of your ads anyway.


Aren't we sort of starting down that path already? IIRC Chrome only allows certain operations on HTTPS domains as of late such as webcam or microphone access.

It seems like this sort of iterative securing of different things over time could be a good way to secure the web while also giving time for older sites to upgrade.


Anyone can get an HTTPS cert, and with Let's Encrypt it's free. Restricting dangerous features to only websites that can demonstrate their traffic hasn't been man-in-the-middled is very different from giving established sites more expansive permissions.

(Disclosure: I work at Google, though not on browsers)


The key thing is differentiating one set of pages from another set of pages - putting security boundaries into a hypertext system that was originally designed to allow mixing resources from different sources.


The web didn't use to be able to do much, and we're using browsers that depend on tons of multi-decade-old code, so I see how it happened. Agreed on the main point though.


I use a private window for banking/PayPal; I don't trust the extensions or the other tabs, so for these cases I get more security.


That hardly helps. For true security, devote a device purely to banking: preferably a diskless device running an up-to-date live CD of a security-oriented distro, with no rewritable storage attached, connecting out over a VPN through an equally dedicated, firewalled router. Then you're just left to worry about your BIOSes getting infected via an unpatched or 0-day exploit.


He identified his threat model (other tabs + addons doing something shady) and made a security assessment based off of it.

You're here bullshitting that he needs "true security" like he's dealing with APTs trying to access his bank account. He's not. He's concerned about other tabs + addons, and private browsing mode is a solution with the slightest friction for his threat model.

Please, in the future, try making security assessments based on the actual threat model.

EDIT: "threat" instead of "thread".


This was pretty confusing to read until it came to my mind that "threat model" exists :)


If there were a big target on my back I would do that, but since I am running Linux, am not a rich person, and don't have an important job, I assume that I will be attacked by regular malware and not by skilled hackers.


Some browsers allow you to have many profiles. So you can use one profile for banking and e-mail, and another profile when browsing dubious sites.


Sorry, but we need a Turing-complete language for ads and tracking.

Preferably with JITting, and unfettered access to the GPU and other misc. peripherals like GPS, webcam, etc. In return you get free cat videos.

You’re welcome.


No. The server just needs to send over an image and log the IP of the requestor for reconciliation at the end of the month.


The really ironic thing is trying to decide (before the internet was ever a thing, say 50 years ago) which of the 2 scenarios is more dystopian.


…and with unfettered access to USB devices. WebUSB my ass.


How else are we going to tailor our adverts based on the music on your iPod Shuffle?


No. The burden needs to be on the user to understand their own security. If we stopped taking the burden out of users' hands and instead ensured that everyone on the Internet understood that anything they access becomes data on their computer/device, we'd have a smarter Internet. Frankly, I think if we made people understand that they have a responsibility to choose what they download, there might be a more vocal group demanding the ability to do whatever they want with data transmitted to their computer, save for directly malicious acts against other users.

The browser should be only two things: a client between a user and a server, passing information; and a parser which displays that information on the client-side. The moment a browser alone begins controlling what the user sees, or does not see, without the user having the ability to control it, we have a major problem.

That becomes a security problem, a privacy problem, and a functionality problem. All data on the Internet should be treated the same by all browsers' client functions. The display may vary (e.g. the difference between Lynx and Firefox), but all data should be treated equally and the user should have both the authority and the responsibility for their own computer.


> The moment a browser alone begins controlling what the user sees, or does not see, without the user having the ability to control it, we have a major problem.

What you're describing would be inordinately taxing for even the most experienced developer, not to mention the average internet user. The only way this could possibly be viable would be if we used gopher:// instead of http(s)://

Currently, there are tools such as Privoxy Actions & Filters, which allow you to do 100% of what you're describing, Greasemonkey which allows you to do ~80% of what you describe, or uMatrix which allow you to do quite a lot. The prerequisites for using those range from full-blown programming skill (for the former 2) to managing a relatively advanced in-browser UI (for uMatrix), and having a lot of spare time. For every single webpage you visit on the web. This isn't viable for 99% of people.


>The prerequisites for using [Privoxy & Greasemonkey] range from full-blown programming skill...

Neither Privoxy nor GreaseMonkey require actual programming knowledge. I do not program, and I use Greasemonkey with some regularity - I use a combination of userscripts.org scripts and my own. They require a basic knowledge of specific scripting language implementations. Besides, Greasemonkey & uBlock/uMatrix both have a right-click menu entry that amounts to "hide anything like this".

You're saying that 99% of the Internet's users can't handle being required to interact with the most basic front-end technologies which power the network they use every day, and which they willingly give up their private information to - thus having no ability to provide evidence for their trust or any expectation of their privacy. Frankly speaking, my mindset is that if it's not viable for them to understand it, it shouldn't be viable for them to use it. Uneducated users lead to nothing but trouble; and sure, I'll grant that I'm suggesting educating them the hard way, but I think the Web as a whole would be better off in the long run from smarter users and dumber clients. Heck, I think the whole world would be better off with smarter users and dumber clients/terminals/systems.

Also, please, don't even begin to suggest that uneducated users should be directed to Gopher holes. You know as well as I do that if there's enough people going back to it, some yutz is gonna start trying to figure out how to add streaming this and scripted that to Gopher, and then Gopherspace will be ruined. And sure, on a technical level it would be a neat project to look at. But to reference Jurassic Park, a lot of very smart people have been so amazed at what they could do with the Web and the Internet as a whole, that they never really stopped to ask if they should do it.


> Oh, I guess the vulnerability needs an extremely tenuous name and logo right? Here goes

I admire the extra touch here :)


I enjoyed the WhatsApp-looking box that explained the server/client conversation.


The PDF was great too ;-)


lol no


For people downvoting forgot-my-pw, "lol no" was the complete content of the pdf.


I, too, discovered a browser bug. Specifically with mutation observers in Safari (but not Chrome, or other WebKit-likes) in a particular DOM event scenario. Fully reproducible. Not a word from any team at Apple, and no acknowledgement of the bug whatsoever.

The situation is a common one wrt SPAs, routing, and changing a tree based on history state; I'm sure other frameworks have run into it. My brief experience documenting the issue solidified my position that I will never do it again.


This is really nice research! Simple, effective, and brutal.

This reminds me of the research that went into finding issues in the media plugin model. Essentially, once the security community discovered that Java, Flash, and other plugins didn't always follow the same rules as the browser, it became a free bug-hunting exercise until the plugin model just died.

I expect there are some "side channel" ways to create high-resolution timers in browsers that have removed built-in support for them, for instance: WebAssembly? WebGL subroutines?

Anyway, congratulations.


This was such a nasty bug for Edge. Just visiting a page meant I could read your private Messenger messages or your email. You could even automate resetting the password to an account, then automatically exfiltrate the reset URL!


superlogout.com v2.0


That's a really well-explained and clearly presented writeup of the bug and how it can be exploited as a vulnerability.


I've found a couple of browser bugs in different browsers (but nothing security-related). Nothing I've reported to browser teams has ever been fixed, even with simple standalone test cases. It's definitely easier just to write a workaround and call it good.


Microsoft claims to be developer friendly these days, but they are clearly not white-hat friendly.


My entirely anecdotal experience is that they are white-hat friendly some of the time. My last experience was super good.


I think it depends on the team.


Another symptom of browser specs getting too complicated.


In this case it was a symptom of the specs being too simple: the use of range requests wasn't specified.


These just happen to be two anecdotes about two browser dev teams, and they shouldn't be generalized.

Everyone who has to deal with n-th layer tech support regularly (where n > 2) knows that even there it's hit or miss. Sometimes you file a bug report and get a "thanks, fixed!" an hour later. Sometimes you spend an hour to gather all the data upfront only to be painstakingly taken through the exact same data gathering process step by step. By email. Over days. On a "4h response" SLA (and they always just barely make it, not considering the value of the "response").

Randall Munroe has the best description: https://www.xkcd.com/806/


I'm not familiar with the Web Audio APIs. Was the Edge bug effectively interpreting the stream of bytes from the cross-origin request as an "audio stream", with the OP just writing a thing to convert it back into a string?


Any stream of bytes is valid PCM (that's the point). The issue is that Edge would first allow a redirect to a cross-origin resource, then leak the entirety of that resource's data through the Web Audio API (an API for low-level audio processing and synthesis), ultimately allowing the attacker to reconstruct the cross-origin resource.
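As a rough illustration (with made-up helper names; a real exploit would read sample values out of an AudioBuffer), treating raw bytes as unsigned 8-bit PCM is a trivially invertible mapping, which is what makes the leak lossless:

```javascript
// Sketch only: if a decoder treats each raw byte as one unsigned 8-bit PCM
// sample, byte 0 maps to -1.0, byte 128 to 0.0, byte 255 to ~+0.992.
function bytesToPcm(bytes) {
  return Array.from(bytes, b => (b - 128) / 128);
}

// An attacker who can read the samples simply inverts the mapping
// to recover the original bytes of the cross-origin resource.
function pcmToBytes(samples) {
  return Uint8Array.from(samples, s => Math.round(s * 128) + 128);
}
```

The round trip is exact because each byte lands on a distinct float value, so no information is lost in either direction.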


> Lol no.

That hurts, Jake :(


bwhahahaha


Is it Tuesday?


Nice one!


tip of the iceberg?


[flagged]


Could you please not post unsubstantive comments to Hacker News? That's not what this site is for.

https://news.ycombinator.com/newsguidelines.html


First paragraph made me chuckle.


  For example, the request may have the following header:
  Range: bytes=50-100
  …which is requesting bytes 50-100 (inclusive) of the resource.
I haven't finished the article, but I've seen how this movie ends...
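For anyone who wants to poke at the mechanics, here's a minimal sketch of how a server might parse that header before answering 206 Partial Content (parseRange is a hypothetical helper, not something from the article):

```javascript
// Sketch: turn a "Range: bytes=50-100" header value into inclusive
// start/end byte offsets, clamped to the resource size.
function parseRange(header, size) {
  const m = /^bytes=(\d*)-(\d*)$/.exec(header);
  if (!m || (m[1] === '' && m[2] === '')) return null;
  // A suffix range like "bytes=-500" means the final 500 bytes.
  const start = m[1] === '' ? Math.max(size - Number(m[2]), 0) : Number(m[1]);
  // An open range like "bytes=900-" runs to the end of the resource.
  const end = m[1] === '' || m[2] === '' ? size - 1 : Math.min(Number(m[2]), size - 1);
  return start <= end && start < size ? { start, end } : null;
}

console.log(parseRange('bytes=50-100', 1000)); // { start: 50, end: 100 }
```

Note that both ends are inclusive, exactly as the article describes, which is an easy detail to get wrong.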


Has HN bet big money on Firefox/Mozilla? All news about other web browsers is bad, except for Firefox. HN is now Mozilla's microphone.


HN doesn't bet any money on anything. Please don't post unsubstantive comments here.



