Firefox - By far the best. Quick response, usually from engineers. If it's important the fix will be quick.
Edge - No reply for months / years. When I've gotten replies back, it's been to ask me to try with the current version. When I do and the bug still exists, it seems to go back to the bottom of the queue.
Chrome - Somewhat of a mixed bag. Sometimes responses are quick, sometimes they are from engineers. But most often I get replies that convey the person I'm speaking to is a very green QA type. I've gotten replies that the test case I provided them doesn't reproduce the bug, because they had attempted loading it with the file:// protocol (of course hardly anything works with the file protocol). I'm not sure, do they expect me to include a web server for them?
Safari - Only tried a couple of times, never gotten a whisper back.
I would rate my experiences as:
Firefox - A+
Chrome - C
Edge - D
Safari - F
Chrome had already patched it and Firefox never had it. (It was related to an incorrect implementation of the DOM spec's document inheritance that allowed cookie access from anywhere.)
I'll do a full write up and blog it when I'm back from vacation but to summarize I was really unimpressed by the team at Edge and Microsoft Security.
If anyone runs into problems like this, feel free to bug one of the Chrome dev rel folks, such as me.
edit: oh, and i lower my hat to you for coming here and taking the heat. appreciated, having a real person to vent at is so special in the case of google...
I would imagine that the Chromium team probably gets a lot of tickets. That means they need a lot of bodies to triage them. It's not like any Sr. Dev is going to be jumping at that job.
It's more about comparing to Firefox where you usually get a quick response which at least conveys that the person on the other end understands what is being reported. I'm fine with the eventual outcome here. A comment acknowledging that it's a real bug (or not) would have been better, but being assigned to someone sort of acknowledges that, I guess.
Why is that?
So... somewhat self-fulfilling
A first good step would be if the billion dollar company would try to come up to the standard of the non profit.
E.g. a js closure memory leak:
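Roughly the shape I mean (a made-up minimal sketch of the well-known shared-scope pattern, not the exact code from my report):

    // Hypothetical: `retained` never touches `big`, but on engines that
    // share one closure scope per call, `big` stays reachable anyway.
    var retained;

    function run() {
      var big = new Array(1000000).join('x');   // large string

      function unused() {
        return big;     // pulls `big` into the shared closure scope
      }

      retained = function () {
        return 42;      // keeps that scope (and `big`) alive
      };
    }

    setInterval(run, 100);   // heap grows steadily on affected engines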
Hi, and thanks for being part of creating Chrome && reaching out here.
Now that I have a Chrome guy here, maybe you can help with some other annoying issues:
- QC internally at Google seems to be blissfully unaware of the fact that other browsers exist.
- one glaring issue I'm running into in core Angular on a daily basis.
- Google Calendar performance is sometimes unreasonably slow in Firefox
- For weeks or months, Google search results would drive one CPU core to 100% twice a minute.
People like you are the reason we have to deal with three layers of low-level tech support before we can actually talk to somebody who knows what they're doing - please don't just spray your random complaints at anybody who happens to be related to google in any way.
The only way we're going to see a significant change is when those high up in the Google org chart start caring about products working well in other browsers; as long as it's up to each individual product we'll continue to see products prioritising time-to-market and targeting only the browser with the majority of the market.
(And if I point out often enough what happened last time a mega corp abused their position to market their browser, then maybe some of them will start talking about it internally?)
Chrome dev rel persons are Google employees. I've been surprised by how much can be fixed once somebody on the inside is aware of it.
Also it seems a lot of Googlers hang around here, so it might get picked up based on that too.
> People like you are the reason we have to deal with three layers of low-level tech support before we can actually talk to somebody who knows what they're doing -
Blame Google, not me. Where there is a bug tracker I often use it.
Unfortunately there isn't a public one for Search, Calendar etc.
When they choose to only engage on HN, and only with one guy from dev rel, that's where I'll post.
Complaining on social media, bugging people you know inside the walls, and calling them out here works.
The fact that you haven't found a good way to get Google to fix things for you isn't an excuse to just charge forward with the best way that you have found.
Suppose you're stuck in traffic. The most effective way to move forward might really be to just lay on your horn. The people in front of you will probably eventually yield (most just eager to put distance between themselves & you, others generously assuming that you have some kind of emergency). That doesn't make it okay for you to do that.
The second time you had the wrong attitude is when you demanded that people teach you a better way. Maybe somebody on HN will generously extend that favor to you, but if they do then realize that they've done you a favor -- they didn't owe it to you.
A+ experience with the Chrome team
E with Edge
F with Safari
The Chrome devs have been astonishingly good. My experience has repeatedly been that if they are given a good bug report, they get it fixed and often fix it really fast. Or they reply with excellent technical reasons why not. Also, multiple times I have seen extreme corner-case regressions get magically fixed without me reporting anything!
Edge - always waffle and misdirection in reply to bug reports. I only bother to give them very well documented, repeatable bugs, plus an attached repro file, for things I really need fixed. However: I did have one good experience recently, because I reported the bug against the Edge beta, on a feature where they were actively working to catch up with the other browsers (though the interaction was still poor).
Safari: hopeless. Report a critical bug with excellent repro file, and get no response, and no fix.
No rating for Firefox because I haven't bothered to report any bugs to them in the last few years.
Edit: this is for non-security related bugs.
It sounds absolutely horrendous.
Not a security bug, just that a very large number of websites didn't load.
Internally, Microsoft is organized a lot like the Federal government, only worse.
This was the opposite of my experience with my previous report, where the bug was acknowledged 9 months later and then fixed another 3 months after that.
I wonder if it just depends on whether your report ends up in an escalation path with lots of busy people.
Of course, this gets me thinking, what kind of super powers do these addresses have that allows people to send potentially malicious things there to be disassembled and analyzed? I suspect they are quarantined in some way, but it would be interesting to hear from the ops sec crowd how this gets handled.
That sounds a bit dysfunctional on Apple's part that they can't exert that kind of control over their own employees for an issue with potentially enormously negative consequences.
I'm not saying it isn't dysfunctional, but it sounds like every single large company I've ever worked with or for.
Especially when "security" is provided by a third-party security company.
We've had similar issues with people submitting code for remote interviews.
With requesting that people send you a URL, you're in control of when and where it's accessed and things like Safe Browsing are visible to the recipient.
It's a very important thing that users trust their browser and won't hesitate a second to enter an unknown URL. They see "going to a webpage" as the equivalent of looking at a poster in the street, not eating candy provided by a random stranger.
Maybe it's time to reconsider giving the same execution rights to Gmail and unknown web pages?
If anything, Spectre class attacks really showed how hard it is to properly sandbox arbitrary programs.
Yes, the CPUs are complex, but the attacks happen at a high conceptual level, a level at which the CPU is fairly simple. It's not like they rely on an obscure detail or bug.
No one (publicly) figured those out for two decades, even though the ideas involved (speculation, cache timings) are well known, common, and did not change.
This indicates that for something with such a large surface as the various web standards, where both the spec and the implementations are changing all the time, there is very little hope.
Turning it on for trusted websites is one click away, once per domain, and it could save me in the future.
What valid reason is there to have an "application" and any documentation or related HTML material from the same site on different ports? Or, as some have pointed out elsewhere when this has come up, to have "applications" and "documents" use completely different protocols, languages and native clients, when both are often used together?
Banking websites, for example, don't need to be applications, and added protection against cross-site scripting etc. would be beneficial. Restrict things further and you get support for screen readers etc. by default.
IMO it's a simple question: "can you do the same thing with a sheet of printed paper?" I can fill out paper forms and hand them to someone just fine. Don't forget a check is really just a piece of paper with a form on it.
I'm not trying to be overly pedantic or combative here but making a distinction between client-side and server-side code seems arbitrary. I understand it in terms of managing privilege - you can't control what someone does on a remote server, and that code isn't running on your machine, but it seems like the meaningful distinction here isn't between documents and applications but between local and remote applications.
Just because I type in yourdomain.com does not mean I want you to be able to start playing death metal from my speakers. What about typing yourdomain.com means I want you to break my back button? Show a popup rather than close the browser? Churn CPU cycles crypto mining or do just about anything beyond hand me a document? Display a flashing GIF?
The current model is basically handing complete control over my machine to a third party that may be compromised by anyone any time I click a random link.
The single greatest web innovation in the last 30 years was readability mode.
Given the way HTTP works, I think it kind of does. It means you want the server at yourdomain.com to send you whatever content that URL points to, if anything. Which, granted, given the complexity of the web now, does seem fraught with danger, but what alternative would there be? Profiling each site for embedded content, size and complexity and whitelisting the elements before rendering? Browsers already let you block scripts, disable images and autoplay, overwrite or disable stylesheets and mute tabs, that would seem to be sufficient.
>The current model is basically handing complete control over my machine to a third party that may be compromised by anyone any time I click a random link.
That's a good point, but separating "applications" from "documents" wouldn't solve that problem, since that's presumably the model the applications would still be using. Sure, static pages that aren't running client side code would be safe, but those pages already are safe.
Anything that could be done to make a separate application space run safely could be done to make them run safely on the existing web, couldn't it?
>Given the way HTTP works, I think it kind of does. It means you want the server at yourdomain.com to send you whatever content that URL points to, if anything.
I think the issue is really that the client (web browser) shouldn't try to interpret whatever was sent back if it isn't considered safe. In other words, there should be limits on what the browser will do with whatever is sent back over HTTP.
Yes, but you could tighten things down and treat the modern web as a transient application delivery system. Where users have to explicitly grant access to each application, and grant it whichever specific permissions it needs. I also wouldn't be against having two browsers, one for running applications and one to browse the web.
For you. From what I see, everyone else seems to be enjoying the dancing and singing monkeys online.
 web application pushers
edit: hn "syntax"
Yes, it does require confidence that the publisher will actually show your ad, but it was the same thing for the journals. On the web you can still track outgoing links and have the referer, so you can know the actual impact of your ads anyway.
It seems like this sort of iterative securing of different things over time could be a good way to secure the web while also giving time for older sites to upgrade.
(Disclosure: I work at Google, though not on browsers)
You're here bullshitting that he needs "true security" like he's dealing with APTs trying to access his bank account. He's not. He's concerned about other tabs + addons, and private browsing mode is a solution with the slightest friction for his threat model.
Please, in the future, try making security assessments based on the actual threat model.
EDIT: "threat" instead of "thread".
Preferably with JITting, and unfettered access to the GPU and other misc. peripherals like GPS, webcam, etc. In return you get free cat videos.
The browser should be only two things: a client between a user and a server, passing information; and a parser which displays that information on the client-side. The moment a browser alone begins controlling what the user sees, or does not see, without the user having the ability to control it, we have a major problem.
That becomes a security problem, a privacy problem, and a functionality problem. All data on the Internet should be treated the same by all browsers' client functions. The display may vary (e.g. the difference between Lynx and Firefox), but all data should be treated equally and the user should have both the authority and the responsibility for their own computer.
What you're describing would be inordinately taxing for even the most experienced developer, not to mention the average internet user. The only way this could possibly be viable would be if we used gopher:// instead of http(s)://
Currently, there are tools such as Privoxy Actions & Filters, which allow you to do 100% of what you're describing, Greasemonkey, which allows you to do ~80% of what you describe, or uMatrix, which allows you to do quite a lot. The prerequisites for using those range from full-blown programming skill (for the former two) to managing a relatively advanced in-browser UI (for uMatrix), and having a lot of spare time. For every single webpage you visit on the web. This isn't viable for 99% of people.
Neither Privoxy nor GreaseMonkey require actual programming knowledge. I do not program, and I use Greasemonkey with some regularity - I use a combination of userscripts.org scripts and my own. They require a basic knowledge of specific scripting language implementations. Besides, Greasemonkey & uBlock/uMatrix both have a right-click menu entry that amounts to "hide anything like this".
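For scale, a hand-written userscript that does the same job is barely more than this (hypothetical selector, nothing site-specific):

    // ==UserScript==
    // @name   Hide the annoying box
    // @match  https://example.com/*
    // @grant  none
    // ==/UserScript==
    // Roughly what a "hide anything like this" entry boils down to:
    document.querySelectorAll('.promo-banner').forEach(function (el) {
      el.style.display = 'none';
    });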
You're saying that 99% of the Internet's users can't handle being required to interact with the most basic front-end technologies which power the network they use every day, and which they willingly give up their private information to - thus having no ability to provide evidence for their trust or any expectation of their privacy. Frankly speaking, my mindset is that if it's not viable for them to understand it, it shouldn't be viable for them to use it. Uneducated users lead to nothing but trouble; and sure, I'll grant that I'm suggesting educating them the hard way, but I think the Web as a whole would be better off in the long run from smarter users and dumber clients. Heck, I think the whole world would be better off with smarter users and dumber clients/terminals/systems.
Also, please, don't even begin to suggest that uneducated users should be directed to Gopher holes. You know as well as I do that if there's enough people going back to it, some yutz is gonna start trying to figure out how to add streaming this and scripted that to Gopher, and then Gopherspace will be ruined. And sure, on a technical level it would be a neat project to look at. But to reference Jurassic Park, a lot of very smart people have been so amazed at what they could do with the Web and the Internet as a whole, that they never really stopped to ask if they should do it.
I admire the extra touch here :)
The situation is a common one wrt SPAs, routing, and changing a tree based on history state. I'm sure other frameworks have run into it. My brief experience documenting the issue solidified the position that I will never do it again.
This reminds me of the research that went into finding issues in the media plugin models. Essentially, once the security community discovered that the Java, Flash, etc. plugins didn't follow the same rules as the browser at all times, it became a free bug-hunting exercise until the media plugin model just died.
I expect there are some "side channel" type ways to create high resolution timers in browsers which have removed built in support for them, for instance: WebAssembly? WebGL subroutines?
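For instance the counting-worker trick over a SharedArrayBuffer (a rough sketch; it only works where SharedArrayBuffer is exposed, and the file name is made up):

    // timer-worker.js: spin forever, incrementing a shared counter
    onmessage = function (e) {
      var ticks = new Uint32Array(e.data);
      while (true) Atomics.add(ticks, 0, 1);
    };

    // main page: read the counter as a makeshift high-resolution clock
    var sab = new SharedArrayBuffer(4);
    var ticks = new Uint32Array(sab);
    var worker = new Worker('timer-worker.js');
    worker.postMessage(sab);

    function now() {
      return Atomics.load(ticks, 0);   // resolution ~ one spin-loop iteration
    }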
Everyone who has to deal with n-th layer tech support regularly (where n > 2) knows that even there it's hit or miss. Sometimes you file a bug report and get a "thanks, fixed!" an hour later. Sometimes you spend an hour to gather all the data upfront only to be painstakingly taken through the exact same data gathering process step by step. By email. Over days. On a "4h response" SLA (and they always just barely make it, not considering the value of the "response").
Randall Munroe has the best description: https://www.xkcd.com/806/
That hurts, Jake :(
For example, the request may have the following header:

    Range: bytes=50-100

…which is requesting bytes 50-100 (inclusive) of the resource.