About 5 years ago, it looked to me like the end of Firefox. It was Chrome all the way: new features coming out one after another, faster rendering, safe process isolation for each tab. It looked better, too.
But I just switched back last month. It happened kind of randomly. I saw an announcement of a new release (33, I think), downloaded it, re-imported my bookmarks from Chrome, and have just kind of kept using it instead of Chrome since then.
I like how the tabs look, and I think it feels lighter and snappier on my (now old-ish) laptop.
Rust is also one of the most promising languages, with a potentially massive impact on the world. Not only safer software, but exposing more programmers to better ways of coding. (Imagine how different the world would be if a popular OS had adopted a Rust-like language two decades ago.)
Mozilla is really an important thing these days.
For the main theme, as long as it's customizable enough to satisfy the 5% of grumpy people, I'm fine with it. The other 95% just don't care.
(Disclaimer: as a Mozilla employee, that includes my salary :)
a) the "deal" with Google had never happened, and
b) Microsoft came out of nowhere and began offering to pay what Google is actually presently paying (or, let's say, just for fun, even double or triple that), all while
c) Google were making no offer to start paying
... then Google would still be the default search engine in Firefox, regardless.
The same almost definitely cannot be said today.
Is the default search position in Firefox for sale to the highest bidder?
If Bing today is the better search engine, or if it's slightly inferior to Google, or if your perspective is that when Bing debuted it was terrible and it's even worse today--these are all absolutely immaterial. They have no bearing on the answer to this question.
- which is the technically better search engine?
- which will pay the most money? (gotta keep lights on!)
- which is the one the users actually want?
Google used to check all 3. Then only 2 (Pretty sure Bing would have paid more).
Today? I don't know.
Sadly, engines like DuckDuckGo probably can't afford to pay Mozilla enough to keep the lights on.
Also, this is a rather facile point, but a person involved with Mozilla today who was involved X years ago differs from themselves by X years of experience. X years also produce X years' worth of changes in the externalities acting on a project. For example, what impact did the timbre of HN alone have on Mozilla in 2011? What impact was it having in 2008?
Those are some changes Mozilla has been through.
(Note that this is a list of changes Mozilla has gone through, not a "list of bad things about Mozilla". If I were trying to make that kind of list, there are things on this one that I wouldn't have included, and things that aren't on it that should be. But it's not that kind of list.)
I'd love to support Mozilla as much as possible, but no way I'm keeping the default Google-search when you have stuff like DDG around.
Hopefully my wanting privacy doesn't impact Mozilla's financials too hard. Firefox + DDG just seems like the most natural combination: both are powerful and privacy-centric.
The slogan on their page ("Committed to you, your privacy and an open Web") sounds hollow when considering this and their recent support for DRM.
That means they actually need donations for the Foundation to run (but it's not 800 employees, so it's much less money). Legally you can't fund the Foundation with the Corporation's money, since that would effectively make two Corporations and zero Foundations :)
I don't know if the opposite is possible (send donations from Foundation to the Corporation) but I suspect it has the same problem/effect.
The Foundation is the sole owner of the Corporation. The Corporation pays dividends to the Foundation.
Now, the amount of those dividends can't be too much if the Foundation wants to keep its nonprofit status. "Too much" is determined by how much people donate: the restriction is on the fraction of money that comes from non-donations.
I do use both StartPage and DDG, although it's more like an 80%/19% split between the two. The remaining 1% is still on Google, for things like news, and Self-Destructing Cookies helps keep things clean on that front.
There is a reason that Page and Brin are billionaires: their algorithm was absurdly clever.
For example, '!g hacker news' becomes 'https://encrypted.google.com/search?hl=en&q=hacker%20news'.
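As a rough sketch of what a "!g" bang rewrite like the one above involves (this is illustrative only, not DDG's actual implementation; the helper name and the spaces-only percent-encoding are my own simplifications):

```rust
// Hypothetical sketch of rewriting a DuckDuckGo-style "!g" bang query
// into a Google search URL. Real bang handling supports hundreds of
// bangs and full percent-encoding; this only handles "!g" and spaces.
fn expand_bang(query: &str) -> Option<String> {
    // Only handle the "!g " prefix in this sketch.
    let rest = query.strip_prefix("!g ")?;
    // Minimal percent-encoding: spaces only, enough for this example.
    let encoded = rest.replace(' ', "%20");
    Some(format!("https://encrypted.google.com/search?hl=en&q={}", encoded))
}

fn main() {
    let url = expand_bang("!g hacker news").unwrap();
    assert_eq!(url, "https://encrypted.google.com/search?hl=en&q=hacker%20news");
    println!("{}", url);
}
```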
"I find this better even just for the fact that I don't get the filtered down view of the world anymore - if there are things named similarly than the ones I use in the world, I want to know, so I can adjust my own naming."
It wasn't a loaded question or anything, not sure why I got down-voted either... (shrug)
Take a look at http://dontbubble.us
I've been doing that for a few years now. I only need to log in for GMail, basically. At once or twice a day max, it's not a big hassle at all.
In particular, I find that YouTube is much more useful if you're logged out. At least then the sidebar contains related or similar videos, instead of the stuff I watched last week.
However, how does Startpage make money?
In fact, how does DDG plan to make money?
But soon I'll be moving from iOS to Android, the wonderful land of being able to change your default browser. And I'll be using FF there for the bookmark syncing. Firefox on 2/3 devices ain't bad.
There was a lot of hate over the Australis redesign, but I like it better than Chrome.
(Edit: Doesn't look like it. Alas.)
Some of us do exist who have a Linux desktop / Android device as the only computers we can change.
Firefox, Chrome, Opera and others are not so picky about which operating system they run on.
As this is technically a very stupid move, it feels like the only way to interpret it is that Google wants more people giving up privacy, so that doing so becomes normal.
Also see: Chrome flips around OK/Cancel in the DNT enable dialog, purely to confuse and prevent users from enabling DNT.
As far as Chrome: go compare. Enable things like spell check, which put up a normal little explanation. Then try DNT, which elicits a scary disclaimer and swaps the OK and Cancel buttons.
Now, for the Surface I hear you - I was sad they discontinued the work on that. Even on touchscreen laptops.
Rust programs are not 'safe' in the sense that they cannot crash or have security bugs, but they are 'safer' in that the compiler automatically performs a series of safety checks to prevent common errors.
1: Widespread meaning OSes and popular platforms, not custom LOB apps where SQL injection exists on the login page.
It is effectively impossible to have an unsafe memory access error in Rust
Rust just has fewer of them, because it has a smart compiler. ...but it's not right to suggest that it has none.
There is no way of ensuring that a Rust program does not result in a segmentation fault, or another memory error or race condition, as a result of unsafe code.
There is no way of ensuring a Rust program does not contain any unsafe code.
There's no way to ensure the compiler is perfect, but Rust itself is not capable of certain types of errors.
You can always code the wrong algorithm, but that doesn't mean you can segfault or use-after-free (and in the default safe mode you almost never have a reason to leave it).
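A minimal sketch of that point: in safe Rust, an out-of-bounds access is caught at runtime and panics deterministically (or is surfaced as an `Option` via `get`), instead of silently reading arbitrary memory as it could in C.

```rust
// Safe Rust turns would-be memory errors into Options or panics.
use std::panic;

fn main() {
    let v = vec![1, 2, 3];

    // Bounds-checked access via get() turns the error into an Option.
    assert_eq!(v.get(10), None);

    // Direct indexing out of bounds panics; catch_unwind shows the
    // failure is an orderly panic, not memory corruption.
    let result = panic::catch_unwind(|| v[10]);
    assert!(result.is_err());
    println!("out-of-bounds access panicked, as expected");
}
```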
You are flat-out wrong, and spreading misinformation about it doesn't help anyone.
If you use any Rust, and that includes dependencies and the standard library with unsafe code, bugs in the unsafe code can and do cause segmentation faults.
It's easy to say, 'well, that's a bug in the library, not a problem with Rust', but that's the same as with C++, isn't it? If you can assert any code is 100% bug-free, then why do we care about the nice safety features in Rust?
What is true is that any 'safe' code path that never enters an unsafe block in Rust provably cannot result in certain types of failures.
BUT every Rust program uses unsafe code. In the standard library. In C bindings. In 'safe' pure-Rust dependencies (with hidden unsafe blocks). In loading dynamic libraries.
It's completely unavoidable.
What are you going to do? Vet every line of every part of every dependency in the code you use? Don't be ridiculous.
Do you use Rust?
..because practically speaking, it does crash. Not often, sure. ...but this fallacy that Rust is 'provably safe' is absolutely false. It's provably false.
That's why it's unfortunate when people say it; it makes the Rust community look like a bunch of clueless fanboys.
Please stick to reality. Rust has a zero-cost memory management strategy and a smart compiler that helps prevent certain types of common errors.
We don't need to step into magical fairy land to convince people Rust is good. It stands on its own merits easily enough.
In other words I'm talking about code the programmer makes themselves.
>It's easy to say, 'well, that's a bug in the library, not a problem with Rust', but that's the same as with C++, isn't it?
In C++ nothing prevents the lines I write from having memory errors. It's not the same at all.
>If you can assert any code is 100% bug-free, then why do we care about the nice safety features in Rust?
Oh, well, you shouldn't do that, but you also don't have to use unsafe code willy-nilly. In C++, everything you touch is unsafe unless proven otherwise.
>What are you going to do? Vet every line of every part of every dependency in the code you use?
For unsafe blocks? Sure, that's easy.
That's the point.
You cannot assert that your Rust program cannot crash.
Even if your code is perfect, there may be bugs in either the standard library or some dependency you use that cause a crash.
I'm not saying everyone uses unsafe code (or should) in their own code. Far from it.
I'm saying that every Rust program invokes unsafe code at some point.
So this myth of 'pure Rust' that is 'completely safe' is just that. A myth.
I don't understand why this is a difficult idea for people to accept. Just use the good bits of Rust. Rust doesn't need to be 100% safe; it's not, and that's completely OK.
>I don't understand why this is a difficult idea for people to accept. Just use the good bits of Rust. Rust doesn't need to be 100% safe; it's not, and that's completely OK.
It's okay for now, but I'm eagerly waiting for a version where the important pieces of unsafe code can be formally verified. Give me safe unwinding, memory allocation, and sockets, and I can cover half the world.
As far as I know this isn't an especially difficult request. I could probably cobble together something right now by borrowing bits of verified C and gluing them to library-limited rust.
I've been writing Rust for well over a year. I like to abuse new features and I've found many compiler bugs, but my code doesn't crash at runtime.
I'm not saying it doesn't happen, and the plural of anecdote is not data, but I think you're grossly misrepresenting Rust's practical safety benefits. That you only have to trust code in unsafe blocks, rather than all the code everywhere, is a huge benefit.
maybe once a fortnight for me?
I'm certainly not trying to bash rust, and I do apologise if it comes across that way.
I just think a bit of realism makes everything look much more sincere and plausible.
As you say, 'Rust cannot crash' is false. 'Rust has never crashed for me' could well be a completely true thing to say. So could 'It's so much easier to write Rust code (than, say, C) that doesn't crash!'
I'm completely OK with all of those.
..but 'you can do anything in Rust and it's always perfectly safe!' or 'Rust programs don't have to worry about security issues' or 'It is effectively impossible to have an unsafe memory access error in Rust'?
Those are people being enthusiastic (good) but unfortunately spreading misinformation (bad) and making the Rust community look bad (very bad).
I just wish people could be excited about the things that are actually exciting about Rust. I feel like this whole safety thing is a massive distraction.
Fast, low-level, concurrent, with managed memory at no cost: that is both accurate and exciting about Rust.
'Helps avoid bugs and race conditions' isn't very exciting to me, but I acknowledge it's important.
I guess 'completely provably safe!' is exciting to some people; but since it's not true, I'd prefer not to get people excited about Rust that way.
The graphical bindings you're using are not part of the standard library, which is why I specifically asked about that. I know there are bugs in third-party dependencies in Rust, because there are many C bindings that aren't exposed safely by those libraries. I've segfaulted using a TrueType binding library, for example, because it was not actually exposed in a way that prevented double frees. But writing a bad binding is something you can do just as easily in Ruby, or Java. The standard library is what we were originally talking about. I wouldn't disbelieve you if you said you crashed every fortnight using only standard library code, but I would probably press for details.
I am not saying Rust is "completely provably safe", but nothing is. You always have some trusted software or hardware that, if it screws up, will compromise your program. Rust's advantage is that it allows you to be explicit about what parts are trusted and what parts aren't. It vastly reduces the potential attack surface.
That's why effectively impossible is an OK statement. Unless you go out of your way, your program will not contain such bugs.
Rust has many unsafe code paths other than FFI: low-level optimisations, dynamic libraries, etc.
Unless you go out of your way or are doing low-level work, your code will not contain such bugs, and if you use no dependencies that do anything meaningful, what you said is plausibly true.
...but what are we trying to argue here?
That you can build a contrived Rust program that doesn't crash?
Or that if you build an arbitrary program in Rust, using arbitrary dependencies to do meaningful work (which will invoke a C library at some point, and talk to device drivers), it won't crash?
In my view, 'effectively impossible' is faaaaaar overstepping the bounds of reality.
Improperly implemented `unsafe` blocks can cause crashes. APIs that don't properly isolate unsafe interfaces can cause crashes. Bugs in the compiler, bugs in LLVM, and unforeseen unsoundness in the type system can cause crashes. So instead of saying "Rust makes crashes impossible", I'm starting to prefer "If you write only safe code, any crashes that occur are not your fault". A bit less comforting, but still a best-in-class guarantee for a bare-metal language (not to mention that the former claim is impossible in any language).
Furthermore, I think it's important to express to people the true role of `unsafe` blocks, which are not so much "Rust without safety" as they are "reified inline C code with a bit more safety". Rust without `unsafe` blocks could exist, but it would require an enormous amount of FFI and/or much more machinery baked into the compiler itself.
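That "reified inline C with a bit more safety" role can be sketched in a few lines: the unsafe operation is confined to one block behind a function whose public contract keeps callers in safe code. The function and its invariant here are my own toy example, not anything from the standard library.

```rust
// A safe wrapper around an unsafe block: callers never see `unsafe`,
// and the soundness argument is localized to one SAFETY comment.
fn first_or_default(v: &[i32]) -> i32 {
    if v.is_empty() {
        return 0;
    }
    // SAFETY: we just checked that the slice is non-empty, so index 0
    // is in bounds and the unchecked access cannot read out of bounds.
    unsafe { *v.get_unchecked(0) }
}

fn main() {
    assert_eq!(first_or_default(&[7, 8, 9]), 7);
    assert_eq!(first_or_default(&[]), 0);
    println!("safe wrapper over an unsafe block works");
}
```

This is the pattern the whole standard library follows: the amount of code you must audit for memory safety is the unsafe block plus its invariant, not every caller.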
Sure there is. There's a compiler lint available which can disallow "unsafe" code, i.e. code which is able to create null pointer errors (and thus segfaults).
Of course, the standard library will always contain a bunch of unsafe stuff - but at least it's shared between every project, and any bugs can be fixed once, and assuming it's correct any non-unsafe code that depends on it is memory-safe.
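The lint mentioned above exists as a crate-level attribute, `#![forbid(unsafe_code)]`: with it, the compiler rejects any `unsafe` block in your own crate (dependencies and the standard library are unaffected, per the caveat above). A trivial sketch:

```rust
// With this attribute, adding an `unsafe { ... }` block anywhere in
// this crate becomes a hard compile error, not just a warning.
#![forbid(unsafe_code)]

fn main() {
    // Ordinary safe code still compiles and runs as usual.
    let total: i32 = (1..=4).sum();
    assert_eq!(total, 10);
    println!("compiled with #![forbid(unsafe_code)]");
}
```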
I see it as a reaction to the hubris of Google claiming not to be evil, given that evil is such a poorly defined concept that whether Google meets the bar depends entirely on the values of the perceiver. Many perceivers, for instance, miss the distinction between "Don't be evil." and "Don't do evil.". The former is a mindset, strategy, and intention, while the latter is impossible for a corporation with 50k employees.
Then, the only logical conclusion for someone who hasn't grasped all of the above is that Google is becoming more and more evil every day.
What's the mindset behind the broken permissions on Android? Where any app that wants to change behavior when you get a call must request permission to your IMEI and calling/called number? Or why the broken, upfront, all or nothing model is still even used?
What's the mindset behind G+'s incessant nagging, and forcing it as a requirement to even rate apps on Play? Or the same for YouTube, etc.? Not to mention the "real names" debacle.
At what point are we allowed to say Google's mindset is not "don't be evil" as far as external observers are concerned? Or will everyone that brings this up always be labeled as unable to understand?
Chrome dev here. The way the Chrome settings web UI is written does not lend itself to strong consistency, just eventual consistency, as devs notice issues and fix them. Your DNT example was fixed last week in
Another example of inconsistent button ordering: the overlay for disconnecting a managed profile has its buttons reversed from the usual order, while the overlay for disconnecting an unmanaged profile does not.
I guess that just leaves the fundamental incompatibility with Google's current business model and personal privacy.
Hanlon's Razor is a good rule to apply here.
"Where any app that wants to change behavior when you get a call must request permission to your IMEI and calling/called number? Or why the broken, upfront, all or nothing model is still even used?"
At the time Android created its permissions model, most of these issues were not obvious, or it would have been done differently.
Remember, of course, that prior to things like Android (the first version of the iPhone only had webapps), permission models of any sort were pretty much unheard of. Flip phones running Java apps, or BlackBerrys, had apps that got to do whatever they wanted.
Permissions changes are being made, slowly, in Android.
The same way you'd slowly change most serious things about something with billions of users.
It's not like C++ or Java just release new features every day (even if we may want them to :P).
This is of course, the same as any large system in engineering.
I don't know enough to comment on the rest.
"At what point are we allowed to say Google's mindset is not "don't be evil" as far as external observers are concerned? Or will everyone that brings this up always be labeled as unable to understand?"
It doesn't matter. At some point, every company large enough will lose its sheen, and people will worry about it, and eventually question its motives.
Nobody can be perfect at doing the right thing all the time, even if they wanted to. Eventually, even with the best of intentions, mistakes add up, and people stop believing.
In fact, I'd wager it happens slower if you don't even try to have good intentions and just stay out of the limelight, rather than trying and occasionally messing up.
In any case, I guarantee the same will happen to Mozilla (or whoever we want to peg as the current defender of the world) over time, the same as it has happened in the past to every other company. Non-profitness won't save them.
In fact, if your app isn't signed by Nokia, you can't let an app make a request without nagging you for permission. This totally kills homebrew.
MIDP 2.0 had permission domains. In practice, the permission domains were basically "want this app to let you do anything on your phone Y/N?" for a lot of phones.
In the specific case of S40, Nokia's security policy came into play in 6th edition feature pack 1, or so Nokia claims.
For fun, look at the deviations different carriers (and editions) have at http://developer.nokia.com/community/wiki/Java_Security_Doma...
"Trusted 3rd party domain" is everyone who gave heaps of money to Verisign. They get no permissions by default, but they can request, for example, network access and the user can then grant it once, per-session or always.
"Untrusted 3rd party domain" is the rest of us, and basically any app I ever installed, in which case the user is prevented from selecting "always allow" for network access and is prompted once per session, which was highly annoying.
So if anything, it was too secure! Sun sank their own standard by requiring expensive certificates for normal functionality. If they had used self-signed certificates the way Android does (checking on upgrade that it's the same certificate), it would have been great.
"Operator protection domain" and "Manufacturer protection domain" mighty work differently, but that's no different from the stuff that comes pre-installed on Android phones having access to everything without asking.
Android permissions aren't great. But when Android was being designed (before Google bought it), its permissions were a huge step forward from desktop apps, which can still do anything at all.
Arguably the very concept of upfront permissions is inferior to asking when needed, but attributing malice to the choice is silly.
It'd also be really hard (or even impossible) to change without breaking all the apps out there.
G+ is annoying, indeed. They were aping Facebook with the real names thing and they should've known better.
It doesn't matter. At some point, every company large enough will lose its sheen, and people will worry about it, and eventually question its motives.
Nobody can be perfect at doing the right thing all the time, even if they wanted to. Eventually, even with the best of intentions, mistakes add up, and people stop believing.
I guarantee the same will happen to Mozilla (or whoever we want to peg as the current defender of the world) over time.
None of these things say "evil" to me even remotely.
No, we perceive the distinction just fine. Some of us just believe Google does "evil" with mindset, strategy, and intention...
What possible motivation could they have for being "evil", comic book style?
Just greedy capitalist style, and "patriotic" pro-US-interests style.
What possible motivation does anyone have for doing evil?
No, I don't. the NSA isn't magic or the Illuminati.
It's quite common that intelligence agencies use front companies to do their work. Look it up.
Yeah, makes perfect sense.
See my other comments, but things like swapping OK/Cancel when a Chrome user attempts to enable DNT fit right into "evil". Really, how does one excuse adding a dark pattern like that? Even though DNT is bad, Google shouldn't resort to trickery to juke the stats.
Edit: A Chrome dev in this thread says this was fixed and just an artifact of how the UI code is in Chrome. Well that makes me look foolish.
I'm not a huge fan of the evil terminology, as it sounds melodramatic and it's easy to dismiss anyone saying "evil" in such contexts. I use it because Google chose that word.
On a less objective point, every time I use a Google service, I feel icky. It's bad enough that I'm working on the one missing app I want so I can switch to Windows Phone. (MS has problems, but between their internal legal oversight and inability to execute, I don't feel icky with them. Apple is also a choice, but I just really hate the UI and poor development tools.)
One of the co-founders is still the CEO. He is one of the world's wealthiest men. He doesn't need the money and I doubt he feels the need to put shareholders first.
Why would he let Google stray from an ideal he ascribed to previously? Has he been corrupted by having more money than he could possibly spend?
Google is a for-profit company that makes money selling your data and targeted, personal ads.
Mozilla is not-for-profit and just wants to make the web better.
1. disrupt IE/MS - i can say for certain they originally backed Firefox for this effort and only decided to split away to build chrome in the first place because they felt starting fresh they could build a better core and i believe they did.
2. enabling more people to build on the web, enables more of their ads to be shown. Firefox achieves this just as well as chrome. IE was dominating not too long ago and you could argue much of google's ad revenue growth can be attributed to more people having access to a higher quality web.
That said... I don't see why you should believe Google would need or desire to sell the data it might collect from Chrome. More likely they see it as a means of ensuring web dominance by making sure the web is never locked down by one mega-corp. It's similar in a way to what they have done in the mobile space: Android is more a technology to disrupt Apple and ensure it can't be dominant, but really, does Google have any control over Android?
https://support.google.com/chrome/answer/1181035 contains info on how the encryption works.
chrome://terms/ links to https://www.google.com/intl/en/chrome/browser/privacy/ which links to http://www.google.com/policies/privacy/ to define "how we use information we collect." From that page: "We use the information we collect from all of our services to provide, maintain, protect and improve them, to develop new ones, and to protect Google and our users. We also use this information to offer you tailored content – like giving you more relevant search results and ads."
Your full browsing history is a treasure trove of information useful for making Google's core services (search and ads) more effective. They would be stupid not to use it to improve the quality of their services. I challenge your assertion that Chrome is an altruistic endeavor.
"Get predictions in the address bar" https://support.google.com/chrome/answer/95656?hl=en
"Logging policies for omnibox predictions" https://support.google.com/chrome/answer/180655?hl=en
Even Firefox's Mobile Awesomebar doesn't do that unless you click that "Yes" button.
On the other hand, Google's Chrome browser is clear about the fact that it does send everything in the omnibar:
> When you type URLs or queries in the Chrome address bar (omnibox) or App Launcher search box, the letters you type may be sent to your default search engine so that the search engine’s prediction feature can automatically recommend terms or URLs you may be looking for.
If you still believe you are right, I would be interested in seeing your sources.
Obviously, I can't prove that it doesn't send search suggestions by giving you a link to the code, since it isn't there. If you want to make sure for yourself, I advise using Wireshark.
It could be said Google already have your browsing history (of sites that they serve adverts on, or that use their analytics). I doubt Chrome's syncing data would give them any more information than what they have already.
and it's quite easy to block Google from tracking your browsing habits using GA or Ads: just use an Adblocker and something like Ghostery, RequestPolicy or Disconnect to block Google Analytics.
2) It's not an accident that Google's webservices work best (sometimes only) in Chrome.
They're way past the "disrupt IE" goal. They're into the "tightly couple our web service and Chrome and try to force out other options" goal.
Theoretically, Google could sell that space. They trade this potential profit to promote their own product, at a loss of however much that space could be sold for. This is like Economics 202: opportunity cost.
I can't find a source for the figure right now, unfortunately. And the whole thing is a guess, since Google doesn't release this information. All it releases is overall sales/marketing spending, which in 2012 was about $6 billion if I understand right (see <http://www.quora.com/How-much-does-Google-spend-on-advertisi...). That includes salaries for the marketing folks, etc, not just direct spending on campaigns.
As I recall, the $1b estimate broke down something like 30% actual spend (primetime TV ads, ads all over the London Tube, etc, etc) and 70% in-kind placement (i.e. "every search you do on Google with another browser shoves an ad for Chrome in your face"). I'll see if I can hunt down where I saw that...
> That's 3 times Mozilla's entire budget
As for Google Search ads, let's take the number of search requests, an example CPC they give, an example CTR they give, and the StatCounter portion of non-Chrome users; we get 1,216,373,500,000 * (1 - 0.3) * $0.10 * 0.005.
That's $425,730,725 for a one-year campaign in 2012. Given the prominence of this ad (and its unintentional scare value), the CTR is probably off, so that's a very conservative figure.
If that is indeed 70% of the whole campaign cost, the total is $608,186,750 per year.
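The arithmetic above can be re-run as a quick sanity check. All inputs (search volume, 30% Chrome share, $0.10 CPC, 0.5% CTR, 70% in-kind share) come straight from the comment; none of them are verified figures.

```rust
// Back-of-the-envelope value of in-kind Chrome placement on Google
// search result pages, using the comment's own (unverified) inputs.
fn in_kind_value(searches: f64, chrome_share: f64, cpc: f64, ctr: f64) -> f64 {
    // Impressions to non-Chrome users, valued at CPC * CTR per search.
    searches * (1.0 - chrome_share) * cpc * ctr
}

fn main() {
    let in_kind = in_kind_value(1_216_373_500_000.0, 0.3, 0.10, 0.005);
    assert_eq!(in_kind.round() as i64, 425_730_725);

    // If that in-kind placement is 70% of the whole campaign:
    let total = in_kind / 0.7;
    assert_eq!(total.round() as i64, 608_186_750);

    println!("in-kind: ${:.0}/year, total: ${:.0}/year", in_kind, total);
}
```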
A web service that only works in Chrome? Maybe you mean web application (web service would be really odd to work in just one browser). Do you have a source for this regardless? I hadn't heard of this.
As for concrete examples, Hangouts only works in non-Chrome browsers (including ones with WebRTC support) if you install a Google-provided binary blob. Which you may not be able to do.
Gmail only supports offline access in Chrome (see https://support.google.com/mail/answer/6557?hl=en the "two exceptions" bit). Whether not having offline access to your mail counts as mail "not working" is up to you, I guess; for me it counts as "not working".
Various Google properties use UA sniffing to deliver degraded content to non-Chrome browsers. https://bugzilla.mozilla.org/show_bug.cgi?id=921532#c9 is an example.
https://bugzilla.mozilla.org/show_bug.cgi?id=973754 is an example where as far as I can tell they built the feature around non-standard Chrome-only functionality even though Firefox supports the standard version.
Google news menus don't work in standards-compliant browsers because they rely on a Chrome/WebKit bug. See https://bugzilla.mozilla.org/show_bug.cgi?id=1083932
Google patent search uses UA sniffing and locks out various browsers as a result. See https://bugzilla.mozilla.org/show_bug.cgi?id=1013702
Google Translate will fail to work in Firefox unless you have Flash installed (good luck on Mobile).... or spoof the Chrome UA string. See https://bugzilla.mozilla.org/show_bug.cgi?id=976013
They do fix these bugs sometimes (the UA sniffing ones, where they just got the sniffing flat out wrong, tend to get fixed once someone diagnoses them). And sometimes not.
The Google Hangouts website uses some carefully-constructed language to imply that people must download Chrome to use Hangouts, even though a Hangouts NPAPI plugin supposedly exists:
The Hangouts Chrome extension won't work in your current browser. You'll need to
download Chrome before installing the Hangouts Chrome extension. Do you want to
download Chrome now?
Then you ask yourself why the goals were set the way they were. Obvious guess at an answer: because they only want to target "mobile" and Android+iOS cover most "mobile" clients. Had iOS had less market share, I will bet the goal would have been Android-only (modulo advice by lawyers based on antitrust worries in that situation, of course).
No malice anywhere along here, but the end result is not so distinguishable from malice, sadly.
That's the other easy explanation as to "why iOS?"
Note, not picking on you, just Internet in general.
The old saying was "Windows is not done until Lotus won't run", and the DR-DOS case shows it might not have been an exaggeration.
Google/Chrome also works hard to support "what's already out there", which is what you mention, but what we're discussing now is building on your supposedly-open-but-really-proprietary platform _against_ other players, whether that's policy or coincidental with constraints.
Microsoft spent effort making sure Windows itself WON'T run on DR DOS before that. They constantly ignored web standards after they won the browser wars (conveniently working well with Microsoft tools that produced non-standard markup, though) ... up until the moment they lost them again, at which point they started to pay attention to standards again.
With their last few releases, Google seems to be adopting this Microsoft style of evil. Some people defend that, and some don't.
That said, I agree that more FxOS bits need to end up on standards tracks. The permissions issue really needs solving to make serious progress there.
If there was a web standard way of caching 5GB of files locally then I would be annoyed.
I wouldn't let Google off the hook so easily.
More like, they want to ensure that if/when it is, they are that one mega corp: https://i.imgur.com/AIxYzl9.jpg
If you ask me, the only reason they haven't moved even faster in this direction is because they're afraid of triggering the same legal action that Microsoft did back in the 90s with IE, but that doesn't mean that they wouldn't love to have that form of dominance - it can only help them.
But man, Chrome sure got good fast.
Besides, Chrome didn't just introduce V8, it also introduced a cleaner UI, sandboxed tabs, and eventually, a much better set of DevTools than any other browser.
To deny that competition from Chrome put pressure on other vendors is, I think, to willfully discount its contributions for political, not technical, reasons.
Chrome obviously put pressure on other vendors in various areas, including performance. What just isn't true is that without Chrome there would have been no JS performance competition. Whether the competition would have been as intense as it ended up being is a debatable counterfactual; I believe it would have been.
One other historical note, since you brought it up: Chrome was first announced publicly Sept 3, 2008. The first public beta of IE 8, with tabs in separate processes, was released on March 5, 2008. These processes ran in a low-privilege sandbox, as in fact did the entire browser starting with IE 7 (released October 2006), on Windows Vista or newer. Chrome did provide the first browser sandbox on Windows XP and non-Windows platforms, which was a big step up, of course. Again, there is a difference here between "introduced a new ground-breaking concept" and "incrementally improved on what was already going on".
Oh, and V8 was clearly considered a "threat" by other browser makers way before Chrome had any market share to speak of, so I'm not sure what your remark about Safari is supposed to mean.
The problem with this discussion is inherent in any counterfactual history, as you point out, and the common issue of what appears to be revolutionary to people outside a domain vs merely implementing things people have discussed for years, as the same "revolution" appears to people inside that domain.
The reality is that circa-2008 JS engines were fairly rudimentary, including (or especially) V8, and your average JIT compiler writer at the time would have been less impressed and more compelled to question why this "revolution" hadn't happened years earlier (and several did ask exactly that).
All that said, "Chrome put pressure on other vendors" has been extremely important to the evolution of JS performance, especially with Crankshaft -- I think it's extremely likely that without Crankshaft, the shipping Firefox would still have a tracing JIT today and would be just starting to move past it. But Chrome in that slot vs the other major browsers is really not that meaningful a distinction either: without SpiderMonkey existing, it's likely that V8's major strides forward would have stopped with Crankshaft in 2010. Performance would have continued to improve either way (especially with JSC making big improvements), but I don't think we'd have seen major architectural changes as often as we have without that pressure. Competition is great.
That it's a performant browser is a side-effect.
My argument is that nobody would have put that data online in a slow and sucky web. If you think Google wants full control over the audience, surely they also want a large audience.
(Note that I do not think that Google is evil. But if they were, I don't think they would need chrome to get most of that information anyways, given all the other ways they are collecting data. Sure, there are some corner cases they would miss but I don't think the incremental coverage would be worth the effort.)
I really like Google, but I want to support open source(and am not quite willing to put up with Chromium). But for me, core usability is still king, and nothing really touches Chrome for that.
I like being able to hit "a", see Amazon.com highlighted, hit Tab, and have that be searching Amazon.
Or to do the same thing with GMail, Wikipedia, Youtube. It's the core functionality I like most with Chrome.
Firefox has something similar: "keyword search" bookmarks. I define my own search shortcuts, such as "am" for Amazon, "imdb" for IMDb, "nf" for Netflix, and "w" for Wikipedia. Typing "am Harry Potter" (or the imdb, nf, or w equivalents) uses the respective website's own search form to find "Harry Potter".
For example: I have a bookmark for YouTube with the URL https://www.youtube.com/results?search_query=%s&oq=&gs_l= associated with the keyword "yt", so I can type "yt <search term>" and it loads the YouTube results (the %s is replaced with whatever follows the keyword). I've also added one for the Mozilla Developer Network (mdn <search term>) and the Python documentation (py2 <term> or py3 <term>), and I add others as I use them more.
DDG bangs work with all the large websites and search engines (!g for Google, !w for Wikipedia), and with smaller, specialized ones as well (I can look up word definitions in the Trésor de la Langue Française informatisé with !tlfi, and a particular protein in the Protein Data Bank with !pdb).
If you use the official extension, you also get search suggestions in the search box.
To... other advertisers? Google is the advertiser. It not only goes against their Terms of Service to sell that data, but also makes zero business sense.
I swear people just make things up when it suits their world views.
Would you even have considered that if that one provider were anyone but Google? No? If not, why do you give Google a free pass?
Mozilla does not respect the rules of my native OS. For over five years, they insisted that double-clicking the upper-left corner of the window should not close the window like every other app on the system. Nope. They insisted on doing it their way ignoring the complaints of thousands of users.
Firefox also makes you press Shift to use access keys. Totally non-standard amongst browsers and OSes and annoying for devs and users.
Is Firefox the only browser left that hasn't switched to multiple processes?
No, Firefox isn't better, and I don't trust Mozilla any more than Google.
I just spent an hour trying to figure out why Firefox freezes on my partner's computer. Yay, a 780MB SQLite WAL. Who the fuck knows what that's doing. Why on earth does using a browser require vacuuming SQLite files? Dunno, because there's no good reason.
If she finally switches to Chrome, I'll stop hearing complaints that the internet is slow.
Most things are retained, even cookies. It does remove some things, often things that can cause problems: badly out-of-date, unmaintained extensions, DOM storage, download history, plugin settings:
I'm all for having clever damage control mechanisms, but having less damage in the first place seems to be the winning strategy.
It also makes everything about the interactive experience worse, or at least it did last week. `perf top` shows a pretty damning story around locks, too. So we're probably not going to see it until 2015.
FWIW After re-reading my post I believe your mistaken impression was due to lack of clarity on my part, not laziness on your part.
Octane (Google's benchmark), SunSpider (Apple/WebKit's benchmark), and Kraken (Mozilla's benchmark).
You can see JSC performing better than chrome on the asm.js benchmarks: http://arewefastyet.com/#machine=12&view=breakdown&suite=asm...
But are asm.js benchmarks interesting? They are not representative of the vast majority of real-world JS, so wouldn't an asm.js-laden benchmark suite really be a case of optimizing for your own set of benchmarks, tuned to your own idiomatic JS?
But anyway, congrats on the achievement. I like the fact that V8, JSC, and FF performance are converging. If the performance differential is too great, it creates additional headaches for the developer targeting a certain level of efficiency.
But asm.js execution is very different from JS execution, even in browsers that don't have specialized asm.js paths. Executing regular JS is all about balancing compile time and garbage collection with code execution. asm.js barely uses GC, and allows lots of opportunities to cache compilation in ways that would be invalid for regular JS. So the whole space of tradeoffs is different.
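To make that tradeoff concrete, here is a minimal sketch of what an asm.js-style module looks like (names and values are illustrative, not from any real codebase). The "use asm" prologue and the explicit type coercions (`+x` means "x is a double") are what let an engine validate and compile the whole module ahead of time, with no GC-managed objects inside it; an engine without an asm.js path just runs it as ordinary JavaScript.

```javascript
// Minimal asm.js-style module (illustrative). stdlib/foreign/heap is the
// standard asm.js module signature; the heap is a raw ArrayBuffer, so the
// code inside never allocates garbage-collected objects.
function AsmModule(stdlib, foreign, heap) {
  "use asm";
  var sqrt = stdlib.Math.sqrt;

  function hypot(x, y) {
    x = +x;  // parameter type annotation: double
    y = +y;  // parameter type annotation: double
    return +sqrt(x * x + y * y);
  }

  return { hypot: hypot };
}

// Linking: pass in the global object and a 64KB heap (asm.js requires a
// power-of-two heap size). Works as plain JS in any engine.
var mod = AsmModule(globalThis, {}, new ArrayBuffer(0x10000));
console.log(mod.hypot(3, 4)); // 5
```

Because every value's type is pinned down by these coercions, the compiled code can be cached and reused in ways that would be invalid for regular, dynamically-typed JS.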
Over the course of a day, the browser becomes unresponsive and CPU usage idles at 10-15%. Restarting with the same tabs brings it down to 0%. Yes, I know, disable addons, blah blah...doesn't work for me. Same problem.
I'm really looking forward to the new threading model coming up. I have a feeling that once each tab has a thread, things like this will be much more self-repairing. It's not always easy to kill a rogue execution path in an event loop, but killing a thread is pretty straightforward =].
Also, congrats to the Firefox team for really taking performance seriously.
It resets your profile while preserving history, cookies, bookmarks, etc.
> I'm really looking forward to the new threading model coming up. I have a feeling that once each tab has a thread, things like this will be much more self-repairing. It's not always easy to kill a rogue execution path in an event loop, but killing a thread is pretty straightforward =].
First, it's "process", not "thread" :)
The plan is to start with just two main processes -- one for chrome (browser UI, mostly) and one for web content. So no processes will be killed in normal operation. This is because additional processes incur certain extra costs, particularly when it comes to memory consumption.
Still, it might help with your problem; it's hard to say for sure.
Do you have any plug-ins or add-ons installed that might be the cause of this problem? If you do I suggest you disable all plug-ins and then slowly re-enable them to figure out if any one of those is the culprit.
Kudos to the developers.
And umm, not sure what I'm doing differently but I usually keep Firefox open for weeks at a time and have no issues. Though I cut back a bit on keeping tabs open, I used to keep like 60-80 tabs open all the time but now I usually clean up when I'm done with something (more to reduce cognitive overhead than resource utilization).
Some people prefer to use bookmarks for that sort of workflow (as they are intended)
Really the only time I'll have more than 5-6 tabs open is when I'm trying to read through an API documentation and I want to be able to reference multiple points quickly.
120-200 in each Chrome, FF, Safari.
How do you find the tab you want?
What happens when you're on another computer?
Laptop goes everywhere with me. 99.5% of the time I'm not. But that other computer also has fuckton of tabs open. lol
What happens if you accidentally close Chrome, do you open all 200 tabs again?
Yup. Various plugins make it less painful but it does hurt. I make sure not to accidentally close the browser.
What do you use 18 tabs of hackernews for?
I often have the front page in one and my threads in another.. then at least a few comments pages in others.. but never more than 5-6.
It rocks. And I have 1200 tabs open at the moment on Linux64 Nightly (~120 actually loaded). Yes, I'm a tab hoarder. Bookmarks are useful, but don't give me the easy "I want to come back to this" aspect - once I close a tab, I'm done with it; if I bookmark it, it's there forever (and the number of bookmarks eventually became unmanageable).
Every so often I do a pass and close out 50 or 100 tabs I no longer care about. Usually I sit at 900-1000; it's time to cull.
Right now you can search the history, but it searches only the title and basic metadata, and as we know well from HN, titles are often wholly unrelated to the content.
So many times I've found a page with interesting information, and I can easily remember snippets of the page, but without walking through my history manually for hours, I'm left trying to recall unique phrases and searching the whole world of information on Google, winnowing through a lot of chaff. It would be so great if a browser (or an evil cloud-synced variant) generated a search corpus of every page you visit -- understanding the overhead and costs, made viable by many cores and massive IO performance -- allowing you to ask, "I saw a site about banking regulation and overcommitments in the past week... where was it?"
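The core of that idea is just an inverted index over page text instead of titles. A toy sketch (class and field names are made up for illustration; a real browser would store this in something like SQLite's full-text search rather than in memory):

```javascript
// Toy full-text index over visited pages: word -> set of URLs.
// A search matches page *content*, even when the title never mentions it.
class HistoryIndex {
  constructor() {
    this.index = new Map();   // word -> Set of URLs containing it
    this.titles = new Map();  // url -> page title
  }

  // Index the visible text of a page at visit time.
  addPage(url, title, text) {
    this.titles.set(url, title);
    for (const word of text.toLowerCase().match(/[a-z0-9]+/g) || []) {
      if (!this.index.has(word)) this.index.set(word, new Set());
      this.index.get(word).add(url);
    }
  }

  // Return pages containing every word in the query (AND semantics).
  search(query) {
    const words = query.toLowerCase().match(/[a-z0-9]+/g) || [];
    let hits = null;
    for (const word of words) {
      const urls = this.index.get(word) || new Set();
      hits = hits === null
        ? new Set(urls)
        : new Set([...hits].filter(u => urls.has(u)));
    }
    return [...(hits || [])].map(u => ({ url: u, title: this.titles.get(u) }));
  }
}

const idx = new HistoryIndex();
idx.addPage("https://example.com/reg", "Some clickbait title",
            "a long article about banking regulation and overcommitments");
console.log(idx.search("banking regulation"));
// finds the page by its content, even though the title never mentions it
```

The hard parts a real implementation would face are exactly the overheads mentioned above: index size, write amplification on every page load, and keeping the corpus out of sync/cloud storage unless the user opts in.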