I was once involved in an IE6 -> IE8 upgrade for a Fortune 100 corporation. It took nine months to analyze all the possible impacts and implement mitigations before the first internal production release. And that was fast, on account of teams being forced to participate by C-officer mandate.
By the time the deployment was finished, which required several more months, IE10 was mainstream in the Real World.
That has never been true.
The hardware and software changes. The infrastructure it connects to changes. But most importantly, your workflows must evolve if you are to remain efficient.
If you roll your own software, you either commit to constantly updating it, or you commit to throwing a ton of money down the drain when it rots away to the point where it is no longer fit for your workflow or for the world it operates in.
Software is never finished, only abandoned.
Case in point: many of these same companies that refuse to upgrade systems for a decade or more are still running Windows XP and Server 2003. Due to the negligence of their IT leadership, they are now exposed to all manner of security vulnerabilities that won't be patched, and have put both their business and their customers' private data at risk.
This is no different than a state or local government refusing to maintain critical infrastructure like bridges and tunnels. Negligence might be viewed as conservatism for a few years, but when the bridge collapses because routine maintenance wasn't performed, it becomes clear that the administration who neglected maintenance was at fault.
The sooner we kill the cancerous idea that IT software can be frozen in some golden state, perpetually providing business value with zero maintenance required, the better.
Modern, thoughtful IT leadership realizes the value of a continuously updated, secure browser. Chrome is already winning the browser war in the Enterprise for this reason alone. This just adds needed controls like white-listed extensions that are known to be safe.
> Case in point: many of these same companies that refuse to upgrade systems for a decade or more are still running Windows XP and Server 2003. Due to the negligence of their IT leadership, they are now exposed to all manner of security vulnerabilities that won't be patched, and have put both their business and their customers' private data at risk.
There are legitimate reasons for this though. The way I understand it, a big chunk of the problem is that many software vendors refuse to properly separate security related updates from regular updates.
Firefox at least has an ESR version, AFAIK there is no equivalent for Chrome.
See also attitudes like that mentioned in: https://news.ycombinator.com/item?id=9164251
Similar examples can be given for MATLAB - I know of many professors who ask students to run some old and fixed version of MATLAB for many years - they do not want to waste their time dealing with the breakage that inevitably happens with newer releases.
If all vendors actually put in effort into maintaining LTS versions, the situation could be different.
I do agree that the idea of perpetual business value with zero maintenance in a frozen state doesn't make sense - but as a business, I would always be interested in minimizing my maintenance burden.
> but as a business, I would always be interested in minimizing my maintenance burden.
The argument that I'm trying to make is that by deferring maintenance until it is too late, you're actually dramatically increasing your maintenance burden in the long term. The cost of testing and fixing your app to keep up with Chrome updates, which rarely break anything, will be much lower over a 10+ year (or probably even 5+ year) timeframe than sticking your head in the sand and making someone 10 years from now pay an astronomical cost to refactor the app completely.
If maintaining critical infrastructure like bridges and tunnels worked like deploying software updates, every few weeks the bridge or tunnel would close itself to traffic in one direction for a while, without warning.
Every now and then when the tunnel reopened, it would feature a loop-the-loop hidden inside the mountain. While it looked the same from outside, it would therefore become completely impassable by large amounts of the traffic that used to rely on it, including ambulances, food supplies, and the CEO's limo.
Every couple of years, your world famous suspension bridge, used as a widely recognised landmark by millions, would be redesigned more like a Roman aqueduct. With traffic driving on the opposite side of the road. And a high speed railway line running right down one of the traffic lanes. With warning signals that are supposed to alert motorists to the danger, but in practice don't because none of the motorists know what the funny symbols mean.
When software engineering is at least on the same planet as civil engineering, and software updates are subject to the same standards of oversight and approval before being deployed, we can talk. Until then, organisations can and will see software as a tool that is there to do a job, and view updates to working software as a high risk that isn't worth taking if it might stop that software from doing its job. It might be inconvenient for the software developers, who naturally would prefer everyone to bend to their will and never to have to maintain anything older than five minutes, but those software developers are going to have to realise at some point that the world doesn't revolve around them.
They already do this by closing lanes, and even closing entire bridges at night during maintenance when required. The "without warning" part only applies if you're doing it wrong - you should be giving your users advance notice of any maintenance activity.
> Every now and then when the tunnel reopened, it would feature a loop-the-loop hidden inside the mountain. While it looked the same from outside, it would therefore become completely impassable by large amounts of the traffic that used to rely on it, including ambulances, food supplies, and the CEO's limo.
I'm not sure I buy this argument. The failure modes of bridges are well understood and usually life threatening. The failure modes of software are not as well understood, but are rarely life threatening. If you're updating pacemaker software, yes, take extreme precautions and don't introduce defects, but the amount of testing and preparation should depend on the impact of a potential failure.
> Every couple of years, your world famous suspension bridge, used as a widely recognised landmark by millions, would be redesigned more like a Roman aqueduct.
This seems like a false equivalency. Suspension bridges take years or decades to design and build. Software can be built in weeks or even days. The frequency of major infrastructure changes is directly correlated with the time and effort involved in deploying them, in both civil and software engineering.
My previous comment was not really about the realities of civil engineering. It was about how absurd civil engineering would be, if the routine maintenance done in that context failed as often and as spectacularly as software updates do.
To be more blunt about it: Organisations don't want software that updates itself frequently and fallibly, because they simply can't afford to have basic functionality going out of service every few weeks, complete and possibly permanent loss of compatibility with critical services from time to time, and the design and UI changing at random in ways that are confusing to users.
The entire reason to do frequent, small updates, instead of large, spectacular updates is to avoid all the problems you mention in the last paragraph.
In that case, it actually does become quite a bit like civil engineering: if they keep doing relatively minor work, repaving a lane every year without disrupting traffic completely, they can avoid the entire road failing spectacularly and having to be blocked off for a rebuild. Of course, I don't really know much about civil engineering...
So the theory goes, but I'm not sure there's really any such thing as a small update if you're talking about software used by hundreds or thousands of staff that provides the platform on which tens or even hundreds of important business applications are built.
If you're talking strictly about security updates, which are intended to address an identified vulnerability without making any other change, then I would agree there's an argument for making those more frequently.
Unfortunately, many of the key players, including those producing the evergreen browsers, make little or no distinction between essential security fixes and other changes, and at that point the risk/reward ratio of accepting frequent updates can change rather sharply.
Well, it's not life threatening, but what do you think will happen if, every month, your credit card stops working for a few days because of a Windows/Linux regression?
At the upper echelons there is a real appetite for change. Banks realise that they have become software companies, but that their businesses are groaning under the weight of antiquated systems and processes and it is holding them back.
But even the board can only affect so much.
The only reason an auto update would take down an app is if all development has stopped. As part of the process you should be testing against upcoming releases of chrome so you spot these problems well in advance.
But if dev has stopped, then yes, you will have a problem.
Not all software is under your direct control. Once business processes achieve a stable working state, it is imperative that they maintain uptime, sometimes in excess of six-sigma uptime.
Imagine your business is NASA. Are you going to let mission systems auto-update 18 secs before launch?
You can choose to neglect paying on your debt for some period of time, but it always comes back, more expensive, later. Case in point: the IT leaders that refused to upgrade from Windows XP and Server 2003 are now paying astronomical prices for "extended support" and "migration/rehosting." If they had simply invested in maintenance to ensure compatibility with newer OS/browser combinations, they would not have this problem.
Pay now, or pay much much more later. Tech debt is a huge problem in IT, and ignoring it won't make it go away.
Also, is it 100% true that they don't update software on active systems mid-mission?
What if they discover a critical bug?
What they don't do is allow updates to happen automatically. They're carefully pre-tested, and the update is carefully planned and scheduled. Their goal, like most enterprises, is to minimize risk, and bugs that you know about and understand the impact of are much less risky than updated software you haven't been able to test yet.
The Universe auto updates. So you need to update your rover when that happens.
The "risk averse" nature (I can't believe I just said that about a bank) will become a problem.
There should be very little risk, but the fact is, there is actually a great deal of risk. With my professional web developer hat on, the "evergreen" browsers are the best advertisement in the history of computing for why organisations that need reliability in their IT systems should be wary of automated updates controlled by outside parties.
Browser updates break stuff all the time. And not just obscure things, though there are plenty of those. On the small scale, I've seen Chrome updates break basic page layout, and rendering styles as simple as rounded corners or shadow effects. On the larger scale, Chrome sometimes removes entire chunks of functionality, like support for important plugins. New features are often the worst, and it's particularly insulting to be told we should all use HTML5 feature X or JS feature Y instead of plugins, when the reality is that the new version still isn't up to doing what the plugin did a decade ago.
Businesses don't care that it's more convenient for the browser developers if everyone rearranges their entire work schedules to keep up with the latest "living standards". Businesses just want software that works, and having found it, they will go to extraordinary lengths to continue using it rather than playing the upgrade lottery. And given the track records of most of the upgraders in this game, frankly, it's hard to blame them.
You might be surprised by the number of subtle bugs lurking in corners left out as "implementation detail" by standards.
I've learnt to treat browsers as capricious genies that stick to the letter of the standards, but inevitably screw you over by inconsistently doing the opposite of what a sane person would have regarded as implicit.
Take a look at page 45 (page 27 of the PDF itself): https://www.ffiec.gov/pdf/cybersecurity/FFIEC_CAT_CS_Maturit...
Sure it has. When software existed on physical media and there was a real, material cost with distribution of software, there was absolutely a done state. Waterfall software development and actual requirement control would clearly define done states. That's not to say that there weren't bug fixes and updates, but it was (and is) possible to define what an application should do, code it to do so, test that it works, and ship it. But the world has changed, software has become easier to develop and distribute, and software development methodologies have shifted to where we don't have to have a done state in the same sense that we used to. I don't think it's a coincidence that we got agile development after the Internet.
The build that ends up on the disk is not "completely broken or even mostly empty"; it is what the development team and publisher were satisfied shipping as the finished game 2-3 months prior. Any software, and games more so, can be improved with more time, so the intervening time before release day is often taken up with development of a launch day patch to fix bugs that were triaged as acceptable to ship with, and/or to add features/systems/content that just missed the cut.
Launch day patch size is more a symptom of the build pipeline (maybe there is some non-determinism introducing larger diffs than are otherwise necessary) or of late optimizations (to build packing, asset management, etc.) that can force you to download the majority of the game again.
...Well, then. Would you care to explain Assassin's Creed Unity, and Tony Hawk's Pro Skater 5? I suppose you classify them as "not completely broken"?
Tony Hawk Pro Skater 5 was a rushed project and generally poor game. It was buggy but that was the least of its problems. Many people considered it the worst, or at least one of the worst, major releases of that year. I don't think it is representative of contemporary games as your comment would imply.
I can understand that it's "bad" that their story was lacking at release, but isn't it awesome that they'd go to such lengths to serve fans and buyers and have a good product in the end, if not in the beginning?
To me this is not that uncommon. Let me tell you a story (I cannot provide quotable evidence for a central point of the story, except for multiple friends who independently had the same impression after seeing Harry Potter and the Chamber of Secrets twice within a few weeks of its German cinema release).
At the time this movie came out in Germany, there was a law that "FSK 12" movies could only be watched by people who were at least 12 years old. The problem was: Harry Potter and the Chamber of Secrets was rated "FSK 12", while Warner would have liked it to be "FSK 6". So they cut some scene parts out of the movie to get that rating for the German cinema release (BTW: this incident led to a change in the law, so that children under 12 may now also see "FSK 12" movies as long as they are accompanied by a parent). So far, this is a story for which I could provide strong evidence for every claim.
Now the strange part: (#) People who saw this movie shortly after release day and again a few weeks later (in Germany) could all point to some scenes (the same ones) that they clearly remembered from the first viewing, but which were gone the second time. So it seems that further cuts were made after release, and two different versions were actually shown in German cinemas. I personally assume this had something to do with the age restrictions mentioned above (they forgot to cut some scene parts when they initially released the movie to cinemas, so they hastily changed things afterwards and shipped a new version).
As I said: I can provide no independently verifiable evidence that (#) happened - but I deeply trust the independent persons that claimed so and I deeply trust their visual memory (I would only trust people who have a really good visual memory for such strong claims).
TLDR: changing a movie shortly after release is not something that deeply surprises me.
I worked on a team that maintained a data portal website used by major pharma companies and clinical research organizations. It had to be IE7-proof for a very long time because many of our largest clients were still running very old browsers internally because of IT rules. Our R&D team would have loved to drop support for these antiquated browsers, but, shockingly, our sales and operations teams were not willing to tell multi-million dollar clients that they better upgrade their infrastructure or they couldn't use our software.
Happily we did convince them to get some hardware and software upgrades, so by the end of the project they were able to see their website the way most of their customers would. The proxy continues to be a nuisance; their developer can't access the cloud server where their production website is deployed; he has to rely on me to make any changes that are needed there.
>That has never been true.
That opinion is the exact reason we constantly have jr. devs re-inventing the wheel with their great new ideas. Nothing frustrates me more than someone thinking they should re-write an application just for the sake of re-writing it. If the software is feature-complete and lacks bugs, WHY WOULD YOU RE-WRITE IT?? To frustrate end-users with a new interface they have to learn? Because you're bored and want to deploy the language of the month?
Your two options are ridiculous and show a very naive view of the world of software. There are pieces of code at likely every one of the Fortune 100 that were written in the '80s for a specific task, still do that task flawlessly, and have absolutely no reason for anyone to touch them. That's a GOOD THING.
Counter-argument: software does exist in a 'done' state, or rather snapshots of it via releases. When version 1.2.1 is done, it is done.
Version numbers are a sane way to manage things for sys-admins and software engineers (think compatibility matrices and SemVer). I'm going to guess you are closer to the engineer side of that spectrum: can you imagine the insanity of having to maintain compatibility with a library that is self-updating and has an 'evergreen' API? The ability to pin versions is a God-send, falling too many versions behind is abusing that ability.
Some software does actually reach a "done" state, where there is nothing left to do, assuming no major platform changes.
wc will probably work as intended until UNIX itself shambles off the coil at this point.
But you could pull in wc from V7 on modern computers, and it would still work. (assuming GCC can still compile K&R C).
As an aside, I'm pretty sure K&R C is a subset of ANSI C, so any ANSI C compiler should be able to compile K&R C.
Not so, AFAICT. For one thing, K&R function definitions looked like this, with the parameter declarations as separate statements between the parameter list and the body:

int main(argc, argv)
int argc;
char **argv;

Also, in K&R, function prototypes didn't list args.
11:44:57 cory@tizona /home/cory/Workspace/test
$ cat knr.c
#include <stdio.h>

int foo(x, y)
int x, y;
{
    return x + y;
}

int main(argc, argv)
int argc;
char **argv;
{
    int z = foo(3, 4);
    printf("foo is %d\n", z);
    return 0;
}
11:44:59 cory@tizona /home/cory/Workspace/test
$ gcc -std=c89 -pedantic -o knr knr.c
11:45:07 cory@tizona /home/cory/Workspace/test
$ ./knr
foo is 7
11:45:11 cory@tizona /home/cory/Workspace/test
$ gcc -v
Reading specs from /usr/lib64/gcc/x86_64-slackware-linux/5.3.0/specs
Configured with: ../gcc-5.3.0/configure [...]
Thread model: posix
gcc version 5.3.0 (GCC)
All true, but in a larger organisation with all the overheads and co-ordination that entails, you're looking for a pace of change measured in years, not weeks. You have better things to do than constantly redoing work you already did because software changes, and IT systems that aren't set up accordingly are simply failing to meet one of the most basic business requirements for being useful.
For example Flash: currently the Chrome audio APIs do not cover all of the functionality provided by Flash. You cannot, for example, pause the recording of audio.
If you have a product which requires that functionality then you're stuck. You can't upgrade.
In the recent past I was in just that position: we had no choice but to block Chrome upgrades, at least until such a time as we could implement the functionality ourselves.
It would be much better if Chrome would just leave the functionality in there; or maybe provide a special "Enterprise" build which includes all of the features behind feature flags. All they need to do is make that configurable by Group Policies and you've made Enterprises happy (and we can keep upgrading the core browser).
You can use the Web Audio API (getUserMedia) and set an audio pipeline filter that stops sending data while a boolean flag is set.
I don't know if you mean a different API entirely, and I don't mean to disagree---just FYI :)
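The boolean-flag gating idea above can be sketched roughly like this. This is a minimal sketch: `createPauseGate` is an illustrative name, not a standard API, and the commented-out wiring assumes a browser environment with getUserMedia and the Web Audio API available.

```javascript
// Pure gate: returns the input frame while recording, or null while paused,
// so downstream code can simply skip null frames.
function createPauseGate() {
  let paused = false;
  return {
    pause() { paused = true; },
    resume() { paused = false; },
    // Called once per audio buffer; drop the frame while paused.
    process(frame) { return paused ? null : frame; },
  };
}

// Browser-only wiring (illustrative; assumes getUserMedia + Web Audio):
// navigator.mediaDevices.getUserMedia({ audio: true }).then((stream) => {
//   const ctx = new AudioContext();
//   const source = ctx.createMediaStreamSource(stream);
//   const node = ctx.createScriptProcessor(4096, 1, 1);
//   const gate = createPauseGate();
//   const recordedChunks = [];
//   node.onaudioprocess = (e) => {
//     const frame = gate.process(e.inputBuffer.getChannelData(0));
//     if (frame) recordedChunks.push(new Float32Array(frame));
//   };
//   source.connect(node);
//   node.connect(ctx.destination);
// });
```

Note that this doesn't stop the microphone capture itself; it just drops frames before they reach your recording buffer, which is usually what "pause" needs to mean in this setup.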
It's just the age-old resources problem: not only do you have to change the frontend, you also need to make changes on the server. That more than doubles the work (quadruples in our case, because the server requires far more QA effort).
That comes at the cost of other features, features which are more valuable.
And the partial rollouts, with teams split up and stuck, demanding you hack together tooling on the fly so that work could continue anyway.
And the horrible cross-application databases, which would go inconsistent unless you built what amounted to a reduced form of data version control into every one of your applications.
And those prototyped tools, made by some intern in obscure languages, that would secretly fester into the "Main Tool" of a department. And all hell would break loose if some virtual environment update broke those env dependencies.
You know what: it was bad back then. Real bad. And today is better. And even nostalgia can't save it.
We threw all the good stuff with desktop out at the same time as we moved to intranets.
You can still have centralized deployment, thin clients, etc., but with the niceties of proper native apps (good multi-screen/multi-window support, good support for complex interactions like shortcuts, ctrl-clicks, etc.).
Even on mobile where native has fewer benefits than on desktop we can't make web based apps that are better than native ones.
A clean understandable web UI is good for a seldom used app, but a lot of the intranet apps are "constant use" and were faster to use in their legacy dos implementation than the newer web version. Browser hell just makes the problem worse.
This. Click, wait, click, wait, click, wait is in no way better than how quickly you can fly with a well memorized set of keyboard shortcuts in a native app.
This is all about attention to detail, which internal corporate apps almost always lack.
There's no problem implementing it. The designer should just think about it.
Totally do-able on the web. It's often the web apps architecture that prevents a good UX, not the browser's capabilities (which are really good now).
> Even on mobile […] we can't make web based apps that are better than native ones.
That's an issue of the developer, not the browser itself. I do admit that Safari/iOS is really hurting here but you can work around that if you really want to (it will still be a sub-par UX, though).
As bad as VB6 and its GUI builder were as a real language, they did allow even amateurs to write a decent working CRUD app in minimal time.
You mean where everything runs on the UI thread? Gotta pass on that. :)
You can't be sure anything is happening in a locked-up desktop app either, though; at least you can close it gracefully if it's doing the work on another thread.
IME customers close stuff if Windows shows "Not Responding" but not if you just have a fake progress bar.
Edit: Did you mean the browser itself could freeze? That's a thing of the past with Firefox being the last browser to separate the content and chrome process. So the web page may freeze, the browser window itself shouldn't.
Regarding other APIs, a great number have been available for many years, and any newly introduced API from at least the last 5 years is either non-blocking or async.
If you just need a form, just build a simple POSTing <form> without any JS. It will stand the test of time. A simple form from 1996 will still work today. A LiveScript-enabled form from 1996, however, almost certainly will not.
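For illustration, a form in that spirit (the action URL and field names are placeholders):

```html
<form action="/feedback" method="post">
  <label>Name: <input type="text" name="name"></label>
  <label>Comment: <textarea name="comment"></textarea></label>
  <input type="submit" value="Send">
</form>
```

No script, no framework: any browser from 1996 onward will render and submit it.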
That said, there's a middle-ground we should strike here. At some point, being out of date (sometimes by as much as half a decade or more!) on your browser trumps the cost of keeping internal software up to date. This is not to mention the cost to third parties attempting to support your users browsing the public web, and to browser vendors who must take investment away from their latest versions to continue supporting legacy software. Allowing indefinite suspension of upgrades by IT is definitively a mistake. It adds insult to injury that Google is repeating a mistake which we all learned so much about via IE and older versions of Firefox. We will all collectively pay for this if they don't course correct.
The middle ground here is providing extended support channels like what's being done for Firefox and Edge. Chrome already has Canary, Dev/Beta, and Stable channels. A slower moving, more stable channel for businesses would be a natural solution here.
This gives businesses time to test and adapt, limits the total number of versions in the wild that must be supported by browser vendors and web developers, and provides just enough paternalistic motivation to keep your internal software in a good state (upgrading to support latest browsers is a forcing function for testing, performance tuning, security tightening, etc.).
So why don't they have two browsers - one for internal use only (IE6), which is firewalled to only internal network shares and another browser (Chrome, whatever) which can only go to the real internet?
At least that way you'll be browsing the wild internet safely.
I definitely do not claim they were right, but one of my previous employer's (not a software company) IT departments insisted that additional applications on employees' computers were extra workload for them to maintain. (Occasionally security was also used as an excuse.) So the text editor that was allowed was Notepad, and the browser was IE.
Using ancient versions of IE to get work done was awful and in the current security environment, I don't think it's safe for a browser that's not in its own VM. Heck, you could use Chrome Remote Desktop!
Most companies I know of (customers) let browsers auto upgrade now or they vet upgrades and they get pushed to client machines in a few days.
What plumbing might prevent a browser update? I can't think of a good use case.
My work has increasingly become creating software for enterprise clients. They all want it to be delivered through the browser. Over time, Chrome has become the primary target because it's simply unfeasible to target IE. Organisations have bizarre and arcane rules for who gets which versions of IE, so it inevitably leads to developing for the lowest common denominator, which in turn leads to disgruntled clients who see some fancy feature on the web and can't understand why we can't give them the same thing in the time and budget they've allocated. Now, we generally say we'll target Chrome and provide "some" support for IE.
But I absolutely guarantee that an orgs IT dept will seize this opportunity to convolute and complicate who gets which versions of Chrome.
And its all for nought. In my experience, IT departments cite vague security requirements, but when you scratch the surface there's typically no security at all. For example, at one company who really does need strong security policies, it was common knowledge that you could circumvent their stupid auth process by opening the task manager and quitting the process. You could also open up the developer tools console and send whatever requests you wanted to any service because features were enabled and disabled in the UI only.
My point is, IT departments in my experience only talk about security and use it as an excuse to do bizarre stuff that looks smart, but is generally the opposite.
1. The conference room where I was working had two ethernet ports; one went through the proxy, and one had a clear line to the internet. Apparently this was a common setup in the offices.
2. Everyone was required to use IE, and to use the proxy-managed ethernet port. But I develop using Firefox and I didn't get blocked by the proxy, regardless of which port I used. It turned out that the proxy had to be set up on the browser. The developers were granted admin rights on their machines so they could manage the proxy settings for IE but no one else could. However most people had the ability to install Firefox or Chrome, which would give them proxy-free access to the internet. They just had to know they could do it, and then not get caught doing it.
Turns out this was all about control rather than security; they had a real problem with people spending time on websites they shouldn't have been on, during time they should have been working.
The vast majority of large companies will control and test key software updates, balancing the various risks (security patches, obsolescence, operational incidents..).
Not allowing auto-updating to be controlled basically means that the software is not intended to be deployed in enterprises.
3 or 5 years might be more suitable.
It is a fallacy to believe that you can stave off updating a modern internet-connected browser indefinitely; the IE6/7/8 hell has driven that lesson home in the industry, and security updates are a necessity. If you do need to stay with one particular browser version, then you use a virtual machine or some other properly sandboxed environment to offer it. You can keep running IE6 on Windows XP for as long as you like with a VM that thinks it's the year 2003 and which for some reason can only reach http://oldunmaintainedapp.legacy.intranet.example.com. That's fine too (although I would hate to maintain that solution).
So go contrary. Guerrilla IT. You push for auto update so that they get a motivation to clean their shit up.
Absolutely. I still have websites that I designed with ie6 in mind that work just fine. But if modern design trends are any indication, it just won't happen. Designers want too many of the new flashy features.
You want to turn off things under Privacy like "Use a web service to help resolve navigation errors", "Use a prediction service to help complete searches and URLs typed in the address bar", "Automatically report details of possible security incidents to Google", "Use a web service to help resolve spelling errors", and "Automatically send usage statistics and crash reports to Google." And not login into Chrome.
The problem is that even if you tell all your employees what to disable, some of them will probably not comply. So a truly paranoid IT team might want to make their own fork of Chromium with those features removed.
At the very least, they are modifiable to not do that.
Also, let me point you to the Legacy Browser Support extension.
With proper GPOs you can force a domain/subdomain to open in IE directly from any links.
Haven't used it for years, but back when I was a Windows sysadmin it was the final proof that FF was better than IE: in addition to being Firefox, it could also be IE : )
I would like to run 3-4 browsers at all times, each with totally sandboxed identity and IP address.
However I do not necessarily want to run a full blown VM for each of them.
This is very simple and efficient for sandboxing server daemons; I do it all the time and there is almost zero overhead involved in chroot/jail. However, I have never chrooted a GUI application and cannot find any instances/recipes of anyone chrooting a browser...
It appears that this does the things I am looking for; however, I am suspicious: why do we need a new project like this rather than a simple recipe for the existing jail or chroot system calls?
What is it that makes something like firejail necessary?
At some point, developers are going to target their stylesheets for WebKit only, because Firefox rendering differences are going to be seen as nuisances that aren't worth overcoming in order to reach a tiny minority of users. Firefox will have to work toward WebKit compatibility as Microsoft Edge does.
WebKit is doing pretty well for something that was originally part of KDE.
Although some code is still being ported between them (e.g. Safari recently "Enabled support for a modern CSS parser, ported from Blink, that improves performance, specification compliance, and compatibility with other browsers, while also adding support for scientific notation in all CSS numbers" https://webkit.org/blog/7120/release-notes-for-safari-techno...).
In the last 30 days for the branded sites we have roughly these percentages of sessions by browser:
We will continue to support the same browsers for the next two to three years and then adjust as needed.
In reality, each mobile device and consumer computer has a different WebKit version.
There are now Blink, WebKit, Gecko, and Trident in the mainstream, and a few others lurking.
Most of them have equivalent standardized features, or are in a state where you really shouldn't be using them. Don't expect every -webkit- prefix to work in Firefox.
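For example, a stylesheet that ships only the prefixed form breaks silently in other engines; the usual pattern is to write the prefixed properties alongside the standard one (`user-select` is just an illustrative property here):

```css
.toolbar {
  -webkit-user-select: none; /* prefixed forms for engines that still need them */
  -moz-user-select: none;
  user-select: none;         /* the standardized property, last so it wins */
}
```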
Personally I have tried to like it, multiple times but I always get annoyed and go back to FF. But then again I prefer Linux over Mac and Netbeans over IntelliJ so maybe it's just me.
You are freely executing untrusted code from unknown parties, coming over insecure and unencrypted channels. You really can't be sure who is sending that HTTP.
And don't talk about sandboxes. There is a sandbox escape fix in every version of Chrome. This isn't on Google; there are way more attackers than fixers.
Basically, web browsers are under constant, concerted attack by every single bad actor out there. And you trust them to sync and secure your passwords?
You have more faith in humanity than I do.
Later versions of Chrome, IIRC, will trigger a UAC prompt on Windows before displaying passwords, or something similar.
It's also generally been trivial for software to mine saved passwords from all browsers. I'm not fond of password managers personally (I prefer outright memorization), but password managers at least try to keep their contents secure.
The biggest issue is the experimental features webmasters use, which then force the rest of us to use Chrome. I'm not so sure it's that Chrome itself doesn't care about standards.
Does this clarify?
But it's true; interestingly, Safari often holds them back, because so many devs use MacBooks and Safari.
Chrome is still dominant but way less so when you filter down to only OS X users. This is especially true if you filter down to only the US where Macs are more popular than worldwide.
Unfortunately it seems like both that site and StatCounter require a payment to view browser percentage by platform. This info is out there for free somewhere.
For example, yesterday I was implementing a server that uses HTTP multipart to stream images to a browser. Multipart uses two methods to frame messages: a length prefix and a boundary token, so each part you send carries a Content-Length header and is terminated by the boundary line.
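A minimal sketch of that framing, assuming a `multipart/x-mixed-replace` stream with a hypothetical boundary string `frame` (the helper name and defaults are mine, not the original server's):

```python
def frame_part(payload: bytes, boundary: str = "frame") -> bytes:
    """Frame one part of a multipart/x-mixed-replace stream.

    Each part is framed twice: the Content-Length header is the
    length prefix, and the boundary line delimits the part. Browsers
    differ in which mechanism they actually rely on, so both must
    agree with the real payload length.
    """
    headers = (
        f"--{boundary}\r\n"
        "Content-Type: image/jpeg\r\n"
        f"Content-Length: {len(payload)}\r\n"
        "\r\n"
    ).encode("ascii")
    return headers + payload + b"\r\n"
```

If the two framings disagree (say, a stale Content-Length), an engine that trusts the length prefix and one that scans for the boundary will behave differently, which is exactly the kind of bug that hides when you test in only one browser.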
If I had only tested in chrome I probably wouldn't have noticed that my code had a problem that was breaking other browsers.
Apologies for creating the misunderstanding.
It is slowly starting to look like the early '00s: most developers are starting to support only Chrome, with no testing in Firefox or the latest IE.
On Android it is even worse; no one is testing with Firefox Mobile (Fennec). This is WinXP/IE6 all over again.
Also, they're starting to phase out -webkit- prefixes for experimental Chrome-only features. They're hidden behind flags, so you basically can't use them in production unless the feature is endorsed as part of the spec.
Not to derail, but Mobile Safari is the real IE6, plus lock-in, so users can't even get away from it.
It's going to be harder to unseat Chrome. IE6 wasn't so difficult, thanks to an increase in competition offering features that IE wasn't even working on (standards, tabs, etc.). How would you even unseat Chrome?
I miss the "DOM" tab, that Firebug for Firefox had features had, and I haven't found anywhere else. You can browse the DOM tree state and edit - similar to Smalltalk and Windows Registry - very useful & powerful. Sadly, they just killed Firebug, broke the addon, deactivated it with an update and replaced it with a still not feature-complete (at the moment sub-par) solution.
It's in the Fx DevTools since v48. You may need to enable it in the DevTools settings, though.
> Sadly, they just killed Firebug, broke the addon, deactivated it with an update and replaced it with a still not feature-complete (at the moment sub-par) solution.
Who's "they"? The Firebug team has worked with the DevTools team since forever to port all the functionality to the internal tools. The Firebug team decided it was not worth maintaining the standalone version beyond v2 (also related to architectural changes to Fx; see project e10s), since they have almost achieved feature parity by now and are working on the last remaining pieces. Firebug v3 is Fx DevTools plus a lot of extra features that Firebug never had. If you miss a feature in the DevTools, try Fx DevEdition.
But my guess is that Google will become more and more monopolistic, more and more closed in disguise. And closer to big companies' interests, with DRM and such.
And if Mozilla keeps being Mozilla, they will be a natural alternative.
Nah, people have lots of memory.
I honestly don't know (but...then that's not such a bad thing, right?).
- debugging with source maps: it doesn't work properly (I can set breakpoints and inspect variable contents in TypeScript files in Chrome, but not in FF)
- JS exceptions: clicking on the filename inside the stack trace opens a new window with the source of the JS file, not the debugger (ugh)
- doesn't support TypeScript highlighting
I won't directly mention the project that's best placed to do that, as it's not yet ready for primetime, but let's just say Chrome may not have the performance crown in the next 5 years.
Also, aside from on the Mac, what's holding back Firefox? I accept that the Mac version seems to have more problems than the Windows or Linux versions (I have no technical explanation for this, it's just what I've heard anecdotally), but on Windows it seems to be a solid choice. Is it just about momentum for Chrome rather than superiority at this point?
(And don't get me started on services like XMarks. They're miles behind.)
The other thing is that major libraries (CSS and JS: jQuery, Bootstrap, React) make sure they work on all browsers, so you don't have to worry about that part.
The WinXP/IE6 era was hell because that never happened: you always had to make changes for IE6, and other changes to support IE7, and so on.
I did not check on mobile though.
Known issue, and I've yet to see a fix (other than "mbasic", which you have to use for Messenger anyway).
Almost everything else seems to be fixed with uBlock Origin and/or Reading Mode.
Most web problems are now bugs, either by the site developer or in a specific browser.
That's a long way from the IE6 problems, where either you needed a unique site configuration for every browser or you supported IE6 only.
Certainly you realize this kind of cycle has been going on for longer than that.
In browsers, yeah, but that's only because browsers have only existed that long.
This is pretty much the way of all software cycles since the dawn of computers. No area of software that i'm aware of has really broken out of this kind of cyclic behavior.
I'm always surprised that this is the case. Firefox Mobile + uBlock Origin works really well and is only very slightly slower than Chrome, while also blocking ads.
It’s not that sites support Firefox, it’s that Firefox tries to implement 1:1 what Chrome does. At far less than a tenth of the budget.
Probably doesn't help that Google plasters "get Google Chrome" over all of their services.
Read: Google is the internet. Google is whom you should trust.
What I don't understand about Firefox is that Mozilla have a lot of money but progress seems very slow. Sure they get there in the end (or most of the way) but it just seems to take forever. Why?
Money isn't necessarily the issue for Mozilla; they managed to create a decent browser with a fraction of the big players' budgets. As always, it's the triangle of complexity, time, and manpower: they just can't fight on all fronts at once. At the moment, they are trying to get rid of a lot of legacy that prevented them from improving some aspects of the system (see project e10s, WebExtensions, etc.), all while preserving as much of the ecosystem as possible, which is very hard due to the current extension system, created more than 10 years ago, that just doesn't fit the bill anymore. E.g. e10s has been going on for 5 years or so and only just started rolling out a few months ago.
The good news is that with these projects completing (improving overall responsiveness), there's a huge new project starting (Project Quantum) that aims to overhaul major parts of the rendering engine. This will integrate a lot of the research and investment poured into Rust and Servo, which will probably outrun current browsers by an order of magnitude (at least in some aspects, as has already been shown; see WebRender).
As opposed to Google, Apple and Microsoft?
Consider what happened to Opera. I think you vastly underestimate the complexity of modern browsers. By orders of magnitude.
While I also hate that Chrome has such a big influence, if all of them implement the common specs, I would be OK with that.
When someone calls me ignorant I don't necessarily feel like replying.
Our opinions are based on life experience, watching the same cycles unfold time and time again.
The next one will be the demise of open source ideals after everyone has migrated away from GPL for everything but the Linux kernel.
I will give it around 10 years for it to happen.
What about updating the BSD distribution on the PS4?
Firefox Mobile does not meet that rule for us. I know personally a single person who uses it, and that's only because he's fiercely anti-Google.
There's just little sense in testing against something no one is using, lest we want to test NetPositive.
Maybe because (almost) no one uses it? It's definitely a vicious cycle: if there are no users, there won't be any testing on that platform; if there is no testing on that platform, there are no users.
Developer time/resources are not unlimited. It's not worth their while to target an obscure platform.
This is logical. If you have limited developer resources, it makes sense to target the most popular combos first. Until we have true "write once run anywhere" it will always be this way.
PS - I miss Presto-based Opera.
I still encounter companies downgrading to older versions of IE to support their legacy applications.
I see other pages talking about it going back to 2010/2011
Because admins have to answer to the auditors (and our consciences) when they ask the question "How do you ensure that personal and confidential information is not leaked from your environment?"
Which is why Windows 10 is my worst nightmare.
But it still installs the entire Windows Store with Candy Crush Saga, Facebook, Minecraft, Xbox, etc., and they make it nearly impossible to remove. Why on earth they would force that crap down the throats of their ENTERPRISE customers is beyond me.
I fought with it for months and finally gave up and cancelled the deployment project until MS offers a better way for enterprises to control the Windows Store.
Update: Added more below
I.e.: when trying to build a secure environment, you must eliminate all unnecessary attack surfaces. The customer did not need "XboxIdentityProvider" installed in their Windows 10 Enterprise environment. But this is something MS feels must be installed on all Windows 10 PCs, thus they made it nearly impossible to remove.
And I'm not talking about controlling employees here, but simply enforcing settings for compatibility with legacy systems, legal compliance, etc.; the list is endless.
Out-of-date web browser bugs are pretty much THE mainstream hacking route. It is the path of least resistance.
I guess the TCO for digital signage devices is cheaper than the incumbents' (cost of an Android signage app/web app vs. a Windows app/web app plus remote management control).