I lead the add-ons team at Mozilla. Add-on compatibility was definitely a big consideration in our plans to switch to a faster release cycle, and we're working on proposing some changes to the way it works. As you mention, the current way of doing things won't work 4 times a year.
These changes will probably include automatic compatibility with certain major versions. We'll have more to share soon on the add-ons blog.
Revisiting the submission process itself is welcome and seemingly necessary; but I'd be willing to deal with an inefficient submission process if the upgrade path were well documented.
Maybe it's time for Mozilla to move to using minVersion only, like Chrome does.
Chrome's numbers are not on the same scale as FF's and IE's, of course. Users can't tell the difference between Chrome 6 and 7 the way one can distinguish FF 2 from 3, or IE 6 from 7. I wonder, though, whether Google was just trying to catch IE's version number, and once Chrome is at 10, they'll stop increasing it so rapidly.
It seems like the Firefox 4 project simply got out of control, there was no clear cut-off, features were added willy-nilly throughout the beta process, and the browser (for a while) seemed stuck in perpetual development. It doesn't look good, and doesn't make Mozilla look like they can ship without the temptation to add more or tweak more.
Fx1.5 and 2 both had two beta releases. Fx3, 3.5, and 3.6 had five beta releases each. Fx4 is scheduled for at least twelve betas.
They need a significant change in their project management, and setting a fast-paced roadmap is probably an aspirational way to force themselves to ship.
Netscape all over again?
It almost seems reasonable to think any browser at level 7 should have full HTML5 compatibility. Level 8 should have IndexedDB as the built-in database.
I haven't spent a lot of time considering this idea, but I'm attracted to the stability that could be gained from having versions somewhat mapped to implementation standards.
Anthony Laforge, the Google program manager in charge of Chrome releases, was kind enough to give a talk at Mozilla's all-hands meeting in December. He gave a version of this presentation: http://techcrunch.com/2011/01/11/google-chrome-release-cycle...
Each Chrome release is in development for 12 weeks. There are always two 12-week cycles in progress at once, staggered by 6 weeks. So users on the stable channel see a new version every 6 weeks.
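The staggered-cycle arithmetic above can be sketched in a few lines. This is just an illustration of the scheduling math; the function names and the start date are mine, not actual Chrome milestones:

```python
from datetime import date, timedelta

CYCLE = timedelta(weeks=12)    # each release spends 12 weeks in development
STAGGER = timedelta(weeks=6)   # a new cycle starts halfway through the previous one

def release_schedule(first_branch: date, n: int):
    """Yield (dev_start, ship_date) for n staggered, overlapping cycles."""
    for i in range(n):
        start = first_branch + i * STAGGER
        yield start, start + CYCLE

# Illustrative dates only -- not real Chrome milestones.
for start, ship in release_schedule(date(2011, 1, 3), 4):
    print(f"dev starts {start}, ships {ship}")
```

Because consecutive cycles start 6 weeks apart, consecutive ship dates also land 6 weeks apart, even though each individual release took 12 weeks to build.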
This has benefits much more profound than just a large number before the dot. For example, developers working on new features actually feel less schedule pressure. If a feature misses the deadline for version N, the team knows they'll have another chance just six weeks later for version N+1. And for users, the browser becomes more like a web app. Do you know what version of GMail you use? Do you notice every time they fix bugs or improve performance on Google Maps? No - every time you open the app, you simply have the latest version.
Laforge mentioned some things about Chrome development that make this work:
- Silent background updates. When an update is available on your channel, Chrome downloads it in the background and installs it. The next time you launch the browser, it is running the new version and deletes the old one. Because of this, the majority of Chrome users are up-to-date within days of each release.
- Feature switches. Development is done on trunk, but each work-in-progress feature can be disabled with a switch (either a preference, a build option, or a command-line flag). This lets developers work for several cycles before turning a feature on for the stable channel. It also lets product management disable a new feature if bugs are found during beta, without changing any code. Web startup folks might recognize this as the "always ship trunk" pattern used by sites like Flickr: http://news.ycombinator.com/item?id=1463751
- No support for old versions. Instead of deciding which "version" to install, Chrome users decide which "channel" to follow. If you are a developer and want to test upcoming changes, use the canary or dev channels. If you want a more tested browser, use the beta or stable channels. New features roll out to each channel in turn. If you are on one of these four channels, you get updates. If you are not, then you don't. (Just like you can't choose to run an old version of GMail or Google Reader.)
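The "feature switches" bullet above can be sketched with a minimal example. This is only an illustration of the pattern, assuming a command-line flag overriding a build-time default; the names (`DEFAULT_FLAGS`, `new-tab-page`, `render_tab`) are hypothetical, not Chrome's actual flags or API:

```python
import argparse

# Build-time defaults: unfinished work ships to stable with its switch off.
DEFAULT_FLAGS = {
    "new-tab-page": False,    # work-in-progress: off on the stable channel
    "gpu-compositing": True,  # finished feature: on for everyone
}

def parse_flags(argv):
    """Let a command-line switch override each build-time default."""
    parser = argparse.ArgumentParser()
    for name, default in DEFAULT_FLAGS.items():
        parser.add_argument(f"--enable-{name}", dest=name,
                            action="store_true", default=default)
    return vars(parser.parse_args(argv))

def render_tab(flags):
    # Both code paths live on trunk; the flag decides which one runs.
    return "experimental tab page" if flags["new-tab-page"] else "classic tab page"

assert render_tab(parse_flags([])) == "classic tab page"
assert render_tab(parse_flags(["--enable-new-tab-page"])) == "experimental tab page"
```

The point of the pattern is that disabling a buggy feature during beta is a one-line default change (or a flag flip), not a code revert.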
Mozilla is just starting to talk about the new roadmap and how to achieve it. Chrome's experience is useful, and it was really helpful of Laforge to share it with us. But we're different both technically and organizationally from Chrome, so just blindly copying what they do would not be a good idea. The wiki shows a plan for releases every 3 months (not every 1.5 months like Chrome). We use Mercurial with long-lived project branches (unlike Chrome which uses Subversion and does all development on trunk). And our product differs in some important ways, like the ABI for binary extensions. (Even non-binary Firefox extensions may reach deeper into the browser internals than Chrome extensions.)
But we do think that the benefits for users (more frequent improvements to the browser) and developers (more predictable schedule, fewer release delays) are worth pursuing. There are benefits for web developers too, since it reduces the time from when we start implementing new web-facing features (like IndexedDB, or CSS3 transitions) to when you can deploy them to more of your users.
I don't appreciate this because I make money from software. To me, a new major version number is:
a) an indication that there have been major changes. It's not something that increments every 12 weeks.
b) something most people reasonably expect to pay for.
I know Mozilla isn't first with this approach. It still sucks.
Edit: Interesting that it has been driven by the same company that rendered the term 'beta' meaningless.
Edit 2: Why the downvotes?
Web apps are clearly going to break this version treadmill. Even if you're downloading a package to the desktop, software is being modeled more as a service: you pay for the right to use it for a period, including all updates, rather than for a perpetual license that in reality will expire and need to be re-purchased as soon as the next major version hits.
Pay as you go is better for both users and developers. It means you can charge less up front, spend less on marketing (since the cost of trialing the software drops) and focus on delivering the _best_ experience to all your users rather than denying some features to existing customers in order to create a future revenue event.
"...the maker hopes will squeeze you into opening your wallet, even if you're mostly happy with the last release."
I don't understand, what is the problem with that? They can hope all they want, if you're happy with the last release you don't need to update in the "old fashioned" system, it's yours forever. In the "new" subscription system you could be perfectly happy with the product as it is but you are forced to keep paying for updates just to use it.
The idea of the "complete overhaul" is something engineers love to dream about, business managers hate to do and end users generally just don't care that much about. That's why they happen so seldom - and usually only if the existing package is a pile of unmaintainable crap.
The shrink-wrap model encourages chasing adding more bullet points to the outside of the new box over actually improving the day-to-day user experience. The service model encourages making your actual customers happy with their product AFTER the purchase, so they keep buying.
The whole concept of a large, up-front fee is driven by traditional mass-media marketing strategy: spend a bunch of money making something sound really great to buy, sell it for a bunch of money up front so you can get a return on your advertising spend. Don't worry about what happens after that.
"I don't understand, what is the problem with that?"
The problem with that is if there is some small feature change that would really improve the current product (say supporting an additional new file format on import) there is little reason for the maker to add it to the old release. Instead, they lump it in with the new version and hope it forces you to buy a whole new license. So a feature that might only add a small marginal cost but would make current users happy for longer doesn't get released to them.
Anything that is constantly updated with anything more than bug fixes will become "a pile of unmaintainable crap", or just obsolete if the core isn't changed. Core changes normally happen before that point, though; that is, if business managers' incentives aren't all wrong. A fairly recent example of that was Twitterrific for iOS: they felt it was heading towards being a pile of crap, so they had to stop and go back to the core (even though it meant unhappy current customers).
Users not caring about it unless it immediately comes with new features, or at least all the old features, is exactly my point! And if what the user is paying for is updates, they are absolutely not going to accept losing any features (and they won't be happy not updating either, since that is the whole reason they are paying in your model). If the user owns a version and sees that the next version doesn't have features they need, they'll stick with the old version until the new one has those features. In your model they have already paid for that update.
"So a feature that might only add a small marginal cost but would make current users happy"
Prices aren't just based on costs; they are usually (in the IT sector) based on how much more they "make users happy", and in some cases they are almost entirely based on that; clothes, shoes and Apple products are the most obvious examples. Which, to repeat it another way, is the problem with core updates in your system: they are high "cost" but they only prevent the programme from going to shit, so they don't "make current users happy" (they just prevent future unhappiness).
That's not to say that your model doesn't work in some cases: anti-virus programmes need constant updates but very seldom core changes (although, oddly enough, at least for Norton it's much cheaper to buy the newer version from Amazon or a shop and get continued subscription that way than to renew a subscription). And you mentioned GPS apps; the same applies to them, as no core changes are needed.
Angry Birds on iOS is absolutely not a case of this, though: you don't pay for new levels. New levels are free, and they're an example of the opposite of what you're describing. You pay up front, and they have kept adding levels well beyond the point where it would have been reasonable for them to make an Angry Birds 2 and put the new levels in that. It is a special case, but it is the traditional model, just with a new attitude (on iOS, that is; on Android it's the ad-revenue model, which is different again).
I don't want a relationship with a company. For most apps I want to buy and own a thing as it is now.
This "old and broken" model persists because it's something that customers get instantly and can commit to knowing in advance what their full investment will be.
Also, iPhone and Android apps have the potential for ongoing in-app purchases by users. So you buy the GPS app once for a very low price (or free) but you have to keep buying the data updates if you're using it. Or new levels for Angry Birds.
I really hope they don't change any rendering components at the same pace, though. I already have to strike any contractual obligation to test/support Chrome in commercial web development projects, because it's a moving target and Google do release breaking changes. If Mozilla browsers go the same way, then I really will start coding for the nice, stable, standards-friendly world of IE -- and that's never a statement I thought I would write on a serious forum with a straight face!
That means server-side validation, which of course one needs to do anyway. But it's not too hard to imagine somebody making a site during the brief window when Chrome supported HTML5 form validation, and then discovering their site broken in Chrome a few months later.
I'm glad to hear it's turned back on; I hope the form-validation interface is as sleek and professional-looking as the one in Firefox 4.
Why not do fast development cycles that result in point releases? That, combined with background updates like Chrome does, would be great.
Since you asked... not every time, but often. You see, it's apparent from my experiences that Google makes incremental changes to the live system.
For example, when you say developers don't have to decide which version to install, but only which channel to follow, is that how they switch among channels - just restart the stable version using a different switch?
Regardless of your channel, you can turn on different features in Chrome by going to "about:flags" (or sometimes by passing command-line switches) - but only if those features are in the build you are running.
One reason is that while Chrome's UI may look much the same, its behaviour often changes significantly, and not always for the better.
They broke CSS3 rounded corners just as web designers were starting to use them instead of graphics.
They made a political decision to remove H.264 support just as the HTML5 video tag was starting to gain traction.
There is a reason technical communities develop standards. Unfortunately, the web development community has completely lost the plot in recent years. The W3C have become so slow and politicised that they have become irrelevant. The browser vendors are moving so fast that no-one can keep up.
No-one can actually use all of these state-of-the-art features in serious projects anyway, because they don't work in most browsers. As keen as web developers are for long-standing pain points to get fixed and to play with new toys, to get the job done you still have to use the old tried-and-tested techniques anyway as a fall-back for all the browsers that don't support the latest cool stuff. If you have to do that anyway, there isn't much point to using most of that cool stuff in the first place.
Basically, rounded corners weren't rendering smoothly, and if there was a border applied then you could even get the wrong colours showing, which looked awful.
If they're continually adding features and not breaking backwards compatibility on a "fixed" timeline, then the major version jumps that Firefox, IE, etc. do seem more pointless than anything.
If I've got an application that we're continually developing and we're using a two-month development cycle... at what point does it make sense to bump the major version? Never? What's the point of it then?
It's the fact that we've become accustomed to major versions representing periods of time.
Version numbers should be representative. If it was the case that they used the minor number to represent their two month cycles, then they'd be at 1.11. What's the point of the 1 in that scenario? It's redundant.
The major version increments fit in better with their development lifecycle, so why not use it?
I myself am already running version 11 (screenshot: http://jhn.me/4RYO) via their nightly builds.
"Please don’t read too much into the pace of version number changes - they just mean we are moving through release cycles and we are geared up to get fresher releases into your hands!"
Now look back: the public stable release of Chrome 1 was on 11 December 2008. That's roughly 5 up per year. In another year it will be 15 and then 20. After a while it will start looking silly (like it did with MS Office) and I don't think the question is if they will stop with this aggressive version-numbering, but when.
They now have beaten IE to the release of version 9, they will beat Opera for version 11. I'm guessing once they have beaten all the other browsers they will stop and get back to point-releases like everyone else.
But not until they are ahead of the curve and can say "Your browser only goes to 9 or 10? My browser goes to version 11!". For lack of better words: pointless or not, I'm guessing they want their browser to be the one that goes to 11, Spinal Tap style.
The number is all a mind-trick and you have to be a fool not to see the game being played here.
Google simply decided to go for big version numbers, because what does switching from 2.95 to 3.0 mean if you release new features constantly and major steps thus never happen?
Sticking to decimal point version numbers makes no sense in this case.
Until now, every version of Firefox had significant changes -- visual changes, changes to the way you interact with the browser.
It now sounds like Firefox will follow Chrome. It will keep the same UI, and simply add new features.
Which is fair enough, I guess, if the browser is to become the next platform (for Web apps). It's time for the browser to 'fade' into the background, to give Web apps focus; to empower the apps, rather than interfere with their operation -- much like any other OS!
We don't emphasize version numbers in our marketing because consumers are almost never aware of what version of the browser they're running other than "latest". Do you know what version of GMail you're using? Neither do I. With this in mind we chose a predictable versioning scheme even if it's somewhat unconventional in the world of desktop software.
The period of time required to put together a release was still sufficiently large that both engineering and management felt pressure to shoehorn stuff into releases. The cost of missing a release was waiting another quarter. For many engineers this means missing your quarterly objectives completely. The temptation to ship unpolished features was high, and the morale hit for missing a release was still tangible.
Moving to the interleaved cycle with 6-week updates is intended to break down these problems by making the consequences of missing a release less severe to both engineers and management. We are still in the early days of this approach and the team is getting used to the adjustment.
I'm pleased to see Firefox is going to be trying to improve the frequency of their releases. Staying fresh is critical to vitality in this modern browser landscape. When I worked on Firefox prior to 1.0 we would do releases roughly quarterly but the team size was much smaller. A high level of discipline in engineering and management is required to maintain regular releases with a large team size.
Of course, the biggest obstacle will be getting people to add support for the feature to their site.
Quite surprised to see this, given that empirically the average human brain-hand response time is around 200 ms. Only in competitive CS have I seen <100 ms response times. Sure, e.g. 20 ms instead of 50 ms network latency in a non-LAN setting makes a difference in that scenario, but only when you are not at the very top of the relevant bell curve.
Seems like a moot bragging point. I'd love to hear from a more knowledgeable person what I'm missing.
Once latency becomes greater than 50ms, it becomes human noticeable -- aka "not instant."
Given that UI response isn't used to trigger human reflexes, you are just misunderstanding the intent.
Of course this is just my experience, scientific findings may very well dispute my ignorance.
I see it pretty regularly in playtests and such. Anyway, it's not something someone can simply say "Hey, this app took longer than 50ms to respond!" It's more of a gut feeling that something about it is slow.
They are likely aiming at the 50ms threshold because that's the point at which you really can't notice it, most of the time. At > 50ms, it becomes more and more noticeable, and by just 100ms people actively start complaining about the slowness of response.
As for the FPS gaming issue, the reality is that you don't notice it in FPS gaming because the games are built to hide it from you -- when you press your fire button, your gun fires, regardless of net latency. And that has been true for a very long time. I think the last game to really wait for server response before doing anything was Quake 3, and probably some of the Quake 3 engine games.
I could never stand Quake 3 for that reason, if you had anything higher than 30-40ms it started feeling like you weren't controlling the game.
The one mod that made great strides to fix the issues with the Q3 net-code was CPM (www.promode.org). 50-70ms to the server felt like 20-30ms in vanilla Q3, and when you hit 20-30ms in CPM it felt like LAN play in VQ3. It fixed the niggling issue of lost packets (which caused players to warp) as well. You even had some players who intentionally downloaded things in the background to cause dropped/delayed packets so that they would warp.
Eventually the net-code in CPM got so good the community even had cross-Atlantic competitions. The team I played in had 100-120 ping vs west-coast American teams on NYC servers and it was actually playable to the point where you could have fairly fair fights with them. Except against Team Abuse... :p. Man what a schooling in team-play and lock-downs of maps they gave my team.
That sits well with me. Upvoted! It is true that "clunkiness" is a valid complaint, but I've always attributed it to a combination of all latency-inducing factors.
So the user experience delay is the sum of three different sub-delays. Firefox wants to improve it by reducing the middle part.
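To make the arithmetic of that point concrete: here is a toy sketch, where the split into input, application-processing, and display latency is my own assumption (the parent comment doesn't name the three parts), and the numbers are made up for illustration:

```python
# Toy model of perceived UI latency as a sum of three sub-delays.
# The three-part split and all numbers below are illustrative assumptions,
# not figures published by Mozilla.

def perceived_delay_ms(input_ms, processing_ms, display_ms):
    """Total user-visible delay from action to on-screen response."""
    return input_ms + processing_ms + display_ms

before = perceived_delay_ms(8, 60, 17)  # slow middle (application) part
after = perceived_delay_ms(8, 20, 17)   # same endpoints, faster processing

assert before > 50   # over the ~50 ms "not instant" threshold from upthread
assert after <= 50   # shrinking only the middle part gets under it
```

The point is that even if the first and last parts are fixed costs (hardware and display), cutting the middle part alone can move the total across the perceptual threshold.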
For example, of my two monitors, one is lagged by one frame. I can see this easily by taking a white window against a black desktop, moving it so that it straddles both monitors, and moving it rapidly up and down: the monitor that's a frame slower makes the window look a bit like it's made of rubber, that it's bent away from the direction of motion. And that's just a 17ms difference in two events that are supposed to be simultaneous, an order of magnitude less than your factoid of 200ms.
The polish focuses on improving interaction times for all user interactions. Their goal, as previously mentioned, is to get it down to 50ms from user action to visible response. They also say they have 50 common usage paths (identified from testing) that need to be improved.
This looks like a reasonable, though ambitious, plan for Mozilla, and if they stick to it they will remain competitive and relevant.
You're kidding, right? Do they REALLY not support the fastest growing OS version + platform combo?
OEMs are finally paying attention to 64-bit computing, and Windows 7 has been replacing Vista like wildfire.... and FF is still two releases away from official Windows 7 64-bit support?
Oops. Seems stupid Aol/Weblogs/Switched/DownloadSquad (I don't even know what it's called any more!) didn't bother to fact-check; they're actually referring to 64-bit native builds of Firefox.
So I'm really being downvoted because the original story got it wrong? I guess that's one way to take out your anger on inaccurate writeups...
Nope, they're just misinformed; they read the roadmap incorrectly. What the Firefox devs mean is a 64-bit version of Firefox (i.e. an x64 process). Right now Firefox runs perfectly on both Windows 7 and 64-bit versions of Windows.
The usual provisos apply, of course: these are nightly builds and come with no guarantees, nor should they be taken as an indication of the quality of the eventual release.
Can you please provide a link to that, I'd like to test/use it.
There are 64-bit nightly builds of FF4, but _only_ nightlies:
This suggests that there won't be an official 64-bit build of Firefox 4 -- we'll have to wait for Firefox 5.
As some people comment below me, only some parts of it are working in the 64-bit space (and most notably, the Flash 64-bit plug-in is still beta/alpha.)
I've been using Chrome / Chromium exclusively since its release, primarily due to the simple, uncluttered UI and blazing speed. It was a breath of fresh air, and I switched readily.
But something changed with the recent Firefox 4 betas. The browser feels fast, again, and the UI seems more reasonable. I find Panorama useful. And slowly, Firefox is winning me back. In time, I think it may win back other developers, too. And if it does, I could easily see a fairly stable equilibrium developing between the two browsers.
Firefox needed the competition from Chrome, and in many ways, it's starting to meet that challenge.
Presumably, the course of Firefox will run a little less bloody. =)
Mozilla has been talking about faster iterations for a while now. 3.6 was, I think, supposed to be the start of that initiative; 3.5 was June 2009, and 3.6 was January 2010, even after slipping by a month or two.
You could argue that the platform version change from 1.9.1 to 1.9.2 isn't that big of a difference compared to the jump from 1.9.2 to 2.0, but you have to keep in mind that the decision to dub the-platform-behind-Firefox-4 as "Gecko 2" wasn't even made until over the summer.
This doesn't however mean that other browsers too have meaningless version numbers.
Meaning that since people will learn that major-version upgrades constantly break things, they will stop upgrading and consider other browsers, like Chrome.
Do Chrome major version number changes break their plugins?
I don't understand this rush to double-digit version numbers.
Do they think because IE is at version 9 that newbie users will also want their other browser to be at version 9 ?