They aren't abandoning it, they're just optimizing for the use case. In fact, Silverlight developers should be happy: with some simple namespace changes, they demonstrated converting a Silverlight program to a Metro app on stage in the keynote. Sure, your program may not run in the browser unless the user swaps to full desktop mode, but it will be easy for you to make it a proper installed application if you desire.
It's a good decision in my book, and will result in a better tablet experience. Frankly, all the misinformation and hyperbole around this is getting exhausting. They've really not pulled the rug out from under anyone at all.
That's not what the article says:
"The Metro-style browser is the full screen, chromeless implementation of Internet Explorer that most people are expected to use with Windows 8."
I have also read this elsewhere. Do you have a cite to back up your claim that Metro is tablet-only?
With relatively minor changes we'll be able to run our Silverlight or WPF XAML in Metro and _continue_ to be able to run it in the desktop browser (via the Silverlight plugin on Windows or OS X) or out-of-browser as a desktop app (on Windows or OS X).
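To sketch what "relatively minor changes" tends to mean here (my own illustration, not anything demonstrated in the keynote): the XAML types mostly keep their names, and the retargeting is largely a matter of swapping namespaces.

```
// Silverlight / WPF code-behind typically starts with:
using System.Windows;
using System.Windows.Controls;

// The same code retargeted at WinRT (Metro) mostly swaps in the new
// Windows.UI.Xaml.* namespaces; Button, TextBlock, Grid, etc. keep
// their names:
//   using Windows.UI.Xaml;
//   using Windows.UI.Xaml.Controls;
```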
To think that this is a deprecation is the contortion.
You also cannot share assemblies directly, which makes code reuse a big pain. There's no word on whether WinRT will support normal assemblies, but its runtime is stripped down just as Silverlight's is, so probably not.
With (reputed) quad-core and 2560x1600, there's sufficient power.
Their technologies are invariably doomed to obsolescence within about 2-3 years of their introduction. Those that the web doesn't make obsolete are eventually thrown under the bus by Microsoft themselves in about the same time frame.
First, despite your proclamation, I've almost never felt left behind by Microsoft. Even as someone who has done WPF/Silverlight work, I actually find Windows 8 and WinRT deeply comforting. It uses the languages I know. The main difference is it adds some new namespaces for me to use. I think people not familiar with WPF/SL think that those who know it are now left with a dead skill set, but the reality is that everything you know about it is still applicable.
There are few technologies that MS has deprecated that I haven't generally been in violent agreement with. In fact, sitting here I can't think of any. There are several that I think they should have never introduced, but I find it hard to be upset with a company that tries things, some of which I think aren't necessarily the best, and moves on.
I'm not sure if this "no plug-ins" idea is the best, from an end user perspective, but as a developer I'm perfectly happy with it. I've already started writing some Silverlight... I mean Xaml WinRT apps for Win8.
Our company's codebase is written in C#. Some domain specific language parsers were implemented using F# (with Fparsec) and the framework can be extended using IronPython. At some points, we have IronPython code being called from F# code built on a framework written in C#. And it works without specifically having to handle it.
Visual Studio's debugger can seamlessly step between code written in C# and F#.
There are lots of things I miss about Unix dev tools but the Microsoft technology stack is not an unreasonable choice to make.
As an outsider (the last time I developed on a Windows platform was 2001) I feel like I see different acronyms all the time in the Windows app-building world, each with a lifetime of about three years.
If I'm in a bookstore, the cover of the latest Windows related magazines always seems to be about some dramatically new way to do some relatively boring thing, and how you should abandon everything you're doing and use this stuff instead.
But perhaps it is not fair to judge the entire MS platform by these more frothy aspects.
Microsoft (like any other major software vendor) has popular products and not so popular products. The popular ones survive; the unpopular ones don't die off, they end up still being supported, just not actively developed.
See MSFT's data access libraries: ADO, ADO.NET, Entity Framework, LINQ to SQL, and Zeus knows what else now.
You also have COM+. While it's an overcomplicated mess, and not as popular as it once was, it's still supported.
I am a Linux guy. I love Python, Emacs and open-source and I hate working within the Microsoft developer ecosystem.
With that said, the .NET VM is light-years ahead of anything that anyone else is doing. And they're investing in dynamic languages and tooling for those languages. And they build stuff like F#, which is awesome.
Even C# rescues ex-Java people by giving them lambda expressions and local type inference. This is something that should be celebrated, like Java's garbage collection of yesteryear.
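To make that concrete, here's a minimal sketch of the two features in question (my own example, not from the thread): a lambda expression and an implicitly typed (`var`) local, neither of which Java had at the time.

```csharp
using System;
using System.Linq;

class Demo
{
    static void Main()
    {
        // Local type inference: the compiler infers int[] for 'numbers'.
        var numbers = new[] { 1, 2, 3, 4, 5 };

        // A lambda expression passed to a LINQ operator.
        var evens = numbers.Where(n => n % 2 == 0);

        Console.WriteLine(string.Join(",", evens)); // prints 2,4
    }
}
```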
So yeah, they have a lot of political issues and their mainstream development community sucks, but make no mistake: Microsoft has some phenomenal technology.
It sure does, but Microsoft doesn't build technology to sell that technology/tools/programming ecosystem.
They build it so that users will buy the OS on which those apps can run. The more people build using their tools, the more OS licenses they can sell. In other words, the technology is bait through which users can be fished into buying their OS. This is a totally different thing compared to Python, Perl and other open-source tools, which have dedicated teams whose primary job is to build and share them to serve mutual interests.
When you find companies dropping a particular technology for something else, it's just that they've figured out a better bait to catch more fish.
And they would probably have a full development stack running on Macs and other Unixes (even Linux).
If the development tools division did not have the burden of making products that sell Windows and Office licenses, they would be free to do whatever makes sense. Now they aren't.
Well, that's what it's all about. Those programming tools aren't MS's core business; their core business is the Windows OS and Office software.
The reason they are feeling the heat now is that the computing paradigm has changed too quickly for them to adapt. More and more computing is going to happen on mobile devices, and users will look to those devices to access the network and do the rest of their work on servers: search, games, other apps.
Microsoft's core revenue comes from selling an operating system. They are not very comfortable with the idea that software can be given away for free, or at a lower price, just to sell hardware or to pull traffic to your servers to sell ads.
Whereas the momentum is very rapidly shifting to the latter paradigm. That is why Bing is so crucial, even if they have to run it at a loss currently. They want some search backend to be available when their mobile OS comes to full force in the market, because by then they will be fighting a tough price war selling their OS and will have to depend on the Google model.
Microsoft is no more a technology-tools company (because of SQL Server, C#, etc.) than Google is because of BigTable, Go, etc.
Although they can sell them, that's not their primary business, and it's not what they are meant for.
But their selling SQL Server, C# et al. and other programming tools has nothing to do with competing with Oracle or MySQL. Those exist to ensure enough developers are trained to use them so that they can sell more OS licenses later.
As you say, C# developers are a lot happier than Java developers.
> .NET VM is light-years ahead of anything that anyone else is doing
I'm comparing it to the JVM here mostly because it is oranges to oranges. And as yet another orange, .NET leaves much to be desired.
> they're investing in dynamic languages
And IronRuby could never match JRuby in completeness or performance. Maybe that's because the JRuby developers are awesome, I don't know -- but there's no denying that the Iron* languages are a half-assed effort.
> Even C# rescues ex-Java people by giving them lambda expressions and local type inference
Scala does have such things, and personally I'm happy that people finally discovered technology from the '80s.
I do like what they've done with expression trees and Linq2DB, but its performance is not something to brag about (and this is a PITA since the biggest reason why I still use less-dynamic-than-Python languages is for performance reasons).
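For anyone who hasn't used them, a minimal sketch of what "expression trees" means here (my own example; the `Linq2DB` performance claim above is the commenter's, not demonstrated by this code): the lambda is captured as inspectable data rather than compiled code, which is what lets LINQ providers translate it to SQL.

```csharp
using System;
using System.Linq.Expressions;

class ExpressionDemo
{
    static void Main()
    {
        // The lambda is stored as a data structure (an expression tree),
        // not as a compiled delegate.
        Expression<Func<int, bool>> isEven = n => n % 2 == 0;

        // A LINQ provider can walk this tree and translate it (e.g. to SQL).
        // Here we just compile it back into a delegate and invoke it.
        Func<int, bool> compiled = isEven.Compile();
        Console.WriteLine(compiled(4)); // prints True
    }
}
```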
> Java's garbage collection of yesteryear
> And they build stuff like F#, which is awesome
Or when some other guy invents a usable web framework that people actually like, and then all of a sudden people start making clones, Microsoft included (Rails).
I actually like both C# and F# btw.
It's still a bureaucratic language, just like Java, but I like the low-level access that it provides. P/Invoke is a lot better than Java JNI -- but that's the only thing that really bothers me about Java.
And F# is barely usable on top of Mono, speaking of which ...
> I am a Linux guy
The fact that .NET is only usable on Windows has been its major flaw for me ever since I heard about it. It might as well be the most awesome technology in the world (although that would be a stretch of imagination), but it's completely useless if I can't run it wherever I want.
> No it isn't. Instead I would say that the JVM, which took ages to release version 7 (and we'll probably wait another 2 years for version 8), is light-years ahead of .NET
> while the DLR is somewhere in limbo
> Btw, speaking of Java's garbage collector, it's so kick ass that no other VM comes close.
I didn't downvote, but saying stuff like that without a single bit of concrete information to back up those arguments is pretty useless.
Then you have:
> Personally, I consider it a lot more awesome when some dude I never heard of appears from nowhere with a usable LISP implementation (Clojure).
Which disregards that non-Microsoft-employed people create languages for .NET too. Nemerle is a great example, and that's how the Iron projects initially started before MS got involved.
> The fact that .NET is only usable on Windows has been its major flaw for me ever since I heard about it. It might as well be the most awesome technology in the world (although that would be a stretch of imagination), but it's completely useless if I can't run it wherever I want.
Which is just almost ignorant. I used to be employed writing ASP.NET web apps that ran on Linux servers before I switched jobs. Then you have Xamarin, which lets you develop .NET apps for iOS and Android. Needless to say, you get that ability automatically for Windows phones, etc. So all in all, .NET's reach seems pretty impressive to me.
The Open Source community has never abandoned any technology? Really? For most people, most of the time, "you have the source code, maintain it yourself" is no more feasible than "start a company and buy that product from MS".
> Their technologies are invariably doomed to obsolescence within about 2-3 years of their introduction
That, quite simply, is FUD.
It's an apples to oranges comparison. It doesn't matter if Silverlight is popular (it's not). Microsoft decided it would not continue developing it, and that's it.
OTOH, Oracle may try as hard as they want to terminate MySQL. That won't work. We already have MariaDB and Drizzle to take its place.
Second example: Zope v2. When Zope Corp decided to invest their resources on Zope v3, the community that relied on Zope v2 (people who maintain Plone, ERP5, Zenoss and others) continued to evolve Zope v2, sometimes backporting and bridging between it and Zope v3. Zope v2 is still alive and well and, IMHO, kicks ass.
The only way to kill an open-source project is not to use it. When nobody does, it's dead. And even then, with the source code available, it can be brought back to life as soon as someone wants it.
Right back at you.
On the one hand, you're saying that once MS stops actively developing a product, it's over. But on the other hand, you say that anybody can pick up any old open source software that they're interested in?
Why isn't the same logic true for the MS platform? I mean, my copy of the .NET 4 installer is just fine. Even if MS decided that .NET would no longer be developed, what's stopping me from using what's already out there?
Because they won't publish the source code of discontinued technologies so that others can continue their development.
> my copy of the .NET 4 installer is just fine. Even if MS decided that .NET would no longer be developed, what's stopping me from using what's already out there?
I see you confuse being available (as in "I kept a copy around") with being alive and actively developed. You can't continue to evolve Windows 95 by yourself. That's not true with each and every open-source product or project out there.
BTW, try running your .NET 1.0 installer on an ARM-based Windows 8 machine and tell me if it works.
Nor can most organizations continue to evolve open source project X.
> try running your .NET 1.0 installer on an ARM-based Windows 8 machine and tell me if it works.
Same is true for OpenOffice or whatever.
The fact is that the vast majority of users are not capable of contributing to open source code, let alone a complete, regression-tested port to another platform. For practical purposes, there is precious little difference between FOSS and proprietary software for most users: either what is currently available works for them, or it doesn't.
Can you name any project X you feel fits your hypothesis?
> The fact is that the vast majority of users are not capable of contributing to open source code
At least they can pay someone capable to do it. Can you find someone capable of evolving Windows 95? No, you can't. Source-code availability is critical for this.
If your company, for instance, depends on PHP3 (or Perl 3, or Python 1 or something similar) you'll find readily available talent that can either port your application to a modern version of whatever it needs, or add whatever functionality you require to the ancient product you depend on.
> Can you name any project X you feel fits your hypothesis?
That seems an odd question to me. Pick a random organization that is using some FOSS package. I bet you that this organization is not going to be prepared to evolve that system, either due to skills or to other resource issues.
How many companies out there are going to be able to port Python or LibreOffice or what-have-you to a new platform? Most don't have the skillset. But even if they did, they won't have the resources to keep the developer working on that rather than the internal projects. Sure, there are exceptions (and I'm glad there are), but those are just that: exceptions.
Even if the theoretical possibility is there, practical considerations don't allow it.
Most companies do not have the resources to contribute to FOSS, and even fewer companies actually do it.
I see you confuse source being available with being alive and actively developed. Sure, in theory, with source available it's possible that someone picks it up and keeps it alive. But you often need a critical mass for that. MySQL may go on forever, but I can walk through the GitHub or SourceForge cemetery and find many dead bodies that will likely never be resurrected.
BTW, for many companies, you can often get the code in escrow. So if they discontinue or stop supporting the product, you get the source code.
That's not the point - they won't be resurrected because nobody uses them anymore.
> BTW, for many companies, you can often get the code in escrow.
On one side you have a mechanism (open-source) that allows anyone interested to pool resources to maintain software after its previous maintainers abandon it. On the other, you have a mechanism under which you can, sometimes, get code that may or may not have been carefully maintained, and may or may not have horrendous bugs you'll only become aware of when you actually exercise the contractual clause that allows you to get the source.
So, can we get the source for Windows 95? It's discontinued, right?
This benefit is a lot more hypothetical than real. Practically speaking you can't continue to evolve Linux 1.2.0 (the version in the Win95 timeframe) yourself either. That's the fulltime job of hundreds of people. In both situations you're relying on those behind the project to evolve them if you want to use them as a foundation of other software.
And as the parent poster mentioned, in the real world you're far less likely to have had to evolve your own software to deal with changes in the underlying OS if you developed for Windows. Win32 apps that use the APIs correctly still work 16 years later. Good luck getting your coff/old-glibc/old-UI-lib Linux app from '95 running on modern Linux...
It also helps that so much of the mission-critical software you run on Linux is also open-source and can be easily recompiled. And you only need to recompile when the software can't run directly: I have Python code running in production that was written originally for Python 1.5.2. Oddly enough, lots of mission-critical applications are written in Python, Perl, PHP and various flavors of shell that run directly on more or less any *nix machine.
The fact is that dead ends in free software are so rare and so easy to get out of, this situation only rarely arises.
When it does, you can always weigh your options and, sometimes, opt to go with a fork. Like I said, it has happened in the past.
You've come around to actually making my argument! :)
As I said, as long as I have my installer for .Net, the software I write for that platform will still work.
Not even close. Unless you can recompile your .NET installer for a new computer architecture. If you can't, it's as dead as my Apple II word processor.
I can run that Python code on a SPARC box I have. Can you do the same with your .NET installer?
Again, since you seem not to understand, there is a difference between being able to install your jurassic .net framework on a current version of Windows and having its source code, something that would allow you not only to run it on current computers but also on almost any future computer that may possibly exist. Forever.
You're simply assuming that someone will continue to port Python 2 ad infinitum. Granted there's no reason to suspect that they will not (Except that at some point, Python 3 really catches on, and decades down the road people stop caring about 2. At that point you could port an interpreter -- but you won't, which is my point).
But there's also no reason to expect that I'll be unable to run my MS-based code. I mean, MS has ensured compatibility for, say, ancient Visual Basic code for forever. Real world experience shows us that there's no reason to be concerned about future compatibility. You're right that in principle it might be left behind, but experience shows us that this doesn't happen. The code I wrote 16 years ago for Win95 (and much earlier, for that matter) still runs.
That's the philosophical argument. The practical argument is Mono. It's open source, so anything good you can say about FOSS applies equally to .Net. It has the backing of both open source and corporate muscle. (although I have to grant that Mono doesn't cover 100% of the platform, I can plan my system to stay within those bounds)
Why is it that you assume that MS will dump backwards compatibility, but you're also willing to assume that someone will be interested in keeping an old FOSS platform alive (and that the tools to do so will still be functional)? It seems that in this argument, you want to have your cake and eat it, too.
OTOH, I still find Mono a bit risky: I think Microsoft is capable of finding a way to kill it if it becomes too popular. They can't avoid it. It's their competitive nature.
With Open Source you can always modify it to fit your needs, long after the original creators stop maintaining it.
I suggest you actually learn the difference between proprietary and open source projects before attempting to continue in this discussion.
But if you have the source, you can keep it going forever, rebuild a community of committers, add bug fixes, security fixes, new features.
Move to new architectures, add new abstractions, build your business on it without worrying that someone will arbitrarily try to kill it off...
Is it FUD? More like a minor exaggeration, but true in spirit.
With Silverlight, they're going to suffer the same fate IBM did with OS/2. First, they don't know what they want to do with it; it was a "me too" product, built because they thought Adobe was getting too powerful, or because they were worried about the DOA JavaFX. It might live, it might die, they don't know. Second, they don't know what to tell developers who have invested in it. Those people don't retarget to your other technologies; many of them leave after getting burned like that. Mixed messages come out: "we're completely committed," but their actions might not suggest that. Third, they retarget it and say something like "Silverlight is really for x, y, z applications," maybe throwing in a little "we never really intended for it to replace the web or Flash." Ultimately, they pull the plug. Any other company might attempt to sell the technology to a third party to try to breathe life into it, or to make maintenance money from the few true believers left; I've not known MS to do that, and even if they did, the damage is done once the uncertainty starts.
Just the fact that people are asking if Silverlight is dead pretty much dooms it. MS has two options: let it die, or show some kind of overwhelming support and double down. Anybody know how many companies have really bought into it?
And who said anything about open source? I'm not sure I agree that things are as dire, but even if they were that wouldn't affect the argument about MS.
For me, the real question is, why are business still willing to make investments in the latest Microsoft blessed framework?
Actually, I think since Longhorn there are signs that businesses are a bit more sceptical about these bold technology announcements from Microsoft. They used to be able to pretty much control the weather, but these days they have a hard time getting people to use things like Silverlight, WPF or XNA.
Their dominance is still strong in the enterprise though. SharePoint does not deserve the market share it has. But business are hooked on using the Microsoft tool for whatever it is they want to do. Need an intranet? Well Gartner says we need an enterprise CMS... so we'll hire our preferred Microsoft Gold Partner and give them a budget of 200k to build us a clustered SQL Server environment on which we can run some load balanced SharePoint app servers, all for a two step content approval workflow and a few thousand hits per day. Everyone in that process is too busy making a living to question the "we'll just buy whatever Microsoft has for X" assumption.
And then they talk of TCO and missed profits: they throw money down the drain; what do they expect?
For me, the real question is why are they willing to make bad investments in technology. SharePoint is designed to make it easy to set up a cluster. One of our clients has thousands of users on a single SharePoint front end, because there's "something wrong" with their load balancer. So they don't even do round-robin DNS. Surprise - it's slow!
In other words, they don't even have the ability to run SharePoint well, and you want them to put something together themselves? The SharePoint admin is already outsourced, so it seems like they don't even have the ability to identify quality outsourcing. How are they supposed to figure out a better solution?
Every single day I have to deal with vendors pushing bad technology on the Department of Education because they are trying to make a buck. In cases like this, it is not the business's fault. They saw something they thought was cool, some vendor told them "sure, we can do that," and a few hundred thousand dollars later, guess what... it doesn't work.
I have had the same observation, but it seems to me that companies are stagnating on technology rather than innovating on different platforms. They still use Microsoft, but they're using increasingly old Microsoft.
Others in this thread have pointed out that MS has other technologies that are very well supported for long periods of time -- the core C# stuff, and in their OS releases they are fanatic about never forcing you to rewrite your app, sometimes going to great lengths to be bug-for-bug-compatible.
On the other hand, I've taken applications written for Windows 95 and compiled them unmodified on OSes 15 years newer.
That said, those who invested heavily in Winforms in 2001 and/or XAML in 2005 are now of course getting shafted - again. And I'm not defending this treadmill that MS keeps people on - it could be a lot more stable. But compared to the alternatives, it actually is quite stable for those who don't always live on the bleeding edge.
Keep in mind the headline is sensationalist; the article itself says that Silverlight code is easily run as an actual Metro application. It's just that to get it in the browser, you'll need to switch to 'full' desktop mode.
My understanding was there was a conversion involved. If so I'm skeptical it'll be as easy, especially if you have a non-trivial application using a variety of existing libraries/frameworks.
And the title says Silverlight has been abandoned, but the same skills have been built right into Win8. You should be able to port any SL apps across to WinRT reasonably quickly. What has been killed is any notion of XAML being cross-platform, but let's face it: how many Mac owners wanted/needed to install SL?
In 1993, I'd write a Windows 3.1 app with Win16, a Mac app with the Mac Toolbox, and a Linux app with one of Athena, Xlib, or (I don't remember if this was around at the time) Motif.
In 1996, I would have made some trivial modifications to my Win16 app to make a Win32 app for Windows NT or 95 (and Microsoft made this a lot easier with things like message crackers and typedefs); my Mac app would need a recompile to target PowerPC, but otherwise be fine; and my Linux app still requires no changes either. So far, so good.
By 2001, my Windows app still works perfectly. It'll look dated unless I make some changes to the manifest to indicate that I'm ready for the new common controls, but this is pretty straightforward to do; it's not a big rewrite. I can start heavily using COM without rewriting my whole code-base. My Mac app will have required a port to Carbon--significantly more work than the Win16 to Win32 migration, but not horrible, and my app's now native for the upcoming OS X.
But Linux? Wow. I mean, I guess I can keep running my app--it executes--but no one uses Athena, no one uses raw Xlib. I really need to port to Gtk+ 1.2 or Qt 2.0 if I want to look anything vaguely close to acceptable. And this is a major rewrite for me to undertake; they look very little like their predecessors. Yes, the toolkits are open-source, and yes, they still work, but let's be real here: I need to rewrite.
Let's move forward to 2005. My Win32 app still works, without changes. Granted, I might want to start using the nascent .NET, but Managed C++ and COM interop makes that pretty easy to accomplish piecemeal. On the Mac side, the writing's getting on the wall that Carbon's going to die, so I'm going to want to start porting to Cocoa, and that amounts to basically a full rewrite. I'm also going to want to start porting my app to Intel.
And on the Linux side, I've got to move to Gtk+ 2.0 or Qt 4.0. And that's again a big deal. If you weren't using Gtk+ or Qt back then, you should know that it was (at least in my opinion) a relatively large amount of work to upgrade. Not a full rewrite, mind you, but Qt redid piles of classes, methods, and hierarchy, and Gtk+ introduced glib, reworked signalling, themes, and tons else I'm forgetting. It was a lot of work.
Jump to 2010. My Win32 app fucking still works. Managed C++ is really mature right now. And if I wrote C# or VB or anything in WinForms, it'll also still work, and it can use COM objects I export from my old C++ app, and expose COM objects to my old C++ app. My Mac app, rewritten in Cocoa, still works fine, as long as I ported it to Intel, too; otherwise, users can't easily run it without installing Rosetta, an automated, if separate, install. I'm also going to be seriously thinking about trying to get it also running on Cocoa Touch for the i* devices, which have a radically different GUI toolkit. On the Linux side, the GUI of my app should still be in fairly good shape, although G-d help me if I did anything involving sound or 3D effects, since that API's gone through many wholesale changes in the last few years. And if I didn't write plain Qt and Gtk, but instead using GNOME and KDE, then...wow. I've had a lot of pain. I went through to KDE 4, which was a huge deal.
Now, we're leaving out that, in the 2005-2010 timeframe, Microsoft introduced WPF and Silverlight. So let's assume, in addition to my old C++ app that still works, I've now also got a few Silverlight and WPF apps I've written.
Now it's 2011. What happens on these three platforms?
On the Mac side, I'm in the same place: I need to be really thinking about Cocoa Touch, but my existing Cocoa app still works fine. If I didn't port to Intel, my app is dead, but that wasn't hard, so let's assume I did it.
On the Linux side, having gone through the KDE 4 transition, I'm now dealing with the GNOME 3 and/or Ubuntu-being-a-weirdo transition.
On the Windows side, in Windows 8, my old app still runs. My Silverlight apps and my WinForms apps and my WPF apps still run. The bad news is that they run in the legacy Windows desktop. So now, what has to happen to fix that?
My C++ app's GUI is fucked. Total rewrite. Really total, even worse than the Gtk+ 1.2 to 2.0 transition. WinForms, too. They're not coming to Metro.
But my Silverlight and WPF apps? Are you serious? It's very little work to get these running on Metro. Metro uses XAML really heavily, and has a very Silverlight-like view of the .NET libraries, actually. Most of the changes I'm going to make are mechanical and look-and-feel. This isn't trivial, but it's at worst a lot easier (IMHO) than the movement from Gtk 1.2 to 2.0 or Qt 3 to 4.
So you know what? At the end of the day, Microsoft looks really good to me. Great binary backwards compatibility, and really good toolkit compatibility. They've given me a really clean migration path from Silverlight/WPF to Metro that most definitely does not require a full rewrite. This is better than both other platforms listed above.
Me? I'll take it.
I'm in awe of the backward-compatibility efforts that Raymond Chen and crew have put forth and explained in their blogs. No doubt things could be much worse. But I'm still tired of fighting Windows APIs, both as a developer and as an end user.
However, Microsoft is not the only one playing games with developers. Just take a look at Google and their API deprecations and game-rule changes for App Engine.
No. I'll still have to venture off and find an alternative or just deal with it.
Or go nowhere, if you think the software does whatever you need. When you have lock-in, it's always the vendor's decision and if you don't like where it's going, you have no choice. When you have open-source, the decision whether you continue with the product or find an alternative is yours - you are free to do whatever you want.
Besides that, folks can continue to leverage their XAML (for UI) and C#/VB skills for building even new Metro style apps.
XNA survives for now (it's barely 4 years old, so it still has time to die), but that might be because it's part of the XDK (will it survive the transition to MS's next console?). Apart from .Net itself there's little which endured, and the more these technologies were initially heralded as saviors, the earlier they got shitcanned.
A past push that sticks out most to me is the "3 pillars of Longhorn (Vista)". The vast majority of those pillars (WPF/WCF/WinFS/InfoCard) are now useless.
We should all remember Vista/Longhorn as something that had an equal amount of marketing, but failed horribly. Anyone remember the Windows Vista UI Guidelines? A search box and back button in every application? The sidebar? Have a graphic designer design your application's UI in XAML (oops, they're still trying to push that one)? Maybe Windows 8's new stuff won't fail, but we can't tell at this point. This is also the same thing I say about Windows Phone, Azure and even something as heavily supported as the Entity Framework. You just never know when funding will be cut or if the product will die.
For me, with all this in mind, I do the bare minimum just-in-time learning to keep up with the day job, especially as it comes to frameworks and new product pushes. Fundamentals such as OO concepts or any kind of universal, timeless concepts, I'll study. But I won't bother with buying the newest WinRT book when it comes out because hey, why bother. Even if I do study up on one of these topics, I'll do it knowing that the platform may die at any moment. I'm sure WebOS folk or niche phone developers (Symbian OS?) know the feeling.
The vast majority of .NET developers effectively do the same regarding learning Microsoft frameworks; they're just quieter about it, or less honest about how little proactive studying they do. All .NET developers are 'behind the curve', at least as the curve is re-defined with every new marketing push.
With some work, old DOS and early Windows games _still run_ on a modern OS.
Who else does that?
You know how new technology should emerge? Third parties should push it, not wait for providers to allow it.
If you want to use CSS3 features, even those that are widely supported across browsers, you often have to code up the same effect 3/4/5 times with slightly different syntaxes.
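As a concrete illustration (the selector and colours here are made up), a single gradient in this era had to be repeated once per rendering engine, sometimes with genuinely different syntax:

```css
/* One background gradient, hand-written per engine (illustrative values) */
.header {
  background: #ccc;                                     /* no-gradient fallback */
  background: -webkit-gradient(linear, left top, left bottom,
                               from(#fff), to(#ccc));   /* old WebKit */
  background: -webkit-linear-gradient(top, #fff, #ccc); /* newer WebKit */
  background: -moz-linear-gradient(top, #fff, #ccc);    /* Gecko */
  background: -ms-linear-gradient(top, #fff, #ccc);     /* IE10 previews */
  background: -o-linear-gradient(top, #fff, #ccc);      /* Opera */
  background: linear-gradient(to bottom, #fff, #ccc);   /* standard syntax */
}
```

Note that the oldest WebKit form doesn't just add a prefix; the argument structure itself is different.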
If you want to use HTML5 elements, even just the simple semantic ones, you need to incorporate backward compatibility fixes for IE before version 9 (which is a very substantial chunk of the browsing public, since WinXP only goes up to IE8).
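The usual workaround for old IE (the technique popularised as the "HTML5 shiv") is a conditional comment that creates each new element once via script, which makes the pre-IE9 parser accept and style it. A minimal sketch:

```html
<!--[if lt IE 9]>
<script>
  // IE < 9 won't apply CSS to elements it doesn't recognize unless
  // they have first been created once via document.createElement.
  var tags = 'article aside figure figcaption footer header nav section'.split(' ');
  for (var i = 0; i < tags.length; i++) {
    document.createElement(tags[i]);
  }
</script>
<![endif]-->
```

In practice most people include the maintained html5shiv script rather than rolling their own, but the mechanism is the same.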
If you want to use the more interactive HTML5 elements, it's even worse. HTML5 video won't be a credible successor to Flash video until there is a standardised, high quality a/v format that all browsers support. Even then, the simple fact is that Flash video players offer way more powerful customisation and some features involving content streaming and protection that are still commercially relevant even if the "everything should be free" crowd want to bury their heads in the sand.
We have SVG and HTML5 Canvas technology to do drawing without relying on plug-ins these days, except that when you actually try to use them, you find that they aren't always supported to the same extent on all popular browsers, particularly the mobile ones where avoiding the weight of downloading a plug-in seems most advantageous of all.
Did I mention web fonts? (Not that it matters until the on-demand font services realise they aren't special and most of us are never going to rent a font instead of paying one-time fees like we do for stock images, blog templates, etc. Or until the rendering on some platforms stops making most of the fonts actually available look like the rear end of an animal with stomach problems. But hopefully those issues will pass, at which point having a common font format would be handy too.)
All the while, Mozilla and Google are having a pissing match to see who can release some bleeding edge thing faster than the other, but since no-one else supports it the same way yet, it's basically worthless. Meanwhile, they keep pushing releases every few weeks, and they do break things, often those very HTML5 and CSS3 features we're all meant to love that are supposedly driving web technology development. (Google screwed up rounded corners for several successive versions. Firefox still doesn't get basic font rendering right on a lot of platforms, and the current version can't even look up locally installed fonts properly. Don't even start trying to do tricky things like using Java applets -- and before anyone chimes in about obsolete technologies, remember that there was a big push to improve support for applets not so long ago, and that many state-of-the-art programming languages run on the JVM.)
The recurring theme in all of this, of course, is that you can't write one thing that runs reliably everywhere today. Until you can, any so-called standards will just be marketing propaganda for one browser team or another.
Or you can wait a little until the standard goes through and the syntaxes are unified.
The target is not the browsing public -- it is your audience. For instance, a site for hackers really doesn't need to support IE<9.
I agree with the <video> problem for sure. There are philosophically and commercially opposed interests involved there.
Firefox still doesn't get basic font rendering right on a lot of platforms, and the current version can't even look up locally installed fonts properly
Could you cite or give examples of these?
Obviously things are still in flux, but we've all learned to deal with bugs in software. We know our own code isn't immune to them, so I think it's too much to expect the platform we're building on to be immune to them either. I don't think bugs reflect negatively on the web standardization process, though.
The trouble is, the current standardisation processes are far too slow to be practically useful, while the breakneck pace of experimental feature development is too fast to keep up, so there is no useful definition of "a little" in your suggestion that we can use for planning new web projects and making decisions about which technologies we will rely on.
> There are philosophically and commercially opposed interests involved there.
It also doesn't help that both sides throw out FUD as if it's going out of fashion, and that some of the (acknowledged) patent encumbered formats are technically far superior to the (possibly) unencumbered ones. That means until either the software patent madness is fixed or organisations like Mozilla start putting up hard cash, their integrated video functionality will never be as good as what you can get in something like IE (or, apparently, Chrome, since Google seem to have quietly dropped their policy of discontinuing support for H.264).
> Could you cite or give examples of these [font bugs in Firefox]?
On Windows systems, the kerning is very odd in a lot of fonts since they moved to the new rendering engine a while back. The spacing following a capital T is usually way too tight, and in some fonts there are a few other examples as well. This can render pages completely illegible at typical body text sizes.
Also, take a Windows 7 computer with a few Adobe professional OpenType fonts installed locally -- probably some that come with Creative Suite would do -- and just try selecting those fonts using CSS in recent versions of Firefox. It simply doesn't work and falls back to the next font family in the list, though the same page will find the fonts and render quite happily in other browsers on Windows 7, or in the same version of Firefox on Windows XP for that matter.
Most currently available web fonts also seem to look terrible (poorly hinted, terrible aliasing) on Windows XP in Firefox, but since they don't look great in other browsers either and they look better in Firefox on Windows 7, I'm inclined to point the finger more at Windows XP than Firefox in that case.
If you're starting a new web project today, you look at what technologies the browsers you're targeting already support with unmodified syntax. (I'd argue that that development model's a poor fit for the web, but that's a different question.)
That is an artifact of ClearType subpixel positioning. From Firefox 7 or 8 onwards (I don't remember which), subpixel positioning is turned off for several commonly-used fonts, listed in gfx.font_rendering.cleartype_params.force_gdi_classic_for_families.
Also, take a Windows 7 computer with a few Adobe professional OpenType fonts installed locally -- probably some that come with Creative Suite would do -- and just try selecting those fonts using CSS in recent versions of Firefox.
I have a feeling you're choosing the font incorrectly. Note that with DirectWrite, any weight modifiers ("semibold", "black", "light") aren't part of the font name any more. Instead you need to specify weight using the font-weight property. So instead of saying Arial Black as you would normally, you need to say Arial with weight 900: http://www.neowin.net/forum/topic/971376-firefox-displays-my...
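If that's the cause, the fix is to move the weight out of the family name and into the font-weight property (hypothetical rule, using the Arial example above):

```css
/* GDI-era naming: the weight is baked into the family name */
h1 { font-family: "Arial Black", sans-serif; }

/* DirectWrite naming: base family name plus a numeric weight */
h1 { font-family: Arial, sans-serif; font-weight: 900; }
```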
This is not a bug, though -- this is correct behaviour.
That's because most downloadable fonts aren't hinted for ClearType. DirectWrite with its subpixel positioning can cope much better with unhinted fonts.
If we really did that, we'd be back in the land of HTML4 and CSS2.1 and, ironically, Flash and Java applets.
I think the reality for most projects today is that we make a decision about how much portability pain we're willing to accept in return for using newer technologies to alleviate other pain such as making yet another set of rounded corner graphics, and we make a decision about how much degradation in the user experience is acceptable for older browsers that don't support whatever non-portable technologies we choose to adopt.
This is a far cry from what web standards should be, of course.
> That is an artifact of ClearType subpixel positioning.
That may be so, but the fact is that for several months lots of people using Firefox will have had difficulty reading lots of sites that worked perfectly well in older versions of Firefox and still work perfectly well today in other browsers (which makes me very suspicious of pinning the blame entirely on ClearType, BTW; no other software on these PCs has any trouble displaying those fonts).
If you're going to cause that kind of regression, then IMHO a policy of trying to force all of your users to update continually and ignoring the needs of those who can't or won't is inappropriate. Particularly in a security-sensitive application like a web browser, I think forcing users to choose between recent versions with vulnerabilities patched or old versions that actually did their job properly is irresponsible. After all, isn't this one of the reasons plug-ins are supposed to be Very Bad Things?
> I have a feeling you're choosing the font incorrectly.
Given that there is no standard specification of how to choose fonts like this, I think that's perhaps a rather bold claim to make (no pun intended). (Go ahead, check any W3C recommendations you like, you won't find it. In fact, the CSS2.1 Recommendation specifically acknowledges the variations in things like font weights between different families. It also provides no specific advice on naming conventions for font families at all, because what would you say that makes sense anyway?)
Again, it doesn't really matter though, because the point is that it worked on every other browser. Whether they are tolerant and Firefox is correct or they are correct and Firefox is broken doesn't change the end result.
> That's because most downloadable fonts aren't hinted for ClearType. DirectWrite with its subpixel positioning can cope much better with unhinted fonts.
Sure, but hundreds of millions of people are still browsing with ClearType, so I think trying to get professional sites to use web fonts that look rubbish in comparison to tried-and-tested screen-optimised fonts like Georgia and Verdana is one of those "Oooooh, shiny" policies that is more driven by hype than substance.
And for the record, a lot of the fonts available on the commercial font rental services render poorly even on high-resolution screens using the latest browser versions on Windows 7. I suspect the font services and foundries are banking on higher-res screens of the kind we're seeing on the latest smartphones making the need for traditional screen-font hinting obsolete. But this is drifting off-topic so I won't follow that avenue any further.
Read my words carefully. I did not say it is an artifact of ClearType. I said it is an artifact of ClearType subpixel positioning. Subpixel positioning is only available with WPF and DirectWrite, and very few Windows apps use those APIs. IE9 has exactly the same "problems" as Firefox does.
Given that there is no standard specification of how to choose fonts like this, I think that's perhaps a rather bold claim to make (no pun intended)
By "incorrectly" I meant "in a way that Firefox doesn't recognize". You don't blame Firefox when you misspell a variable name in a JS script, do you? Similarly, if you specify a font that Firefox doesn't recognize and it doesn't work it's not a bug in Firefox.
Again, it doesn't really matter though, because the point is that it worked on every other browser.
It worked in Firefox too -- the next font in the list got selected and the text still got displayed. The web platform provides no guarantees as to pixel-perfect rendering. If you want that you should publish a PDF instead.
Whether they are tolerant and Firefox is correct or they are correct and Firefox is broken doesn't change the end result.
It still works in other browsers, and it still works in Firefox. It has nothing to do with tolerance and everything to do with Firefox simply using a different API to enumerate fonts.
As for the remainder of our discussion, my point is simply that you can't do a lot of basic stuff like font selection in a standardised way today, and the constantly moving goalposts are not helping. Falling back to the next font in a list if a font isn't installed is a potentially useful behaviour. Falling back to the next font in a list in some browsers when the font is installed and other browsers do find it means we haven't defined the standard for identifying fonts clearly enough.
Do you have a test page? If it happens on Windows XP too then it's probably a bug. Did you file one?
I have no idea whether a specific bug has been filed. I'm afraid I have given up trying to help Mozilla, since they seem to do just about everything possible to make it difficult to do so (insanely overcomplicated bug tracker, can't even download the past few releases any more, etc.). At current rates, I expect them to last about as long as RIM, so I devote my limited spare time to helping other projects instead.
I haven't seen anything close to this, especially on Windows XP. Do you have a test page, a screenshot, anything?
can't even download the past few releases any more
OK, I can't assume good faith any more. You're outright lying now. https://ftp.mozilla.org/pub/mozilla.org/firefox/releases
At current rates, I expect them to last about as long as RIM, so I devote my limited spare time to helping other projects instead.
Well, the top result if you Google for "Firefox kerning capital T" is this report from a few days ago, complete with screenshot:
For the record, I'd already hacked one about:config entry about HarfBuzz (obviously something most users won't know how to do) to fix an earlier related problem, and it sounds like I now need to hack another one to fix the same problem in more recent Firefox builds.
> You're outright lying now. https://ftp.mozilla.org/pub/mozilla.org/firefox/releases
Well, thank you for the link. I can honestly say that despite using Mozilla browsers for many years and trying to file a bug on numerous occasions, I have never come across that FTP site. It sure as heck wasn't mentioned anywhere usefully prominent on either the main web site to download Firefox or the pages relating to filing a bug last time I tried, and I did spend several minutes looking.
I will just reiterate here that it wouldn't matter to me anyway now, though, because the one previous build I did find and install when trying to report a bug a few weeks ago (the latest 3.6 series one) basically screwed up my entire add-ons configuration in my current (then v5) Firefox build, something it had no reason to go anywhere near when I was just trying to do a clean parallel install to check for regressions in a particular area. If I'm going to volunteer my time to help out, it simply isn't worth risking the hassle of reconfiguring my up-to-date Firefox installation (which I use for actual paying work) any time I want to check for regressions since an older version.
No, honest opinion. I think the Firefox team's repeatedly demonstrated attitude to fast releases, new features vs. quality control/regression testing, and generally providing a sustainable, reliable platform useful for business applications, is fundamentally flawed on a management level. Their reliance on Google for almost their entire income stream is also fundamentally flawed on a commercial level, given that it is ultimately in Google's commercial interests to move more people onto Chrome and lock them in by using Chrome-specific features in Google's web offerings. If your management/PR and your commercial set-up are undermined, it doesn't really matter how good you are technically.
Anecdotally, I have been in two meetings already this week where director-level people (that's CxO level people for those of you across the pond) in medium-sized companies have made policy decisions that Firefox support is no longer to be considered a priority for their web development work (which is a significant part of their business in each case). The reasoning was much the same in both cases, and the same as other meetings I've been to recently: the amount of time that developers have been spending working around regressions and incompatibilities in recent months can't be justified when you don't know what will break again or be fixed anyway in the next release less than three months away, and when any claim of support that can't be relied upon for business-level timescales isn't worth anything in the market anyway.
More objectively, look at any reputable measure of market share in the browser space. Firefox hasn't been going anywhere for quite a while; if anything, it's dropped slightly according to some sources. Meanwhile, IE has been losing share as fast as ever, Chrome has been racing up faster than any browser in history, and several of the "minor" browsers are grabbing enough of the pie to register. That is not a healthy picture for Firefox.
I remember when Microsoft was the company that made standards; now they seem to be playing catch-up with the rest of the industry.
I remember when C# and VB.NET first started competing head to head, and it quickly became apparent that C# had won even though there were originally more VB devs.
Even the code examples coming out of MS are worth noting: they used to pump out a lot of VB.NET examples when everyone else was using C#, which wasn't too bad as you could convert them fairly easily.
The more I think about this, the more it looks like it's going to be a messy fight. If you want to see what's winning, watch the questions on StackOverflow.
BTW, they've also created a lot of vendor-specific CSS extensions, including a grid, so they are attempting to get out of document mode and move HTML forward. I know your gripe though; it always irritated me too.
Yes, I know, Microsoft was clearly the one who pushed for CSS3 grid positioning, but it's also a standard that can be adopted by any other browser and, as someone who has used it a bit, I really hope it will be.
I guess it all depends on whether it's accepted before Windows 8 officially launches.
Cross-language comparisons are bogus and their results can't be extrapolated to all apps, but these examples show that V8 is much closer to raw C perf than you might expect.
I mean, it may be a good UI for tablets, but I did not buy two 24" widescreen monitors to run IE in fullscreen. When I use my PC I don't care about active tiles or what's on the desktop at all. I have apps to run and work to do.
And I am not about to replace my mouse & keyboard with touchscreen on the desktop anytime soon (if ever).
Simply put, tablets and PCs are not the same thing and are not used for the same purpose. They need different GUIs.
The main 'revolution' in this Metro UI is that it's a tiling window manager, one that was made user-friendly with gestures to manage tiles. It leaves the WIMP desktop metaphor to an easily accessible legacy mode (and MacOS). Tiling WMs are not especially keyboard-unfriendly, nor reserved for small screens.
Personally, I think Microsoft is moving in the right direction.
Now, instead of moving beyond Y or inventing a better Y, what you basically do is overdo Y and project it as a solution to every problem the user ever has.
The Tablet and phone need a UI that can be easily used with a 'finger touch'. The Desktop needs a UI that can be easily used with a mouse for bulk work. These are two different use cases for different devices.
Just because iOS is cool, Apple didn't start using it for desktops and laptops. Along the same lines, Google has a separate OS for netbooks (ChromeOS) and a separate one for mobile devices (Android).
Also, the market is heading in a different direction when it comes to mobile devices. It's not about selling an OS! It's about giving the OS away for free or for a nominal charge, to sell awesome hardware or ads on your search servers.
UI solves only one part of that problem. And apps? What about apps?
I expect pretty much the same from Metro. It will be adopted for certain uses: small and touch devices. For the rest I doubt it'll get in the way.
I believe Microsoft is trying to attract developers to the new interface, but "native Windows" is here to stay.
Windows 8 has the Metro UI, as well as a Windows 7-like UI to use with mouse and keyboard.
Don't like the Metro UI? Just run the regular Windows-looking desktop.
In other words, Microsoft means to say "please use Metro, but if you have legacy code don't be angry with us, 'cuz you have the desktop UI option... but not on all devices"
So the legacy mode will be available on both Intel and ARM devices, which I believe will make up all Windows 8 devices.
'Legacy' desktop applications would probably need to be recompiled to run on ARM devices.
This isn't Windows 8 RTM, this is Windows (7-to-become-8) Developer Preview where they show off where they are heading, give you a chance to get a feel for it, get a feel for the design-language Microsoft is putting out here and allow you to adapt your applications to this new format.
Because this is really such a new format, you need to play with it to get it.
So yes, this is very incomplete. In its current state it is not intended as a production OS to replace Windows 7 either. It's a developer preview. It's supposed to bring developers up to speed on the Windows 8 platform and how you build for it.
Lots of people are treating this as a beta, almost like an RTM, and I honestly think people doing that are missing the point entirely.
This move is far more interesting, surely, as confirmation that Flash has lost the browser wars and that web standards will determine the future of web applications.
If you make a living as a Flash developer, it's long past time to start learning about the web.
Then Microsoft joined the RIA fray with Silverlight, battling for control of the web. Flash was everywhere, SL up-and-coming. It was meant to be a spectacular battle, but both companies were blind-sided by this little web standard thing which was about to crash the party...
Except that it didn't. The war didn't end because someone won, but because Steve Jobs said so. It was only an illusion that HTML5 renders the RIA plugins irrelevant. In reality, it was the emergence of native apps. First iPhone, then Android, iPad, and now even Windows/Metro. Who cares about rich cross-browser apps?
What's held back web apps is not Steve Jobs, but the fact that creating them is incredibly painful and produces results that are barely acceptable except in a very small number of cases where the application is effectively a web page to begin with. The standards and libraries have evolved extremely slowly due to the difficulty of pushing out browser support across such a wide range of slowly-updating platforms. What counts as amazing in the average web app today wouldn't have been an impressive native client application in 1992.
1. Netscape introducing Navigator
2. Microsoft stopping development post IE6
This has actually given the web browsers a modicum of stability and people were able to develop against a stable medium, and allowed Firefox to catch up in terms of compatibility.
In order to regain its Windows franchise, it needs to reimpose the Windows tax.
Firstly, by making Metro IE10 plugin-less, it kills Adobe Flash as a navigator-pretender.
Secondly, by introducing a lot of IE10-specific extensions, it hopes that developers will start to make use of these, eventually leading to the balkanization of the web. With MS having the highest share of desktops, it hopes it can buy another 20 years of Windows tax.
Thirdly, Apple's experience has shown that without plugins, developers will either have to choose between HTML or Apps. Now Apps are a great way to create lock in. The existence of plugins threatens that.
The jury is very much still out on that one.
Plenty of web sites are still Flash-based at least in part, and big companies are still throwing serious money behind them, and this is several generations of iDevice-without-Flash later.
Meanwhile, subjectively it seems like the number of complaints from people using iDevices when they can't use a site for this reason has been growing a little, at least on the forums I follow.
I imagine Adobe are well aware of this, as they have been promoting a lot of alternative (and potentially much better) approaches to their traditionally Flash-owned territory lately.
Right now Flash penetration continues to be massive.
You could argue that Flash and Silverlight were never particularly appropriate either. Cue native apps...
Very few people are doing client-side Java these days, but those who are are making some rich, immersive experiences, across platforms, and with push deployment.
Having said that, client-side Java is hard; there are no "authoring" tools, so all UI & visualization is made by programmers... not exactly the most discerning of aesthetes ;-)
I should probably be paying attention to this, but is Metro HTML5/js only? Does that mean Chrome and Firefox are basically dead on Windows 8 (unless you leave Metro)? And what about anti-trust lawsuits against IE??
So for a web browser, this is not a matter of "porting". It's a matter of adding new "chrome" for WinRT.
Even without MFC or WebForms, there is quite a lot of work to do.
I was a little disappointed - the title had me all excited, like "Ooh, IE10 is going to be amazi ... oh wait."
Still, it does look really cool, I for one am excited. I'm not switching from Chrome, though.
Please do correct me if this has changed and I am wrong.
"In Windows 8, IE 10 is available as a Metro style app and as a desktop app. The desktop app continues to fully support all plug-ins and extensions." (emphasis mine)
Thus, the end of Silverlight was just a question of 'when', not 'if'.
See this: http://blogs.msdn.com/b/b8/archive/2011/09/14/metro-style-br...
"For the web to move forward and for consumers to get the most out of touch-first browsing, the Metro style browser in Windows 8 is as HTML5-only as possible, and plug-in free. The experience that plug-ins provide today is not a good match with Metro style browsing and the modern HTML5 web.
Running Metro style IE plug-in free improves battery life as well as security, reliability, and privacy for consumers. Plug-ins were important early on in the web’s history. But the web has come a long way since then with HTML5. Providing compatibility with legacy plug-in technologies would detract from, rather than improve, the consumer experience of browsing in the Metro style UI."
Can you tell me how I record video or audio using HTML5 only? Even playing more than a single audio sample and getting them to sync is a challenge.
It's not implemented anywhere yet, but it is part of the spec.
Think about it. Now if you are Microsoft, you have to decide whether your vision is good or if you should "listen" to what the market is telling you and shift your strategy. Glad I don't have to make that decision.
You'll still be using the same XAML/C#/etc