But what about all the rest of the computers? You know, those big things on your desktop? We all know just how smart most users are when it comes to figuring out how to use stuff on a computer. It's taken years to get to the point where your typical "user" can use a computer without too much difficulty. Part of this is due to the consistency of the Windows UI: folder windows, mouse motions, menu bars, etc. Suddenly, we're going to throw something completely new at them, something which:
1) Has been shown to be difficult to do well for everyday use (touchscreen desktops)
2) Has a moderately non-intuitive interface (hidden UI elements until I swipe from a particular side, or maybe tap over here and here and here)
3) Has questionable benefits in a desktop computing environment where a keyboard is a perfectly appropriate device
And, once they learn all that, our user realizes that he still has to use the original UI for many programs! That's right, Photoshop isn't going Metro anytime soon, and neither is MATLAB, or AutoCAD, or Excel, or video editing software, or any "Pro" program (for lack of a better word). Metro may be wonderful for tablets and other mobile devices, but it sure looks like it's going to be a drag when forcibly married to the traditional Windows UI.
So in many cases I think you're actually talking about different users: the majority of people may eventually not need anything beyond a tablet. I also suspect the rest of us will be annoyed and inconvenienced by this trend, since it won't be aimed at us.
It feels awfully 1984 for me.
I live in China and I can tell you it is painful, sometimes, when someone else chooses for you what you are allowed to do. I'm reading "Zen and the Art of Motorcycle Maintenance", and heck, this guy is right: there are the people who fear technology and get used by it, and the other people who don't fear it and use it.
The cynic (borderline totalitarian) approach is to say that people should be divided between those clever hackers who have control and the rest, who should not be given "too much" control, because they are not clever enough, or don't need it, or whatever.
The humanist (democratic, cf. the Enlightenment) approach is the opposite: human beings are clever, and should be given control over their lives and their tools, should be given choices, etc.
It seems to me that too many people have forgotten that we gather here, on HN, on the free, interesting and open parts of the Internet, because we like to control our shit, and ALSO because we prefer the common people to be educated and in control of their shit. Especially when it comes to matters as personal as their own files.
Sorry for the rant. I hope I'm mistaken, but from my recent reading of HN I feel there is a progressive shift toward more "Gruberness", which is questionable in its hypotheses and conclusions.
Computer owners who don't understand their computers do not feel free. They feel a constant sense of discomfort, even dread. They fear making mistakes, they worry that the box will break at any moment, and they're embarrassed that they have to ask techies -- often expensive techies -- for help.
Whether one falls into this camp or the camp you're talking about probably just comes down to personality type.
But there are a few other ways to look at it:
- Chaotic vs. orderly.
- A pile of laundry vs. neatly folded
- Unfiltered vs. filtered
- Amateur vs. professional
- Unedited vs. curated
Pick whichever way you want and your perspective might be different. It's really in how you define the context.
As far as I'm concerned the app store is a shopping mall and the proprietors have chosen to limit what kind of businesses can open shop there (no head shops or brothels). There are shoppers who are happy to go there and those who don't like the brand-name selections. There are businesses who are prospering happily and those who are struggling.
And then there are those who like to stand outside with pickets and complain that the management is squelching free speech.
Comparing it to dystopian nightmares and totalitarian regimes is quite a stretch.
What I am not OK with is the makers of a machine I buy keeping too strict a control over what I can do with the machine I bought. We are grownups; we should be allowed to use the things we buy in the way we please. As far as I know, if I do something illegal with a Porsche or an iPad, their respective makers are not responsible for that misbehavior, and therefore should not be entitled to forbid me from doing it.
You are correct, but I am unclear as to what you could be referencing. You can add pirated MP3s, eBooks and movies to iOS devices, if you so choose. You can even jailbreak them to customise them even further, provided you don't also expect warranty service.
Not if Apple had their way and were able to win that arms race.
>I am unclear as to what you could be referencing.
You're _playing_ the fool here I hope.
I put Rockbox on my iPod classic; I love Rockbox. Apple encrypted the device in a way that prevents aftermarket modifications, which set Rockbox development for that device back years while developers worked to get around the restrictions Apple put in.
That kind of practice is what he is referencing.
Making a product do something the standard setup will not allow is the whole point of the hacker space anyway, isn't it?
PS: I don't agree with Apple's decision to encrypt the standard (non-touch) iPod line. I wonder if it was part of their licensing terms with movie studios.
We need, occasionally, to draw a line of abstraction over complicated stuff so we can build upwards. This is why we code in high level languages rather than toggling switches on front panels, even though it meant surrendering some "freedom".
And selling pre-built computers doesn't mean you have to glue the box shut. You just have to make sure people don't have to mess or even think about the internals unless they really want to.
E.g., the "give us 30% and don't undercut" rule means that the user can buy from the app store without needing to price-compare. The "no execution of outside code" rule means no platform subversion, with the endless security issues that would bring.
If you stop thinking in terms of technical capabilities and start thinking in terms of the user's cognitive load, iOS is a huge step forward in terms of abstraction.
You describe it as a struggle between totalitarian and democratic ideologies, with all the moral and ethical implications. I think it's really about a third, much more powerful force: giving people what they want.
This is tangential, but I think explains what I mean better than I could:
However, I am extremely opposed to entertainment force-feeding (I don't watch TV, nor do I read magazines), so I agree with Huxley (and you?). But I don't think that is closely related to my impression that some people go too far in the "don't say yes to all" direction.
The good reason for tool creators to say no is not that they know better how the tool should be used; it is that they know saying yes to request A would prevent the tool from being used in ways B and C, and in other unanticipated ways. Saying no is OK when it actually opens new doors.
What people want depends, to a large degree, on their level of education, what they're exposed to, and what leaders and the media tell them they should want. It will be interesting seeing how pop computing and lock-down devices do globally in the long-term.
Don't get me wrong. I regularly drop back to the command prompt, and hate having the flexibility of a full system taken away from me. But I also hate that feeling I get when random relative calls up to ask for help with their latest tech disaster. I hate it that configuring operating systems so that they are reliable, robust and functional is almost impossible. I hate it that the operating systems still get in the way of just doing the job.
We're hackers, hobbyists and enthusiasts, we want the computer to be in our faces, that's what we do. The vast majority of users however, hate it when they have to put up with the crap we think is cool just to read an email, or write a document.
Can't remember who the quote is from but it goes something like this,
Technology goes through three stages,
simple with limited functionality,
complex and unreliable,
simple and reliable....
I have a feeling that we have started to reach the end of the second stage.
> "the advantages that come from NOT allowing you
> to do so many [things]".
I don't think that he was talking about a curated App Store (with a 'totalitarian' gatekeeper), which seems to be what you're talking about.
Now we are told that the iPad doesn't allow background processes and that this is a good thing. If the only reason is battery life, then that is not a good enough reason for me (I have two spare Nexus batteries in my pocket; what is the problem?). Somewhere else we are told that we should not expect direct access to our own files because "files are so 1990", or because iTunes will handle them. I don't agree with that view. There ARE files behind the pages or handles or icons I see on my screens, and they are MY files (pictures, docs, etc.). It is OK for the main use of this content to happen through some cloudy vaporous tool, but I need the ability to grab the file, MY file, and do what I want with it (including, obviously, destroying it).
He is so incredibly apologetic to the corners that Apple engineers cut that I wonder if he really believes himself.
If it's a question of specs or software maturity then it really is just a matter of time.
Surprisingly, it wasn't nearly as painful as I expected. It didn't take me long to write the letter (the default templates helped a lot), making edits was fairly easy and I had no trouble being able to read over what I had written.
So for small edits to existing files and quick docs it's usable, though clearly not as good an experience as using the same suite on an iPad.
It'd be interesting to see how such a thing would differ from Google Docs, and if it would justify the kind of prices MS likes to charge for Office. An Office app for lightly-spec'd computers makes sense, but you'd have to charge app prices for it and I don't see MS sacrificing their second biggest cash cow any time soon.
At the moment, Google Docs is Office Light. It offers a limited subset of Office's features, and doesn't do it particularly well. What it does do, extremely well, is make it easy to get the documents you create to your friends and co-workers. I can't see a limited-functionality, offline version of Office competing with that.
However, I'd extend that and say that along with the "key" is a smartphone that can also (in a limited fashion) view/manipulate your cloud data and send/receive messages.
The important thing to recognize is that there will always be need for "more and faster" than a smartphone will be capable of providing.
1) The rate at which cooling tech improves: how can you keep a miniature multicore CPU cool enough to drive gigantic displays?
2) The power efficiency of miniature multicore CPUs.
I'd actually be surprised if there isn't a fully staffed team working on a Metro version of the Office suite.
Remember, we're talking about regular users, here... these people (broadly speaking) never use keyboard shortcuts. Everything needs to be clickable (touchable) with the mouse (finger).
> There needs to be a way to switch slide layout, add elements to the slide, work with animations & transitions, reorder slides, etc. And that's just some of the things that users will want to do all the time, on almost every slide. There's just no way to have that sort of rich interface in Metro.
See link. It's perfectly possible to do all that without a classical menu bar. Does it do everything the desktop version does? No. Can it do pretty much everything useful of it? Sure.
No, not really. It's great for presenting, useful for making small tweaks to existing content, and painful for trying to create presentations that go beyond stuffing text in the provided templates.
This is also true of Numbers. Pages is pretty good though.
You must've missed the "charm", sliding & docking side-windows part of the Metro demos at BUILD.
I can remember my high school computer studies teacher explaining they're called windows because they're split into panes. Microsoft Windows versions 1 and 2 didn't actually have stackable overlapping windows (you'd split the screen up into as many panes as you needed) so I suspect the name made sense then and has since stuck.
EDIT: Though now that I think about it, IIRC, Xerox's stuff didn't have overlapping windows either. Apple's Lisa and Mac did. I think Engelbart's demo had windows, but I'm not sure if they overlapped.
Woz was amazed that it was possible to have overlapping windows on such a resource-constrained machine. But if Xerox engineers could do it, he could too, dammit! After a Herculean effort he got the overlapping windows to work nicely and they shipped.
It was only later that they found out that Xerox's engineers were blown away by Woz's achievement as they had studied the problem and decided that it wouldn't be possible to do overlapping windows. They never developed that functionality.
Turns out that Jobs had made a mistake in remembering what he had seen and Xerox never did have those overlapping windows :-)
Anyway, this story might be incorrect on some points or a complete fabrication, but I like it.
"Smalltalk didn't even have self-repairing windows - you had to click in them to get them to repaint, and programs couldn't draw into partially obscured windows. Bill Atkinson did not know this, so he invented regions as the basis of QuickDraw and the Window Manager so that he could quickly draw in covered windows and repaint portions of windows brought to the front."
Steve Jobs knew how important regions were: http://www.folklore.org/StoryView.py?project=Macintosh&s...
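The regions idea in that quote can be illustrated with a toy model (purely illustrative: QuickDraw's real regions use a far more compact encoding and the function names here are invented). Treat a window's visible region as its rectangle minus the rectangles of every window stacked above it; drawing into a covered window then only needs to touch the rectangles that survive the subtraction:

```python
# Toy model of window "regions": the visible part of a window is its
# rectangle minus the rectangles of every window stacked above it.
# Rectangles are (left, top, right, bottom) with right/bottom exclusive.

def subtract(rect, hole):
    """Return rect minus hole as a list of disjoint rectangles."""
    l, t, r, b = rect
    hl, ht, hr, hb = hole
    # Clip the hole to the rectangle; no overlap means nothing is hidden.
    hl, ht, hr, hb = max(l, hl), max(t, ht), min(r, hr), min(b, hb)
    if hl >= hr or ht >= hb:
        return [rect]
    out = []
    if t < ht: out.append((l, t, r, ht))      # band above the hole
    if hb < b: out.append((l, hb, r, b))      # band below the hole
    if l < hl: out.append((l, ht, hl, hb))    # strip left of the hole
    if hr < r: out.append((hr, ht, r, hb))    # strip right of the hole
    return out

def visible_region(window, windows_above):
    """Visible region of `window` under a stack of occluding windows."""
    region = [window]
    for occluder in windows_above:
        region = [piece for rect in region for piece in subtract(rect, occluder)]
    return region

# A back window partially covered by a front window: only the exposed
# pieces need repainting when the back window is drawn into.
back = (0, 0, 100, 100)
front = (50, 50, 150, 150)
print(visible_region(back, [front]))  # [(0, 0, 100, 50), (0, 50, 50, 100)]
```

The point of the sketch is that clipping to an arbitrary visible region lets you draw into a partially obscured window without self-repair tricks, which is exactly what the Smalltalk environment described above could not do.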
In a semi-related matter, does anyone know of the algorithm Bill used to draw RoundRects 'blisteringly fast'? http://www.folklore.org/StoryView.py?project=Macintosh&s...
According to the Apple stories I've read, Wozniak had little to no involvement with the creation of Macintosh. He was in a plane crash in 1981. When he came back to the company in 1983 he was mostly involved with the (increasingly sidelined) Apple II team.
But overlapping windows first came to Apple with the Lisa, anyhow. Some of Bill Atkinson's photos here show them from December 1979 or early 1980. Story says Bill wasn't sure whether they predated Apple's Xerox visit, or not. http://www.folklore.org/StoryView.py?story=Busy_Being_Born.t...
Anyhow, it's a good story. If it has some basis in truth it might be with s/Woz/Atkinson/ (he was their UI/graphics guru programmer) and s/Macintosh/Lisa/. :)
EDIT: The Alto could run Smalltalk as one of its environments, though, and its windows overlapped as of Smalltalk-72 or -76.
EDIT: Reading through the comments shows a markedly better level of discussion than the usual Gruber response thread as well.
He's a sportswriter, and his home teams are Apple, the Yankees, and Helvetica.
Rooting for dominant winners like that isn't so bad in itself, but he does it as if they were still downtrodden losers in need of defenders; that's the core of what makes him so insufferable.
"you can’t give iOS apps even the option to run continuously in the background without sacrificing battery life and foreground app performance. But that’s how Microsoft has positioned Metro for tablets — a modern touch interface that carries the full CPU and RAM consumption of Windows as we know it. That have-your-cake-and-eat-it-too attitude is what I didn’t get with Microsoft’s positioning Metro as its answer to the iPad."
This is wrong. From Anandtech: (http://www.anandtech.com/show/4771/microsoft-build-windows-8...)
"discarded applications will continue to stay open as a background application, having all of their memory pages intact but unable to schedule CPU time so long as they’re a background application. They’ll remain in this state until the OS decides to evict them, at which point they need to be able to gracefully shut down and resume when the user re-launches the application. Internally Microsoft calls this freezing and rehydrating an application."
Metro's approach sounds very similar to that of iOS and Android. Presumably this behavior will be adjustable so that background processes can be allowed on desktops without mobile power constraints. This is actually a really smart way to do things. Make how the OS handles background apps a setting rather than hard-coded architecture. e.g. If you're out and about using your tablet background apps get quashed so that you get decent battery life. When you go home and plug it into a dock you can leave a torrent downloading in the background while you browse the web or play games. Best of both worlds.
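The "policy as a setting rather than hard-coded architecture" idea can be sketched in miniature. This is purely illustrative pseudocode of my reading of the freeze/rehydrate description above; every class and method name here is invented, not Microsoft's API:

```python
# Illustrative sketch: background-app handling as a configurable policy.
# A "frozen" app keeps its memory pages but gets no CPU time; on a dock
# with mains power, background apps are simply allowed to keep running.
# All names here are invented for illustration.

class App:
    def __init__(self, name):
        self.name = name
        self.state = "running"   # running | frozen | evicted

class Scheduler:
    def __init__(self, allow_background=False):
        self.allow_background = allow_background  # the "setting"
        self.foreground = None

    def switch_to(self, app):
        old = self.foreground
        self.foreground = app
        app.state = "running"
        # On battery, freeze the app we just left; when docked, let it run.
        if old is not None and not self.allow_background:
            old.state = "frozen"

    def evict(self, app):
        # Under memory pressure the OS discards a frozen app entirely; it
        # must "rehydrate" (restore its saved state) on the next launch.
        if app.state == "frozen":
            app.state = "evicted"

browser, torrent = App("browser"), App("torrent")

on_battery = Scheduler(allow_background=False)
on_battery.switch_to(torrent)
on_battery.switch_to(browser)
print(torrent.state)   # frozen: no CPU time while backgrounded

docked = Scheduler(allow_background=True)
docked.switch_to(torrent)
docked.switch_to(browser)
print(torrent.state)   # running: background work allowed on mains power
```

The same switching code serves both modes; only the flag changes, which is what would make the tablet-on-battery and docked-at-home behaviors coexist.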
At no point does the article say Metro would not have such a feature. The point is merely that you can't have outstanding energy management with runaway background processes, which is how desktop apps work today, so having a full-blown desktop running legacy apps on a mobile device compromises power management.
Side note regarding "you can leave a torrent downloading in the background": see task completion API on iOS, which fills the use case while eliminating undue battery use.
It’s worth noting that Metro is more than just a new look, and more than just putting touch first. Metro apps have similar restrictions to iOS apps. According to Jensen Harris, for example, Metro apps will get “about five seconds” after they’re no longer on-screen before the system puts them into a suspended state. There’s no file manager. Users no longer quit (or, in Windows parlance, exit) apps explicitly. Do these tradeoffs sound familiar?
...Did you (and the parent poster) simply read the Gruber article until you could find something to nitpick, then go with it?
Which, again, proves he didn't watch the keynote. At one point they brought up the scenario of using a tablet on the couch and then, after finding a bug, being able to pull up Visual Studio on the same device. Even the tablet they gave out came with VS installed on the device.
Let me show: after you go back to Metro, you're dumped back in with no context for how you got out. Thus it feels like a skin or window manager, not the OS.
It's not a direct parallel, but this and the fact that companies like Google have completely upturned the definition of "beta", or well, pre-releases in general, seems to show how people come to their final judgement very easily.
I suppose that could be why Apple doesn't release things until they're well and truly ready for the limelight.
Task switching is done by swiping in from the left or hitting Win+Tab. I just tried it; Win+Tab brings you right back into the Control Panel where you left off.
It could probably be clearer, but the Windows button takes you to the Windows front page, it's not a toggle between desktop and metro.
I do agree that cohesiveness is important to the user experience. It makes me a little disappointed that the extra step wasn't taken to make the Metro UI live on its own without the full desktop experience tagging along.
To put it another way, I'd much rather have bumpiness shipped on day X than a delay to clear up all the transition bumpiness and a ship date 6 months after X. Although I'd like to see updates that fix the transitions come out 8-12 months after release.
What would make me very pleased is if they delayed 6 months to get it right rather than rush it out.
It seems there will be a desktop mode for ARM tablets if you look 1:40 into the video here:
The desktop mode doesn't seem to immediately respond to touch like Metro. It will be interesting to see the final product and how well Windows 8 performs on an ARM processor in desktop mode.
And yet his articles on Windows 8 get top billing on HN whereas the actual Windows 8 posts hardly got many votes. That's HN for you I guess.
This is a typical Gruberism of false logic. He starts with the notion that all Apple's decisions are holy and right. He then derives conclusions by extrapolating from that.
There is no reason that tablets cannot achieve fantastic battery life while running background processes. They certainly need to be designed to achieve it. Existing Android tablets support background processes and multitasking and get comparable battery life to the iPad - typically we're talking a sacrifice of < 10% battery life to achieve an incredible expansion of utility. And this is not taking into account the fact that Android is a less efficient OS overall (utilizing less hardware acceleration, running most tasks inside a Dalvik VM instead of native, etc.) Even the iPad itself evidently does background processing as you can have it play music, give you calendar reminders and all kinds of other things happen in the background even when it is in sleep state.
Remaining backwards compatible while picking up such a new model of multitasking is hard. Very hard. Apple made parts of it opt-in with Lion but that’s not the same as requiring everyone to adhere to the model and today hardly any app supports it.
Gruber simply sees this as a hard challenge that is, at least currently, not solvable if Windows is to stay backwards compatible. That’s all. He might be wrong but I think that position is quite sensible.
I have long had the theory that the people at the top who could have done something against that (notably Steve Jobs) simply don’t care about that discrepancy, that’s why it can continue.
One would think that the live tiles with continuously updated information show more "connection" with the user.
It's just a browsable report of the status of things you care about, with the UI more trying to get out of the way, and provide options, than be a "thing" in your world by itself.
On that note, it probably should (and perhaps will) become more subtle over time. I'd prefer it to take the same approach as classic Mac OS: stay in black and white so that the user's real data is what's in color. They won't do that, of course, but a subdued palette could really help. Of course, the individual widgets would then need to really bring home their content's presentation.
Can you really say that the app above has no personality?
In-depth Metro analysis from a designer if you're interested:
But it won't work, because those apps won't be available on ARM, and even if they were, they wouldn't be designed for touch. So Microsoft is starting from scratch, and this time they have strong competition from both iOS and Android.
In this market, their Windows dominance doesn't matter as much, so they are on equal footing with the others. And I find that very exciting. If you notice, Microsoft is innovating only when it's the underdog in some way, not really when it dominates.
So I hope 5 years from now we'll get to see iOS, Android and Windows with about equal market share each for "personal computing devices", whatever that means 5 years from now.
But you're right that they're exposed with no applications, which is why they're basically giving the Windows 8 beta and VS11 Express away to any and all who are prepared to download them. They need to catch up, and quickly.
"Furthermore, applications for the ARM version of Windows 8 will only be available through the "Windows Store" - and only apps compiled to use its "Metro" touch interface will appear there."
Virtualization could enable you to run a legacy app, stop its processing instantly, bring up a new app, and save the 'background app' state to storage when it's convenient.
This seems to be where they are heading. I don't know how it would translate to ARM tablets, but intel wants in on tablets anyway.
I think that's a very black-and-white way of looking at it. Sure, maybe MS will rule out C++ x86 apps targeting the WinForms APIs, but there's no reason to assume that they will also exclude C# apps targeting WPF.
So you're saying the iPad is successful because it can't do stuff OS X can do? Sorry I don't understand -- that sounds a bit silly to me. I thought it was the portability and size and ease of access to apps (the ecosystem around the device) that makes the iPad successful. If we could have the hardware power of a desktop system on an iPad, while keeping the simplicity of use, I'm fairly sure we would all like that.
With every step Apple is moving OS X closer to iOS, and iOS closer to OS X. Will they ever combine the two? I see no reason why they couldn't eventually with the amazingly quick progression of the relevant technologies that we're seeing.
Now whether MS is doing the right thing by combining them now -- I'm OK with saying I have no idea until I actually play with the device. Maybe they can pull it off, maybe not.
All of these things are possible because of the restrictions in iOS.
For example, the lack of a filesystem forces apps to have a more simplified data access mechanism.
The good battery life and response time comes from the fact that the OS puts severe limits on what each app can do.
 - There's an escape hatch of course, with something like Dropbox
What?! How are files stored on iOS if there's no filesystem? Maybe you mean no user visible filesystem?
Let me first start by stating that I do believe that touchscreen devices will continue to revolutionize industries, as they already have. But, why in God's name are Microsoft and Apple trying to shoehorn the touch-screen onto the desktop?
Mac OS X Lion is probably the last OS X version that we'll see. With each version it's gotten closer and closer to behaving like iOS. It seems that Microsoft is doing the same with Windows 8. Think about this: the human-device interface is different for touchscreen devices and for desktops. The whole paradigm is different. With touch, you have your finger; with the other, you have mouse and keyboard. User interfaces that cater to one don't cater to the other very well. Why force it?
I believe mobile is the future. But, I'm not sure this is the best evolution for desktop interfaces.
TL;DR: The Metro UI looks nice, but merging a touchscreen UI with a desktop UI (the same Windows 8 for all devices) is a mistake.
EDIT: Please share your thoughts.
My thinking is that MS wants to sell you a hybrid tablet device. By day, you have it docked, and you spend most of your time in Desktop with mouse and keyboard. When it's going home time, you pull it out the dock, and it becomes a touchscreen tablet where you spend most of your time in Metro.
For example, I love having a touch monitor when I'm reading a long Word document. Why hold a mouse and use the scroll wheel as I read a 35-page spec when I can sit there casually and flip through the document as I read? Or, if I have two hands on the keyboard typing and need to switch to another window, it can be nice to just tap the screen rather than grab the mouse, navigate to the correct place on the screen and then click the button. Touch isn't perfect for everything, and I use the mouse plenty, but it's nice to have both. The key for me, however, was that I initially had to force myself to use the touch screen; I just wasn't used to it. Once I got into the habit of using it, I've found it has its place and can be nice.
Right now, from where I'm sitting, I can't touch my desktop monitor without leaning forward uncomfortably. Even if large touchscreens become cheaper over the next few years, my arms aren't going to get any longer, and my field of vision isn't going to get any wider (larger monitor = sit further away). In this situation, touch isn't simply imperfect, it's physiologically impossible. It'll be even more impossible if your "screen" is a 50" plasma TV on the opposite wall.
So it seems that @jonpaul does have a point.
I agree though, that trying to mix both is a mistake (one that's been tried for 10 years without much success).
Metro looks like it competes with the iPad, and the rest seems to be a version of Win7 under the hood. It feels to me like cmd is to Windows as Windows is to Metro: something under the hood for power users.
I can imagine taking my computer on the bus and reading hacker news in metro and then when I get to work I plug into a dock and open visual studio. That seems to be the vision and I think I like it.
I'd have to control what was going on in the background of my computer when I was undocked, but if I'm enough of a user to set up background, server-like processes, then I should be clever enough to understand that heavy background processing will eat my battery if I don't act responsibly when I unplug it.
The new more powerful ways to do diagnostics are exactly the type of tools I'd want to be able to control power; so it really feels like MS has a similar vision.
MinWin was a kernel cleanup effort that was supposed to be part of Longhorn and then Win7. If MinWin is finished in Win8, then Win32 would just be a subsystem alongside Metro, and neither would be dependent on the other or necessary for the other to operate.
I can't find the reference but I remember reading somewhere a few months back that Win32 may not even be installed by default in Win8 and that it would only be installed when you attempted to load an application that needed Win32.
Also, a few of these questions could be answered by downloading the Developer Preview. The Windows 7 desktop still exists (thankfully, for those of us that need to get our jobs done), but how a user will use Metro on a desktop or laptop will probably depend on their tasks and desires. As a database application developer and admin, I'll be sticking to the desktop and the CLI the majority of the time.
I love Apple, but I really respect what Microsoft is trying to do here. The whole "You did great, kid, but maybe next time..." routine is a little much. It smells like a smear campaign more than an inquiring mind. This article stinks of fear.
Tell that to Aaron Barr. I know the iPad wiping story isn't confirmed, but it would be pretty easy to refute were such a thing impossible.
Video back from January: http://www.youtube.com/watch?v=rvzJmRBS84w (watch from 2 mins in).
At the same time Intel is trying to make mobile chips, so it might just be that the mobile version of Windows 8 will be Metro-only, regardless of whether it's running on x86 or ARM.
The ability to run some old Win32 desktop crap.exe is what Windows is all about. Only a complete idiot would choose it as a platform for a new, built-from-the-ground-up project or, God forbid, a server.
And there are enough other ways to run a web browser, especially a plugin-less one. It's called Android. ^_^