I worked at the Post-PC lab in Colorado for a while. The premise, at least when the term was coined, was never that desktops or the (then new) "pen computing" interfaces would go away. Rather, it was always that "computerization" would invade all walks of life until suddenly everything would become computer.
Much of the research we did was looking at what life would be like when everything you interact with has a programmable interface, rather than only being able to program things through your PC.
30 years on, much of that has turned out to be more or less true, with cars, lightbulbs, door locks, and so on. But of course, like all "predictions" of the future, it was made using the terminology in vogue at the time, and it sounds funny to us now that it's here and other terms like IoT have eclipsed it in popularity.
Actually, this traces back to an aside by Steve Jobs where, speaking of the iPad, he said "we think these are post-PC devices, we really do".
This was immediately taken as a statement that he felt the PC was going away, rather than that the devices were not trying to be PCs in a new form factor, but something post-PC.
People should have gotten a clue when Apple did not stop selling laptops and desktops.
But Apple has reduced their R&D investment in laptops and desktops, at least on a relative basis. They haven't updated their "pro" desktop line in years. I suspect they will eventually offer a more powerful line of iOS devices to address content creation use cases, and gradually phase out the legacy MacOS product line. Unifying their product lines around a single OS would eventually deliver a lot of savings.
I've been waiting for a while for someone to call this.
While consumers may have moved on to spending most of their device time in front of a phone, tablet, or phablet, the rest of us still spend many hours a day in front of a computer.
There is still a sizable functionality gap between a tablet or phone, and a laptop or desktop computer.
Banks still use mainframes, and mainframes still have capabilities that laptops generally don't have. Neither argument means we're not in a post-mainframe era.
Today the majority of computer & internet use happens on a phone. But it's still a small majority, so we cannot yet call this a post-PC era. If/when that turns into a substantial majority, we'll be in the post-PC era, even if you and I still primarily use PCs.
The difference is that PCs and PC-based servers really can do everything mainframes can do if you port your software. Phones cannot do everything a PC can do, both for hardware reasons and for OS lock-down reasons (the latter is a bigger barrier for iOS). The mainframe/PC = PC/mobile argument is a piece of faulty reasoning by analogy. PC/mobile is more like truck/car than horse/car. Cars displaced horses for transport, but cars do not displace trucks.
What we are seeing is specialization. Phones are used for short transactional interactions with larger systems, texting/chatting, and casual reading. PCs are used for more prolonged deeper interactions including virtually anything creative. The latter may seem smaller than the former when you go by usage logs alone, but the latter is more important and is not going to be displaced unless phones can grow to do everything PCs do. In that case you no longer really have a 'mobile device' in today's mold. You have a tiny form factor ultra-portable PC that can be used as a phone when not plugged into a monitor.
Going with the car/truck analogy you might say that prior to mobile there really wasn't a "car" in computing. There were trucks used as cars, but nothing truly ultra-portable designed specifically for lighter more casual interactions. Mobile exploded because it filled a missing niche.
Note that mobile hardware may "eat" PC hardware in the sense that ARM64 could eat x86-64 eventually, but that will be an ARM64 running a PC OS. It'll be a higher clock speed ARM64 with more cores, more cache, and more cooling too.
" PCs are used for more prolonged deeper interactions including virtually anything creative"
Sure, but that's far from the primary use for PCs. There's still much more consumption & non-creative business usage on them. If creative pursuits were the primary usage, then PC usage share would be in the low single digits and we'd be well into the post-PC era.
Good question. I used to sell AS/400s, which aren't quite mainframes, but fit the typical picture in your head (green screen, dot-matrix printers, high price tags and all that).
1) Stability.
That stuff changes VERY rarely, and when it does it tends to be very backwards compatible. Our software included binaries compiled in the '80s that were still fully operational.
2) Reliability.
You don't reboot the server. You don't wipe it and reinstall it. You don't even upgrade it or replace it that often.
You can do things like add or remove drives from your RAID array while the machine is still at full utilization.
Heck, IBM AS/400s will detect the error, order the part, and get an IBM guy there in the morning to install it. You just have to let him in the door.
In fact, you can hot-swap things like the RAID controller card itself, which is WAY not something you can do on x86.
They last forever, too. It wasn't uncommon for us to have machines in daily use for 10+ years without a single bit of maintenance or an IT guy even looking in their direction. And this is for a business-critical machine, mind you.
3) Single vendor
Ever buy a Dell or HP server, have an issue, and spend 2 weeks trying to get HP/Dell vs Microsoft vs some other company to admit fault and fix the damn issue? Then you find the fix online yourself because that's nuts? Well, IBM is responsible for the hardware and software on these machines (besides what software you put on them), so if something isn't working, you call them and it's their damn problem to get it sorted. Kind of like Apple, except they'll send as many people as necessary for as long as necessary to sort it out for you.
4) Entrenchment
It works, so why bother changing it? It doesn't even slow down as it ages, since you aren't upgrading a GUI OS like Windows or whatever.
Now, this sounds fascinating as far as the stability and reliability go, but it leaves me wondering how such decent systems can come from the same company that makes the horror that is IBM Notes...
IBM was more like a federation than a single coherent entity. System/38 and AS/400 evolved in Rochester MN, mainframes in White Plains NY, and PC software all over the place. None of the divisions had much to do with each other.
Great points. I'd like to add that IBM's latest mainframe is the z14 (z14 seems to be the name of both the CPU and the mainframe) and it's pretty interesting:
RAID-like memory: tolerates one (or more, IIRC) failed memory module without stopping the process using that memory.
Failure prediction and recovery: if a core presents correctable errors, its workload is moved to a different one; the process never needs to know what happened. On mainframes that have spare cores, an inactive spare is brought online automatically (a toy sketch of this idea follows below).
Bunches of smaller computers to offload everything away from the CPUs: the main processor rarely bothers to do any IO; it's all handled by separate computers with their own memory, IO channels, and so on. Even powering up the machine is sometimes accomplished by a secondary computer that sets everything up before the actual machine is powered on.
There are a lot more nice things mainframes do that x86 boxes simply can't.
* each component exists twice, for high availability inside a single system (not to be confused with internet-era high availability, where you have multiple machines)
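To make the spare-core failover mentioned above concrete, here's a toy Python sketch of the bookkeeping involved. Everything in it (the Core class, the error threshold) is invented for illustration; a real machine does all of this in firmware, invisibly to the running process.

    # Toy model of "evacuate a core that reports correctable errors".
    # All names and thresholds here are invented for illustration;
    # real mainframe firmware does this transparently in microcode.

    ERROR_THRESHOLD = 3  # arbitrary: correctable errors tolerated before evacuating

    class Core:
        def __init__(self, core_id, spare=False):
            self.core_id = core_id
            self.spare = spare            # inactive spare, held in reserve
            self.correctable_errors = 0
            self.workload = None

    def check_and_migrate(cores):
        spares = [c for c in cores if c.spare and c.workload is None]
        for core in cores:
            failing = core.correctable_errors >= ERROR_THRESHOLD
            if not core.spare and failing and core.workload and spares:
                spare = spares.pop()
                spare.workload, core.workload = core.workload, None
                spare.spare = False       # the spare is now an active core
                print(f"moved {spare.workload!r} from core {core.core_id} "
                      f"to spare core {spare.core_id}")

    cores = [Core(0), Core(1), Core(2, spare=True)]
    cores[0].workload = "batch job"
    cores[0].correctable_errors = 3       # core 0 starts throwing correctable errors
    check_and_migrate(cores)              # the job itself never notices the move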
In this day and age, x86 hardware is good enough to run many IBM mainframe apps through the Hercules emulator. But if you do this, you risk running afoul of IBM's patents and being threatened by their lawyers. IBM has made a promise not to enforce patents against open source software, but is only willing to uphold that promise as long as you don't cut into their bread and butter, which is mainframe support.
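For anyone curious what that looks like in practice, a minimal Hercules configuration might look something like the sketch below. This is written from memory of the Hercules documentation; treat every statement as an assumption and verify against the current docs before using it.

    # hercules.cnf -- minimal illustrative sketch, not a working config
    ARCHMODE  z/Arch          # architecture to emulate (S/370 and ESA/390 also exist)
    MAINSIZE  1024            # emulated main storage, in MB
    NUMCPU    2               # number of emulated CPUs
    # devnum  devtype  arguments
    0009      3215            # operator console
    0120      3390    mvsres.120   # emulated DASD volume backed by a local file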
I mean, who would have thought that in a free market you'll get groups of people preferring different types of devices?
I'm actually really concerned by the strange groupthink around Apple and similar devices, which holds that if a device caters to less than a majority of the people on this planet, it deserves to be mocked like it shouldn't exist.
If you're optimizing purely for maximum ROI, then that's the correct product decision.
Whoever achieves maximum ROI attracts the most additional capital investment, which enables them to lower unit costs and continue improving that ROI further, thereby gaining even more capital, etc.
EDIT (since I'm getting downvotes): to be clear, I'm not advocating this particular "maximize overall ROI" decision as a moral good. I'm just pointing out that, empirically, this is what the market is actually optimizing for, irrespective of my feelings about it. That's the reason why they do it; it's not because they're dumb.
Intel etc. is not making money because I use a PC; they make money when I buy a new one. I have been upgrading my tablet more frequently than my PC, in large part because I spend so much time on it.
Multiply that shift by 1+ billion people and you start talking real money.
On the other hand, many people bought some discount Kindle Fire and that's their tablet until it breaks. The reason Android tablets died as a market is it turned out their replacement rate was not nearly as high as smartphones, so investment in new models stopped.
I've been calling it since 2008. I never saw it happening for numerous reasons, chief among these being the intentionally crippled nature of mobile OSes.
I'm glad others are realizing this. I remember a few years ago when people everywhere quacked "mobile is the future" in this weird almost brainwashed sounding way. I heard this a lot talking with CXO level people in companies, and then I'd look around and see that 100% of all real work was taking place on PCs. I kept my mouth shut of course. I've learned over the years that computing is extremely faddish and criticizing an in-vogue piece of fadthink is absolutely futile.
Except it has happened. It used to be that, for 100% of computer-using people, switching from entertainment (games) to something creative on the computer (making games) was a download away. Nowadays a minuscule number of people use a computer in "work mode" (i.e. not a laptop).
OK, let's make an exception for music; touch interfaces naturally lend themselves to it. Then again, I'm not really sure you can produce a finished, mastered song using an iOS app.
Finished, mastered songs have been done on much less than the iOS version of GarageBand. By people you've probably heard of, too. Twenty, thirty years ago I might not have killed someone for a copy of iOS GarageBand, but I would have been willing to seriously maim.
I find it interesting to consider the implications of the quality that comes out of mastering on iOS vs a real studio (where you have a trained engineer and dedicated equipment): the former is of lower quality, but it doesn't really matter because we've become a culture of "good enough" that settles for sub-par achievements. I would say that falling standards have made content creation more prolific, to the disappointment of those who care (and not just perfectionists). A culture raised on overcompressed and pitch-fixed productions just won't care about dynamics and good mixing. To quote a friend when discussing food: "If all you've ever had is Velveeta..."
> we've become a culture of "good enough" that settles for sub-par achievements
Congratulations, that strikes me as one of the more elitist things I've read in a while. We are not a society of "good enough", we are a society that decided we don't need to go to the high priests in order to cut a demo. Because what you might call "quality" others might call "over-produced". Because some people would rather produce quality music instead of twiddle knobs. All kinds of reasons that GarageBand is adequate to replace a room full of equipment, if for no other reason than every kid with an iPhone who wants to be a rock star can pretend to put an album together (which I'm sure will be of basement quality).
> A culture raised on overcompressed and pitch fixed productions just won't care about dynamics and good mixing.
And how are they supposed to learn that for less than $200/hour? If only there were free software...
(And don't for a minute think I'm denigrating the importance of quality studio work. It's just not always necessary.)
> A culture raised on overcompressed and pitch fixed productions just won't care about dynamics and good mixing.
I think you are harming your own argument here. A lot of expensive gear and studio time is what created the over-produced, over-compressed, pitch-fixed productions you deplore.
In the less-is-more spirit, a four-track tape machine is all one needs to make a good demo (not even that, really). GarageBand is more than enough for a lot of music - in most cases, problem exists (and is fixed) between the touchscreen and chair.
Now, personally, I haven't found any iOS music software that I'd make music with - but not because of its technical capabilities; it's the UI that gets in the way (and lack of VSTs that I love). But I can absolutely see how great music can be recorded on this platform alone.
There are so many musicians with racks of very expensive equipment who mix and produce in the world's best studios, yet their music is shit (in this beholder's eye).
And then you have a 15-year old putting an amazing song on SoundCloud, with atrocious mixing and mastering.
That was the whole idea with the Ubuntu phone -- convergence. I don't know that it will ever come to pass. No matter how powerful the thing in your pocket is, I have a box next to my desk that can hold 50 of them. And we're always going to find ways to use more power.
> No matter how powerful the thing in your pocket is, I have a box next to my desk that can hold 50 of them. And we're always going to find ways to use more power.
Yeah, but convergence could mean transferring a memory image to the dock's larger RAM, transferring control to a beefier processor in the dock, and then halting the phone's CPU. In a docked state, the only phone hardware in active use would be its storage.
Convergence should be more about seamlessly transferring state from one device to another than using one device with different peripherals for all things.
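As a toy sketch of "convergence as state transfer" (all names here are hypothetical, and a real system would checkpoint a full memory image, CRIU-style, rather than hand-picked state):

    # Hypothetical sketch: phone checkpoints an app's state, dock resumes it.
    # Real convergence would snapshot RAM, not a hand-built dict.
    import json
    import socket

    def checkpoint(state: dict) -> bytes:
        """Serialize app state for transfer."""
        return json.dumps(state).encode()

    def handoff(state: dict, host: str, port: int = 9999) -> None:
        """Phone side: push the checkpoint to the beefier device, then halt."""
        with socket.create_connection((host, port)) as s:
            s.sendall(checkpoint(state))

    def resume(port: int = 9999) -> dict:
        """Dock side: accept a checkpoint and carry on where the phone left off."""
        with socket.create_server(("", port)) as srv:
            conn, _ = srv.accept()
            with conn:
                return json.loads(conn.makefile().read())

    # e.g. the dock runs resume(); the phone runs
    # handoff({"doc": "draft.txt", "cursor": 512}, "dock.local")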
I think the idea that Kurzweil and others have put forth is that your device state will be stored in the cloud and the devices themselves will be so unimportant as to just be laying around or available wherever you go. You could just walk up to any terminal or grab a phone from a bin at the grocery store and your biometrics would instantly identify you and you would pick up where you left off.
Note that Samsung was neither the first nor the best to tackle this problem.
Motorola did it arguably "first" with the Atrix and the BIONIC on Android back in 2011. Once Android 4.x hit, they were able to remove the trashy custom Linux it ran and show the Android tablet interface when plugged in that way.
Microsoft, more recently, did a proper PC-on-a-phone implementation with Continuum on Windows 10. Pop a Lumia 950 or an Elite x3 into the dock and you have a Windows PC. But minus legacy apps, of course. This was the "best" implementation I've seen.
The commonality between both of these is that the hardware arrived ahead of the software being able to carry it. Android wasn't ready to support the Atrix, as it didn't even have a tablet UI at the time, and UWP apps weren't developed enough to equate to a "PC" either.
I'm really hoping for a version of macOS that could support a complete embedded iOS in it. Electrically, there is no reason the processing power I need in my computer can't fit in a phone or tablet, but as you correctly point out, the software still hasn't arrived.
(I speak to macOS because I'm an Apple ecosystem guy; the same approach could be taken by Microsoft.)
I do feel like the first company to nail real convergence in this space is going to own the next decade. Apple's definitely been playing with integrating MacOS and iOS better as well. Google's been pushing Android apps into Chrome. While Microsoft's mobile OS is on hiatus, they've been pushing Windows integration with Android.
I’m hoping that the new stuff in Mojave and beyond that brings iOS apps to MacOS will eventually do that. One app with both iOS and macOS interfaces.
iOS interface by default, but connect your phone to a screen and hook up a keyboard & mouse and it switches to desktop mode. Mojave’s dock even works the same as the iPad dock (or vice versa).
My guess is that the first to actually accomplish it will be something like making your Android phone go into a ChromeOS mode, since ChromeOS already has a desktop user base and would require the least work to bring phones up to supporting it.
Microsoft continuum was a flop for lots of reasons.
Windows Mobile 10 was a flop; a niche feature on a niche OS isn't going to sell phones unless it's fabulous.
The 'desktop' experience wasn't a real desktop, it was UWP, and if I recall correctly (??) didn't offer real multitasking with overlapping windows and such. The number of applications available for UWP is not very large, and the number that supported Continuum features was a small fraction of that.
If a desktop environment doesn't have applications, at least it should have a good browser. This had Edge, which might have a great rendering engine, but has a very sluggish frontend and is awful to use.
The obvious next-gen hardware for this (an Intel x86 phone) was cancelled around the time of the release.
None of that has anything to do with whether the core concept is good or not. That there haven't been many other attempts may be more damning. I think the opportunity is really that more and more people only have a phone, not a laptop or desktop; from time to time they could use a larger computing environment. Is there a way to provide that: hook up to a TV/monitor and a keyboard + mouse and get something useful?
I kind of want the same thing with a smart watch. I want my watch to be the entire phone, and then I want to be able to link a handheld phone-like display that I can keep in my pocket.
I find the authors claims odd. Yes, people who used computers for work before everyone had a smart phone and tablets were plentiful still use computers at work. That hasn't changed. But what people use at home for things that are not work has most definitely changed. I know people in their 20s and early 30s who do not have a computer with a keyboard - and if they do, I've never seen them use it at home. In 2008, I would expect most people in their 20s and early 30s to have a laptop at home, that they used for basic internet stuff. I don't expect that anymore.
The author seems to think only about professional activities. But from what I have seen, home consumption and day-to-day activities have moved off of "computers" and on to devices like smart phones and tablets. This is true for me personally: I have a work laptop, but I don't have a functioning one at home anymore. At home, I only use my tablet and smart phone. Years ago, my personal laptop broke, and I realized there was no point in replacing it.
I think this is also why tablet sales have slowed down in recent years -- once you get to about 2013/2014, the benefits gained from buying a new tablet basically leveled off for most users, when the average use case is email, web browsing, watching video, and maybe some light gaming. The same thing happened with cheap laptops in the early 2010s.
From whose perspective? From Steve Jobs's and the consumer's? Of course we are in the post-PC era. There are roughly 300M+ iPads in active use, more than double the number of Mac users, achieved in a much shorter time frame. You want everyone to own a PC? Lots more people are now accessing the internet via mobile.
The post-PC era Steve described never meant mobile would completely replace the PC. He said it himself, in I think the D8 interview: PCs are like trucks; they are useful and won't be gone, but we just need far fewer of them.
And that "steady" PC shipment mentioned? You should thank PC gaming for that. And especially China. At peak there were over 100M Chinese players playing PUBG at the same time.
The problem with tablets right now is that Apple has done little to nothing to make iPad-with-keyboard usage better. The UX for lots of PC tasks, like answering email and Excel, is suboptimal, especially so for those whose jobs are mostly those tasks.
In all seriousness, it depends on how you define post-PC. Some people already use only a phone or tablet as their computing device. Shockingly, we aren't even in the post-mainframe era, as some still use mainframes instead of server racks. And those are actual machines, not virtualized ones, running proven but basically unmaintainable software.
The author is delusional. Post-PC doesn't mean "buy iPads". The nail in the coffin of PCs was placed with the 2006 release of EC2 and hammered in by the 2007 iPhone release.
PCs today are a dead-end legacy technology. All of the PC technologies on the edge (things like enterprise laptops, POS terminals, etc) are being replaced by mobile or virtual devices. Companies are getting sick of the IT business and are increasingly allowing BYO devices. The legacy Windows stuff is going to get delivered with remote apps, virtual desktop, etc. It's already happening.
How do you justify spending $750-1,000 every 36 months on device replacement when you can spend $20/month for Amazon WorkSpaces and buy a cheap thin client every 8 years, or have employees BYO?
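Taking those numbers at face value (the thin-client price below is an added assumption, not from the comment), the rough monthly math looks like this:

    # Back-of-the-envelope comparison using the figures quoted above.
    # The $200 thin-client price is an assumption added for illustration.
    device_cost     = 1000    # high end of the quoted $750-1,000 laptop
    months          = 36      # quoted replacement cycle
    workspaces_rate = 20      # quoted $/month for a hosted desktop
    thin_client     = 200     # assumed thin-client price, replaced every 8 years

    laptop_monthly = device_cost / months                        # ~$27.78/month
    hosted_monthly = workspaces_rate + thin_client / (8 * 12)    # ~$22.08/month
    print(f"laptop ${laptop_monthly:.2f}/mo vs hosted ${hosted_monthly:.2f}/mo")

Of course that ignores bandwidth, licensing, and which WorkSpaces bundle you'd actually need, but the order of magnitude is the point.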
If we weren't in the post-PC era, Microsoft wouldn't be porting their entire stack of Office products into web delivery platforms. When was the last time you landed a new fat-client application at work?
On the one hand the article is the typical US-centric nonsense. Anybody who's actually observed how the "other 7.3 billion" live would grasp very well that not only are we post-PC, but that on a deeper level there never really was a PC era. The PC was a kind of aberration, a 40-year blip, an unfortunate accident that only arose due to a bizarre confluence of events like disco music and Texas.
Global corporations like Google and Microsoft are relentlessly mobile-driven because they don't have the luxury of naively focusing on a single market. And when you look at the fastest growing tech companies in the world, from Instagram to WeChat to gaming, what you are seeing is a landscape where virtually all speculative capital is invested in mobile.
That said, the PC as a platform is seriously under-utilized. The PC could be so much more but, rationally, nobody is ever going to invest in a powerful PC if all they need to do is run chat and watch videos and fill out forms. This is the irony at the heart of the growing division between the technology masters and those who serve technology: the people who create the powerful mobile platforms and ever-more sophisticated algorithms are all using PCs.
Somewhere, underneath all the spreadsheets for moving money around, and underneath the middle managers writing memos in Word, are the actual workers who read the memos and do the work that goes into the spreadsheets. The workstation market was the first big application of one-user computers, and will probably be their last foothold.
Reading memos is best done on a tablet or smartphone. Doing the work depends on the kind of work: only content creators and programmers need a full PC; everyone else, working in the non-digital physical world, doesn't.
I see no aspect of reading memos that is better on a tablet or smartphone than it is on a PC. However, I do see some ways in which it is inferior on a smartphone - screen size in particular. So "best done on a tablet or smartphone" seems like a tremendous overstatement at best, and flat-out wrong at worst.
The phrase "Post-PC" seems overloaded. Is convertible tablet/laptop a PC? Is a chromebook a PC?
I'd say about 3/4 of my working days right now I use my laptop just for SSH and a web browser. The big distinctions for me between a machine that I could use most days and one that I couldn't are the keyboard, and the general fit and finish of the hardware.
From a Microsoft definition of "Post-PC", where PC means wintel, that's very Post-PC. From the perspective that I think Steve Jobs was coming from, I'm less sure. I haven't looked closely into iPad keyboards, but I only know of one professional coder that uses an iPad, and his workflow looked pretty painful to me.
If I stuck you with some insane compliance requirements and forced you to deal with a 12" iPad with keyboard, you'd be OK in that workflow, which is one of the more challenging iPad workflows.
Typical consumer and business task-user workflows are easier on iPad or Android tablets.
From a business POV, the modern stuff is set and forget. It's really difficult with iOS in particular to really fubar your devices.