The whole "everything in the cloud", "let someone else worry about storage" view, at least the way he promotes it, is more like a centralized VCS than, say, Git (or pick another DVCS)... you have a conceptual "central" point of failure, even if this "point" is a network of servers distributed around the world.
I want STORAGE ON EVERY DEVICE (not volatile!), and an automatic system to sync it with all my other devices, WITHOUT NEEDING THE CLOUD: just set up an ad-hoc mesh network and sync everything (yeah, there are going to be things like "merges" for OS settings and music collection changes, but I can live with that). The "cloud" should be just infrastructure, nothing else added, and I shouldn't be disrupted when my connection to it fails... "Always connected"? No, no, no, I'll always want to be able to work offline and to sync/merge/push/pull even my OS, its settings and software (and to "branch" my setup and keep multiple versions of software and all that).
The DVCS should be the model for how to do everything in the cloud, with simpler interfaces for different levels of user needs/competency.
Rob Pike's ideal of "homogeneity" in computing really misses the distinction between distributed and central syncing, the security and reliability implications, etc. ...and large local storage capacity and "enough" computing power on all devices are needed for this. I'd rather be "part of the mesh" than "connected to the cloud mesh", because I think the distinctions are important and they require different things from "client devices" (all devices should be "clients"! no servers "in the cloud" for me please!)
Frankly, I can't understand how you can criticize Pike for that. That's exactly what he wants:
> When I was on Plan 9, everything was connected and uniform. Now everything isn't connected, just connected to the cloud, which isn't the same thing.
By the way, try Joey Hess's git-annex, now with the assistant. It's just that: a sync and merge system for all your files, which can automatically sync up just about anything (computers, USB drives, cloud services, etc.) and doesn't rely on central services (the assistant uses XMPP to broadcast messages between clients, which then connect directly to each other).
...but then he goes on with "no disks, no state[…] entirely in the network". What I meant was that you need "thicker" clients to make this work, because a truly "thin" client implies being tied to a cloud rather than being part of one, and that's what I was arguing: he conflates two "cloud views" by not considering the implications of really thin clients like the ones he wants.
I love git-annex. It is worth pointing out that the XMPP notification system is still under heavy development. I don't want to turn people away from git-annex, but I also don't want people to get their hopes up for a feature that is not complete yet: Joey described the XMPP work in his most recent blog entry and said it is "really the last big road-bump to making it be able to sync computers across the big, bad internet."
This basically describes my use of Dropbox. Anything important to me that's not code is stored in Dropbox. Every computer I have Dropbox on, and which has 100GB free, has a copy of those files, so if Dropbox vanished I would still have a replicated copy of my files on one of my machines.
I never have to back up those files because they're replicated to any running machine with Dropbox installed. One of those machines has my Dropbox share on a Drobo, so I'd need a catastrophic failure of my Drobo disks, Dropbox and my laptop in order to lose those files.
Dropbox is software, and like any software, it has bugs. One of those bugs could easily result in "delete this file and then propagate the deletion to every client". Granted, Dropbox has some rudimentary versioning, so it's pretty likely that you'd be able to recover, but still, I wouldn't advise anyone to use Dropbox sync exclusively as a backup solution.
Well said. Happens to me every couple of months, that I know of. Heck Dropbox might have just deleted a file from all my computers right now, how would I know? Last time it happened the deleted files didn't even show up in the Dropbox event log, but they did show up as deleted files. The only way I know it happens is when a build fails -- shame I don't build my documents. I wish I'd never started using Dropbox, but now I'm too busy to switch my workflow off of it.
> I never have to backup those files because they're replicated to any running machine with Dropbox installed.
You really should have a separate, actual backup of your data, though. What would happen if your Dropbox data were somehow deleted? Wouldn't the deletion also be replicated to all your synced devices?
...I know, we geeks have this already, as we use a DVCS for all sorts of stuff other than code too (like my text editor's settings, etc.)
...but most other non-IT professionals using computers (doctors, physicists, most non-software engineers, etc.) don't do this and are pretty far from using things this way ...they are more likely to be "fished into" cloud solutions that will leave them "hanging from the cloud" (someone else's cloud) than to actually be "part of a/the mesh/cloud" ...it's easy for us to forget how things work for "the rest of the world" (and I don't mean computer-illiterate or dumb people, just smart people who don't happen to be neck deep in coding or other IT-specific activity...)
I agree. For me, the cloud is just another backup store, as it surely isn't - and never will be - fully reliable. Sure, it's convenient that you can access your data from any device, anywhere, but the price of not having direct control over your data, and of relying on an internet connection to access it, disqualifies the cloud as the "one and only" storage for me.
Slight critique here - I feel like he's using this to push an agenda rather than just telling us in detail what tools and software he uses to get his job done.
He mentions a few things but in far less detail than anyone else I've read on usesthis. Instead he gives paragraph after paragraph of "world view" about moving things to the cloud...interesting, yes, but not what I'm after when I read usesthis posts... :-/
He's not pushing for a move to the cloud. He actually did criticize the way we're building the cloud today:
> "This is 2012 and we're still stitching together little microcomputers with HTTPS and ssh and calling it revolutionary. I sorely miss the unified system view of the world we had at Bell Labs, and the way things are going that seems unlikely to come back any time soon."
To understand what he's talking about, keep in mind that Pike was a lead engineer on Plan 9, a project aimed at creating the exact environment he's describing. And although the project never left the lab, it did affect the lives of the scientists working there. There are many reasons why Plan 9 never went mainstream, but one thing is sure: technical inferiority was not one of them.
I don't think Pike is pushing for any agenda. I tried Plan9, and I see what he's talking about. Lack of widespread support and the relatively small number of available apps make it difficult to be the "primary" OS I use, but the design concepts like "everything is a file", union mounts or 9P are amazing and work like he describes.
When a world-renowned OS scientist is asked about his "ideal setup", what answer would you expect? "I wish I could get the latest version of Mac OS X and the newest chip by Nvidia"??
The problem with asking Unix users about their tools is that you'll get a long list, and the interesting part is more about how they connect them. While I personally would be interested in a long treatise on how he combines shell scripts and plumbing actions in acme to get his coding/documenting/project management work done, that is probably a bit too long (and maybe too technical) for a usesthis post.
It's certainly easier with your average Mac user's "I write my code in TextMate, use Things for GTD and manage my foodie photo library in Sepia Extreme".
If it had all of those things, it wouldn't really be Plan 9 any more.
Grab a decent Linux box, shove it under your desk, and use VNC or X forwarding to display those programs on your Plan 9 terminal. You can keep your storage on a Plan 9 file server, then have your Linux box mount that (9p support is in Linux).
Or, if you're willing to put up with some slightly outdated software, Linuxemu can be a decent choice.
It's doable. I did an awful lot of my graduate work sitting at a Plan 9 terminal, from writing code to just browsing the web with Opera.
A separate, contained server to run programs that can't talk to anything else is sort of against the whole Plan 9 idea; it's basically what Pike is complaining about here. All you did was turn the Plan 9 box into a dumb terminal and a NAS. Why would I do that when I could ditch the Plan 9 box and save money by just having the Linux machine, which apparently was needed for running everything in the first place?
The idea of Plan 9 was never that no user applications would run on it.
I imagine that Dr Pike spends most of the working day in the text editor that he wrote. He stated that he adds as little as possible to a stock Mac OS X install so that he can change the computer he uses easily.
The worldview bit was interesting because Dr Pike regards the way we access computing power as important and that the modes of access we have available may influence our lives. I think that he is right, but I prefer to have control of my stuff, so I tend towards synchronised local storage.
I liked it. It's nice to see a guy's thoughts on how he thinks computing should be, especially if it's someone like Rob Pike. (Better than the usual stream of Macbook-Pro-with-24"-Cinema-Display, which is what everyone seems to be using.)
My gut reaction was the same, I'm more interested in the technical solutions the experts have come up with. The cloud idea is interesting, but I've already got git and vim everywhere so it's not a radical idea for me. I wish he had gone into more detail regarding acme, I bet that would be fascinating.
I can't get behind the idea of every computer I interact with being a dumb terminal. I don't want to assume I'll always have a connection to the mother machine. Even in some ideal world where I always have a high speed connection that never fails, can you guarantee that the server itself won't go down? Not really.
I would like to see a nice balance: I basically work locally, and anything I do is synced ASAP to "the cloud" and thence to other devices (and other people). But if I'm in the hills or the server goes down, hey, I still have a perfectly good computer in my pocket.
I agree. In addition to the concerns you brought up, I see other issues with Pike's proposal. If computers are infrastructure, I'm forced to deal with whatever computers others have. This is a problem, because most people don't use computers as much as I do and aren't willing to spend as much on them. I'd rather carry a maxed-out 11" Air everywhere than put up with my friends' and relatives' computers. (In fact, I do this.)
Pike doesn't address security either. Would you log into Gmail on a public computer? Would you enter your credit card info or visit your bank's website? I sure wouldn't. Phones from 20 years ago didn't have any storage or computational abilities, so these security concerns didn't exist for them.
While I'm a huge fan of cloud-based storage and synchronization, there are just too many issues with using untrusted computers everywhere I go. Maybe it was due to the interview format, but I'm surprised such an intelligent, technical thinker managed to avoid discussing the downsides of his proposal.
As you noticed yourself, the "dumber" the devices are, the more secure they will be. I'd happily log on to my e-mail account from a device that (I knew) booted an OS image from ROM just before I used it (comparable to those live CD-ROMs for secure applications like e-banking).
Let's go back to burning ROMs for OS upgrades instead of flashing/storing them on disk! People can replace batteries, so they shouldn't have trouble replacing an OS ROM...
Batteries aren't static sensitive, and ROMs aren't available in gigabytes, but apart from that there is a certain appeal in a more secure OS environment.
It's a shame that (for example) Windows doesn't make it easier to make the entire system partition read-only, but applications are traditionally so abysmally poor at handling "no you can't write THERE" errors (yes, Photoshop, I'm looking at you) that most people gave up. UNIXes are better, but still at the mercy of morons (Acrobat has managed to drop files called "C:\nppdf32Log\debuglog.txt" all over my home directory, so it's probably just as well it's too fucking stupid to realise it's running on RHEL, otherwise it'd be asking for root all the time instead).
True ROM chips are by definition available in sizes slightly larger than similarly priced DRAM or SLC flash; mask ROM is probably the cheapest memory to manufacture. And you would be surprised how many megabytes of ROM are contained in Android/iOS devices (baseband code is often in mask ROM on-chip; modern SoCs tend to have some kind of support core with code in mask ROM, and also some ARM-accessible mask ROM with things like a "low-level" bootloader, if you count reading files from a filesystem on MMC and relocating ELFs as "low-level"). Even desktop PCs contain a significant amount of mask ROM (CPU microcode, sometimes embedded-controller code, microcode for various other chips, probably bootloaders for HDD controllers). Bottom line: flash is still more expensive than mask ROM, given enough volume (and by definition always will be).
I don't think these are widely available... Practical solutions for Windows would probably be DVD-ROMs or read-only NAS filesystems, or (for reasonable speed) SSDs, preferably PCIe or Mini-PCIe with a hardware write-protect switch.
"...can you guarantee that the server itself won't go down?" No, but neither can I guarantee that the hard drive in that Macbook you've been abusing for the last 3 years won't suddenly fail and wipe out your work.
One nice compromise that I'm sure Pike has used is to have your "dumb terminal" be a laptop with Plan 9 installed. When you boot a Plan 9 terminal, you can tell it to get the root from the network or from the local disk. If you select one, though, it's quite easy to also make the other available, and sync your data back and forth.
Of course not, that's why I want both. Having to choose which version of my files to use is annoying, manually syncing or having to write/install something to do it automatically is worse. You've definitely lost "the normals" at that point. No, just proactively sync my changes.
You missed the part where he qualifies almost every statement about getting rid of the local disk with, "except for caching". This implies that he's not talking about a dumb terminal.
If your device has frequent (not necessarily constant) internet access this is essentially the same thing as having many regular computers, since your cache can allow you to work offline just like normal. Like git, you would just work on the local "cache", and then sync with other devices when you need to switch or regain internet access.
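The git-style "work on the local cache, sync later" model described above can be made concrete with a toy sketch. This is only an illustration using plain git (not git-annex or any cloud service); the directory names `laptop` and `desktop` are made-up stand-ins for two devices.

```shell
#!/bin/sh
# Toy sketch: two directories stand in for two devices. Each clone is a
# complete, offline-usable working copy; "syncing" is an explicit transfer.
set -e
work=$(mktemp -d)
cd "$work"

git init -q laptop                      # "device" one
cd laptop
git config user.email demo@example.com  # throwaway identity for the demo
git config user.name  demo
echo "draft written offline" > notes.txt
git add notes.txt
git commit -qm "work done with no connectivity"
cd ..

git clone -q laptop desktop             # "device" two syncs when back online
cat desktop/notes.txt                   # prints: draft written offline
```

Nothing here needs a server: each replica holds full history, and divergent edits on the two "devices" would surface as an ordinary merge at sync time, which is exactly the offline-first behaviour being argued for.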
I didn't understand his use of the word "caching" to mean the same thing as a local git repository, but rather a simple latency optimization; SVN has a "cache" too. He also states that most if not all real computation takes place off the device, on the server. If that's not the definition of a dumb terminal, what is?
If I misunderstood him, well, good, I'm glad we're not so far apart. But I don't think so.
AT&T did a very good job making the phone system widely available and phone equipment very, very reliable. I'm sure they would have done the same for the Internet, given the chance.
Obviously things didn't work out that way, and now it's actually more reliable to carry around expensive and rather-easily-breakable computers with you. It's kind of surprising if you think about just how fragile a laptop is.
> When I left work and went home, I could pick up where I left off, pretty much. My dream setup would drop the "pretty much" qualification from that.
For me, Dropbox has brought me that.
I still use Github and the like, but my computers now share such a similar setup (all Linux, install Go, install Dropbox, install Sublime Text 2, done) that I can walk out of the office without doing anything special to my machine, go home and pick up literally where I left off.
My git repositories are cloned into my Dropbox folders so that when I move from one place to another but am not ready to check in (local branch in state of flux) I still have that in multiple locations.
As Sublime Text 2 stores the project and file info in a plain text file, that state comes with me too.
My $GOROOT is also in my Dropbox folder, so if I've grabbed something via "go get" that also follows me around.
I view Dropbox as an ever present working cache, not as storage. Things like documents are in Google Drive and accessed via the browser.
On Friday I went to a meeting at 3pm that I thought would just be 20 minutes. It turned out that it took 3 hours, and I hadn't closed ST2 or anything I was working on... no problem, I went home instead of back to the office and my work was exactly where I left off with the same files open in ST2.
I think the only things that don't follow me around are the undo buffers in ST2.
There is an important difference, though. You are sharing your data, but you are not sharing your programs and their current setup. That is, if you forget to save your file, it won't be on Dropbox. Plan 9 is way more persistent here, as it has a lot of tools which can be brought back to the state you left them in on one machine.
It is awfully nice to have a persistent environment. Dropbox is definitely partway towards that goal, but it doesn't hammer in the nail fully.
If you forget to save your file, it won't be on your Plan 9 file server either. The only program which has the state persistence you describe is acme which can dump its state to a file and load it again later, but most every editor on every operating system can do that. And you have to run the Dump command manually, so if you forget, you lose your state the same as you would if you forgot to save before unplugging your terminal.
In OS X Lion, Apple released Auto Save and Versions, which do exactly this. Except that only a tiny number of the applications I use in my daily development life support them. But in casual use, it's actually quite nice. If an application implements the whole set of Lion auto-saving/state APIs, you don't even have to save files for them to remain available.
I can write multiple unsaved documents in TextEdit, close TextEdit, and all the documents open up in their unsaved state when I reopen it. It's quite nice. But having these features requires too much of a change to the workflow people are used to, and the la-la land breaks down as soon as the user hits an application that does not implement the system and loses their data (by clicking "don't save" or by assuming that everything is recovered after a crash).
Rob Pike doesn't want a dumb terminal. He wants a powerful new infrastructure.
> My dream setup, then, is a computing world where I don't have to carry at least three computers - laptop, tablet, phone, not even counting cameras and iPod and other oddments - around with me in order to function in the modern world.
I can relate to what he wants: he cares about his data, not about what his data is on. Every time I upgrade machines or move to a different machine, I have to either reconstruct my environment or tolerate an absence of some data that would be nice to have. How amazing would it be if I could use any 'terminal' anywhere and have complete access to all of my personal data, without having to tote around a physical piece of hardware that's 'mine'.
Any geek's 'bat-cave' is testament to this need: a Mac Plus sitting underneath a table next to a Commodore 64. On top of the table lies a 486 DX/2 PC with Super VGA and a SoundBlaster-compatible card decaying inside. Everywhere, strings of SCSI, RS-232 and Ethernet spaghetti encircle cases of floppies (both 5.25" and 3.5"). Where's the data? Anything precious has made its way through different formats to whatever you're on now. Everything else is slowly rotting away.
Outsourcing the bat-cave to this ubiquitous infrastructure seems much more appetizing to me. Plus it leaves me room for my 1st- and 2nd-gen Transformer collection.
> I want no local storage anywhere near me other than maybe caches. No disks, no state, my world entirely in the network. Storage needs to be backed up and maintained, which should be someone else's problem, one I'm happy to pay to have them solve. Also, storage on one machine means that machine is different from another machine.
I'm always shocked that this hasn't happened faster. I've expected Dropbox, Amazon, Google and Apple to move into this space more aggressively, but at best they've all only just scraped the surface of what's possible.
But Google has moved into that space! The general populace just didn't care too much.
Chromebooks do exactly what he's describing right off the bat.
Every Chromebook you own dies in a fire? Unbox a new one, boot up, everything is identical.
I think part of the reason for this is that, just as nobody wants to think about things like death, NOBODY wants to think about data loss and prevention. It's like death, but harder to understand, and most common folk aren't even aware that there are options in this sphere.
Even though a lot of people just use Facebook, email, IM and word processing, it's going to be hard to convince them that something like a Chromebook is a really good idea, because the reasons it's technically sweet are lost on them.
> Every Chromebook you own dies in a fire? Unbox a new one, boot up, everything is identical.
Unless, of course, some algorithm at Google decides that you are a bot and/or spammer. At that point, all your Chromebooks are useless and all your data is gone, without any way for you to get it back.
See the Amazon Kindle post from a couple of days ago.
Don't get me wrong. In general I love the idea of having my data somewhere in the cloud where somebody else is working on keeping it safe. But in case of emergencies, I would love to have somebody to talk to who is willing (and able) to help me.
Unfortunately all services coming close to this vision are too big to be able to afford any customer support it seems.
It is really a shame, bordering on suspicious, that Google Takeout doesn't include any approximately realtime sync APIs for maintaining an off-cloud copy of your life. Even at Microsoft's worst proprietary height, they could never delete your data.
Why should you have to do that? Why can't google maintain proper backups? They could charge for it.
Of course, that means they would need to guarantee they will never, ever, cut you off from your data, not even if you defraud them. Under EU law they basically have a legal obligation to do that, but sadly they don't feel compelled to comply with their legal obligations without a court ordering them to.
By the way, if you've never had Microsoft lose your data for you, you haven't been using their software for very long, or you've been very, very lucky. Admittedly, when Microsoft did it, they didn't mean to. Google and Amazon mean to.
Setting up an email or IM client to save your emails and IMs is pretty easy. Google Drive has a desktop client that syncs some documents in real time (although it excludes documents made in Google Docs itself). You can use Picasa to sync photos from Instant Upload (which is inexplicably not integrated with Google Drive yet). I think that just leaves music... did I miss anything?
This may be a matter of semantics but it seemed to me he pretty clearly avoided the word "cloud" in favor of "the network".
Based on his post it seems like he's looking for something that's a bit more robust from an infrastructure standpoint than what most cloud offerings are today. Eg, not "stitching together little microcomputers with HTTPS and ssh".
That appears to be one reason he's not buying that people have moved into the space.
I think the Chromebook missed the boat for consumers. If it had a 128GB SSD that automatically synced to Google Drive, it would be useful. As it is, it just doesn't work for photo, video and music editing, or for HD movie viewing on the road.
In its current configuration, it's an anemic iPad with a keyboard.
First Order of Business: The thought police have to weed out the service providers that won't play ball. We can't have any rogue independent thought enablers like Kim Dotcom floating around.
So for now, while there are enough bit players and small-time shops floating around, people are still wary about losing their data to fly-by-night operations.
Unreliable SSDs and Stuxnet-infected flash drives have shaken user confidence in personal storage, but not enough. And it still doesn't seem possible to create enough doubt in HDDs while selling the con job of cloud storage. Also, there are enough data breaches floating around, but most people just shrug, and whether they understand what it means, or even care, is hard to discern...
Anyway, once all the captains of industry are on board, with their poster children like Pike parroting the party line, and when all the "OMG LYFETIEM DATA GUARANTEES" seem more reliable than the normal hard drive warranties available to the common prole, it'll finally be possible to memory-hole the fuck out of anyone who steps out of line. (I'm looking at you, Mr. Assange)
C'mon man. The name of the game is "Boiling Frogs". It has to be done slowly, and carefully. I shouldn't have to explain this.
Oh man, I almost forgot we were SHEEPLE. *wipes tear from eye* Thanks for that.
In all seriousness, heavily encrypted, anonymous data blocks that people just "back up" on the net (I'm not advancing to the word "cloud", sorry) are a pretty good idea. Trusting Google or anyone else to protect your raw data forever is probably not a good bet.
> I want no local storage anywhere near me other than maybe caches. No disks, no state, my world entirely in the network. Storage needs to be backed up and maintained, which should be someone else's problem, one I'm happy to pay to have them solve. Also, storage on one machine means that machine is different from another machine. At Bell Labs we worked in the Unix Room, which had a bunch of machines we called "terminals". Latterly these were mostly PCs, but the key point is that we didn't use their disks for anything except caching. The terminal was a computer but we didn't compute on it; computing was done in the computer center. The terminal, even though it had a nice color screen and mouse and network and all that, was just a portal to the real computers in the back. When I left work and went home, I could pick up where I left off, pretty much. My dream setup would drop the "pretty much" qualification from that.
Interesting that someone so steeped in the "old ways" of Unix dumb terminals is also, seemingly, such a good matchup for the "far future" vision of Chrome OS. What's old is new again?
Rob Pike was one of the earliest advocates __against__ the use of traditional "dumb" terminals (including the "smart" ones, e.g., DEC's VT-100+ line). Early on, he developed Blit and brought the mouse to Unix. Later, he helped develop Plan 9, an operating system that's far more advanced (in some respects) than what we have today.
He is not and has never been "steeped in the 'old ways'".
"This is 2012 and we're still stitching together little microcomputers with HTTPS and ssh and calling it revolutionary."
"In summary, it used to be that phones worked without you having to carry them around, but computers only worked if you did carry one around with you. The solution to this inconsistency was to break the way phones worked rather than fix the way computers work."
You could only call, not get called, unless you left precise descriptions of your whereabouts. Sometimes you had to wait a long time until a phone was free. Having a phone on you all the time completely changed personal communication for the better. The idea that the way phones worked before mobile was better is so inane that I can't believe it's coming from a scientist.
What is better now with cellphones than before? Why is personal communication so much better now that we can be reached anywhere? I think it's a legitimate question.
In most cases, people indeed want to be reachable, but is it really a good thing? In the long run, does it make the society a better place, and people better people? How can we measure the impact of the cellphone on society?
I remember waiting in line for up to half an hour every day while on vacation to call my parents back home, because otherwise I wouldn't know if something important had happened and they would assume I was dead.
And, hey, did you remember buying a new card for payphones? Those things used to demagnetise at the speed of light.
I also remember the one time I locked myself out of my home and had to walk 2km to get the spare set of keys from a friend (let's just hope he's there).
Yes I think overall it's really a good thing.
The issues you're describing have been solved by cellphones, but that doesn't mean it was the only solution. More payphones and more reliable phone cards (in France we used smartcards for that) would have done the trick.
That's interesting, but what would be the alternative?
Rob Pike's dream setup "is a computing world where I don't have to carry at least three computers - laptop, tablet, phone, not even counting cameras and iPod and other oddments - around with me in order to function in the modern world. The world should provide me my computing environment and maintain it for me and make it available everywhere. If this were done right, my life would become much simpler and so could yours."
In this situation, there would be public phones, public computers available in sufficient quantity everywhere. Think of the necessary amount of work to install, maintain, and upgrade such a setup. In a capitalist society, there would be different brands providing access to their own solutions, and thus a lot of Blackberry, Apple, or <your favorite carrier> kiosks all over the place.
You might think that with such a scheme, costs would prohibit the rapid evolution of services that we witness in current business models. Would we get the iKiosks updated each year with new firmware, or the latest multitouch screens? In comparison, how much money is being poured by individuals into upgrades for their personal stuff? What is the overall cost (i.e. money that could be spent on something else)?
There's the real estate issue also. How much space would be consumed by those kiosks, and how much would it cost?
It would be interesting to estimate all the pros and cons of that dream setup, and compare it with our current situation.
The closest thing I can imagine to what he wants is something like a "t+15 years" version of the Motorola WebTop (and many other similar ideas). You carry a powerful processor with you and either use it "bare" or connect it to a standardized interface setup. You'll never get enough of those kiosks to completely get to what you described, but when your phone can be a perfectly capable standalone device as well, that becomes less important. You have the "docks" at home, work, and a few popular public places like airports, coffee shops, etc.
The idea of something like these dockable phones is almost certainly going to be a big part of where the industry is headed. It's just too obvious not to be. But it requires advances in both ubiquitous broadband availability and power and battery life available to mobile devices, so it's just not feasible right now.
But, in defense of local storage: if you wanted to use someone else's phone, you had to remember the phone number of who you were calling. This may or may not have been a problem. My mom always carried around an address book with everyone's contact info. Sure, she could call anyone from any phone, but she still needed some local storage with her.
The mousing is what adds a crazy amount of power. You end up with dozens of files on screen, some showing text, some just displayed as the "tag" line. You use the mouse to rapidly jump around them, select text from one, paste into another, etc.
The mouse is a pointing device, why not use it to point at things? If I have 3 columns in my acme window (which I usually do) and I'm working in the 3rd file down in the left-most column, it's faster to get to the top file in the right-most column by just grabbing the mouse and clicking than by futzing with key combos.
You make a good case for acme; I'd be willing to try it. But I'm not convinced that in any context grabbing the mouse (with or without chording) is faster than keying, esp. since in vim many of the most important navigational keys are on the home row.
You really need to try it; if nothing else it will broaden your experience so you can compare vi vs. emacs vs. acme. Myself, I watch vim users spend so much time just trying to get their cursors to the appropriate part of the file/line, and then select the appropriate text, that I scratch my head: this is faster? By the time you've figured out that typing "/whatev" should be sufficient to get you to the point you want, I could have grabbed the mouse, simply pointed it at the location I'm interested in, and had my hands back on the keyboard already. Don't get me wrong, I also use vi a lot because it's very convenient and very powerful, but there are some things I'm much happier with in Acme.
I clicked the link seeing the high point count and thinking I might find some gems of productivity I could pull into my own daily routine, but instead I found some discussion of clouds and terminals and other things that are philosophically interesting but have already been argued and discussed ad nauseam.
The article is tiny and talks abstractly about something that everyone has an opinion on, so of course it has racked up lots of votes.
I like how his photo matches the colors of the blog design (pink & grey), obviously on purpose. This tells me that he pays attention to detail. Not every Unix hacker would change his clothes for some random interview on the web.
I don't think his phone system analogy is correct. Yes, you can pick up any phone and make a call, but you can't receive a call to 'your' number from any phone. i.e. there is still state associated with the phone network, and landlines are not portable in the same sense that mobiles and laptops are.
Some of us do not like the cloud as much as Rob Pike because we are worried about censorship and corporate misconduct (what do you do if your stuff is suddenly gone one day?).
The "always accessible" and "continue where you left off" paradigms are ones we can still relate to, though: I've been using screen in ssh / putty windows for almost 20 years now, and I used VNC for some time (I even wrote a 16-bit client for DOS that ran off a floppy disk with 2MB of RAM...) for the same purpose.
Something like VNC but with a "responsive" UI that adapts to the device currently used (tablet, laptop, desktop) while still retaining all the state needed for the user to continue where he left off, that'd be something novel and useful after all these years.
Personally, I switched to a tiled WM because I didn't want to go back and forth between keyboard and mouse all the time. I want an editor which I can handle using keyboard alone, and only use the mouse when I really have to (web, drawing, games - perhaps I should use a trackball).
Acme relies too much on the mouse for me, but I like the idea a lot.
There's another couple of issues: syntax highlighting, large scale refactoring (which is not easy either with vi or emacs atm).
However, once you get accustomed to a certain workflow, it becomes difficult to accept something new. Maybe I should give it a shot.
I'm not sure what you mean by large scale refactoring, but refactoring is much easier in acme compared to any editor I used before because of the powerful sam command language and structural regular expressions.
I mean for instance changing the name of a value across multiple files, or fixing calls of a function whose prototype had been modified. That sort of things.
There are other services that an IDE can provide:
- name completion
- list of variables in current scope
- list of visible types
- access to documentation
- code annotation
All these could be implemented in external tools, but how snappy would the interface be?
The filter paradigm of Unix is great for batch processing, but fails when used interactively, unless processes are lightened to the point of becoming almost like threads within the same process, with matching communication speed (inter-process communication using pipes isn't really that fast compared to shared memory).
That said, maybe large scale programs are the problem to begin with. With simple, one task focused programs, source code becomes easy to maintain without resorting to the services I listed above. It's not easy to get rid of them though (consider compilers for instance).
> I mean for instance changing the name of a value across multiple files, or fixing calls of a function whose prototype had been modified.
This is trivial in sam and acme because of the sam command language and structural regular expressions. See the links in my previous post. In fact not only that it's easy, I haven't yet found any alternative environment where the operation is so powerful. A few days ago I used sam to extract all structs in the Go tree that use a map but not a mutex, or structs that use the first ones and don't have a mutex either. I only cared about a select/find operation, but I could have paired it with a modify operation that could have added the mutex, for example. The scope and power of the language is unmatched.
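For concreteness, the cross-file rename from the question above is essentially a one-liner in the sam command language (identifier names here are made up for illustration; in acme you'd prefix it with Edit):

```
X ,x/oldName/c/newName/
```

X loops over every file loaded in the session, `,` selects the whole of each file, `x/oldName/` extracts each match, and `c/newName/` changes it in place. The same x/c machinery composes with g (guard) and y (complement) commands, which is where the "unmatched power" comes from.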
> name completion
Personally I think it's worthless (and so do most acme users I know) but it's easy to implement as an external program. Jason Catena did it in one of his Inferno labs.
> list of variables in current scope
One sam command away.
> list of visible types
Same, or use Go.
> access to documentation
Right click on identifiers if you have plumber configured.
> code annotation
No idea what this is.
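On the documentation point above: the right-click behaviour comes from rules in the plumber's configuration, normally $HOME/lib/plumbing. A simplified, illustrative rule (the stock rules are considerably more elaborate; see plumb(7) for the real syntax and defaults) might look like:

```
# send anything that looks like a C or Go source file to the editor
type is text
data matches '[a-zA-Z0-9_\-./]+\.(c|h|go)'
arg isfile $0
plumb to edit
plumb client $editor
```

Each right-click sends the selected text to the plumber, which tries its rules in order and dispatches to whatever port matches first.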
> inter process communication using pipes isn't really that fast compared to shared memory
Actually, it is. There are very few programs in the world where pipe throughput is not enough.
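A rough sanity check (not a benchmark) supporting that claim: push a quarter gigabyte through a pipe and count the bytes on the other end. On ordinary hardware this finishes almost instantly, which puts pipe throughput at hundreds of MB/s or better.

```shell
# write 4096 blocks of 64 KiB (256 MiB total) into a pipe and
# count the bytes received on the reading side
dd if=/dev/zero bs=64k count=4096 2>/dev/null | wc -c   # prints 268435456
```

Real workloads with small reads and writes pay more per-syscall overhead than this, but for text-sized interactive traffic the pipe is nowhere near the bottleneck.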
I'd like to try.
I don't program in Go btw.
> Actually, it is. There are very few programs in the world where pipe throughput is not enough.
I am very surprised by this. I always thought that programs made of threads were much faster than groups of processes, which have to communicate through OS channels rather than a common memory space. Maybe I wasn't considering the fact that interactive programs have a lot of time to spare.
Usually, I expect my interactive programs to be snappy when I ask them something, even if I don't do it often at the CPU timescale. That's the real catch in user interaction.
Many of the things he says I agree with, others I don't. I think that's the thing with computing. Everyone's tastes are at least slightly different. We don't all want exactly the same things.
I think the big failure is programmers' inability to bring these desired advances, like what Plan 9 achieved, to a wider audience. I mean, he says he had this wonderful environment at Bell Labs, but almost no one outside of Bell Labs gets to experience that pleasure. Why not? They open-sourced it too late? I'm not sure I buy that. It's still better than UNIX, so what's changed? It's like there's some assumption that people just don't deserve anything better, and there's no point in working towards it. Except if you're at Bell Labs.
We're stuck with old UNIX, with all of its historical cruft. Like him, I've just learned to cope with it. (It's funny he's complaining about argv limits (see 2004 Slashdot interview). That seems to suggest he likes to compose super long argv's. No? Maybe he does not like xargs? I never did. But then I've seen similarly unexplainable limits in the Plan 9 port to UNIX. Why can't I have a Plan 9 sed command file with a very large number of commands?)
We could certainly have better. Perhaps it's simply a matter of getting behind the right projects, instead of just following the money and being lazy... working at Google and buying MacBook Pros. That's sort of like giving up. Complacency.
Honestly, "grep'ing the web" just doesn't sound all that "amazing" to me. I don't care how many servers they have running, Google is not Bell Labs.
"The terminal was a computer but we didn't compute on it; computing was done in the computer center. The terminal, even though it had a nice color screen and mouse and network and all that, was just a portal to the real computers in the back. When I left work and went home, I could pick up where I left off, pretty much. My dream setup would drop the "pretty much" qualification from that."
Of course I'm not serious, but the Sun Ray is one stack of technology that offers stateful session management, not a system for using the same computing environment on whatever hardware you happen to have at hand.
I think it was a good post overall, but what got to me was his lack of doing anything about his issues.
When Mr. Pike wanted a revolutionary OS, plan9 was created.
When Mr. Pike wanted a language fixing problems dealt with in C, Go was created.
When it comes to a machine that's rollable without persistent local storage, he merely wishes for it to be a reality? I can understand if, through working with Google, the only research in that area is tied to the Chromebook, but still. He certainly has the capability to exert influence (first link on HN), but he's not getting into the core of the problem. I love this guy just as much as the rest of the community, but I find it puzzling that steps aren't already being taken to make this next dream of his a reality. I also agree that cloud is not the answer for everything, so it would be enlightening for a new tablet-esque rollable device to be made that swims against the general Mac-inspired cloud tablet trend. But if Rob Pike isn't going to make it a reality, I doubt someone else will release it in his vision or to his liking. Perhaps he has a few ideas or tricks to make things "just work". And that's what I'd look forward to.
The 9p protocol addresses a lot of the network activity described in the article, but I'm not sure what Rob Pike could do to make the hardware he describes a reality. All the examples you gave were software.
i will +1 his love for the 11" MacBook Air ... people are always skeptical that i can do pretty much all of my work on it (yes, even some coding), but after almost two years, i still haven't found its small size to be too limiting.
I lasted almost two months on my 11" MacBook Air before hooking it up to a 27" Thunderbolt Display in a fit of desperation. As my traveling machine it's fantastic, but I prefer my iMac as a full time dev machine.
I guess I assume that's a given. I'm getting one of those cheap 27" super high res displays off ebay to complement my MBA. It handles everything I do, but I also have a server that I run my heavy code on because it's where my data is. At the same time, I pretty much constantly have a VM running on here and I get good enough battery life.
I also have a monitor though of course. I would go crazy alt+tabbing. Even now the width of my display can't handle my open tabs in Sublime.
edit: though... now that I sit here and realize what I'm saying, 80% of the time I code, I do it on the MBA. The DPI is such that I prefer the font when coding. This will change when I get my new display though. The laptop will sit on a dock and be my doc dock, or my stream-a-movie-while-I-work dock.
> That's a strange way to describe research and experimentation (though not strictly incorrect).
Many of the modern OS research papers that I have read take an existing solution and mess with it. As an example, many file system papers take ext or btrfs and do something to them (if we want to debate this statement, we can). I see this with replacement algorithms, packet schedulers, etc. Much experimentation happens but it is usually in the context of Linux or FreeBSD or some other popular existing solution these days.
There are papers that buck this more conventional approach like barrelfish and singularity and exokernels and L4 etc. etc. etc.
I do not think they are the status quo and I am/was not saying that plan9 is a bad thing, let me be clear on that.
>Plan 9 predates all of today's mainstream operating systems.
This is said with such certainty, without being true, that I am allergic to it. All? Plan 9 started in the late 80s. OS X/iOS is a Mach derivative, and Mach is circa 1985. The NT OS/2 project which begat modern Windows was started in 1988. BSD? All of today's BSDs descend from the BSD that first released in 1977. Rob Pike was complaining about BSD's "cat -v" in the early 80s.
Maybe you are using "predate" with a different connotation, but I don't know which. At the very best, it co-existed with the others.
I'm more interested in ideas themselves than details about how they're marketed. Plan 9 and its novelties are well known and published in detail. The authors of Linux, BSD, OSX, and NT are aware of Plan 9 and decided it wasn't worth imitating. What could be gained by wasting time lobbying to change their minds? They care only for what they and their customers want, and the average Joes that Microsoft and Apple sell operating systems to aren't exactly clamouring for them to be more like Plan 9. If you like an idea, then adopt it yourself. If not everyone shares your taste in operating system architecture, that's alright too.
I do have Plan 9 programs available to me by default, when I install Plan 9. If other people don't choose to do that, so what? I'm not a missionary. Research operating systems aren't a secret. The masses evidently aren't that interested.
Everything you can do with Plan 9 you can do with any other system anyway, only it's marginally more tedious. If people can tolerate that, that's alright with me. What should I care how the masses do their computing? Why should I pester people about it? I don't go around arguing to people how great it is to be a pro-skub, star-bellied Sneetch who eats their bread butter-side down. Operating system evangelism and holy wars are older than the internet, and I have become very tired of them.
Payphones are annoying. It might be broken, or have gum jammed in it, or someone might be using it, or you have to track one down, and you have to be sitting at it to receive a call.
Cellphones are in your pocket, come with your personal phone directory, are cheaper per minute, and you don't have to stand still to use them.
I'm really surprised that Rob thinks people would want to deal with a computer-as-payphone model. And wireless data networks suck ass in North America. His ideal world is probably at least 20 years away.
On the other hand, I love thin terminals. Screw local stateless networked computation. Give me a snappy remote interface to a beefy terminal server and i'm happy. That's an interface you literally can pick back up at any time with no performance cost due to being far away from the data.
I like his ideal, all the work being accessible no matter what device you've got in front of you, but the whole idea only works with ubiquitous network connectivity. I commute a couple of hours most days through areas where the 3G quality ranges from non-existent to poor, occasionally reaching 'usable', so I'd be doomed to being unproductive. Caching gets you so far, but there are still plenty of rough spots in the idea.
It's definitely a nice idea though, maybe some day it'll happen. Nice to see Rob Pike getting a usesthis.com post, slightly (but not much) surprised to see him using Macs as his primary choice.
> What would be your dream setup?
I want no local storage anywhere near me other than maybe caches. No disks, no state, my world entirely in the network. Storage needs to be backed up and maintained, which should be someone else's problem, one I'm happy to pay to have them solve.
This is exactly my idea of changing from our current memory hierarchy to take advantage of the new SSD and cloud capabilities we have now: "Fat Cache."
I doubt the awesome bits of Plan 9 that he compared to a telephone (pay phone) can be replicated by a higher-level service like Amazon/Google cloud something, git, or some other DVCS. 9P is a protocol, after all.
He also mentioned doing computing somewhere else, and not just storing data in the cloud.
So please tell me I just need to configure git differently or sign up for amazons newest whatever.
My dream setup, then, is a computing world where I don't have
to carry at least three computers - laptop, tablet, phone, not
even counting cameras and iPod and other oddments - around
with me in order to function in the modern world. The world
should provide me my computing environment and maintain it for
me and make it available everywhere. If this were done right,
my life would become much simpler and so could yours.
Can you go to any other Microsoft Surface and see exactly what you saw on your own, including (but not limited to) your programs, data, and configuration/profile? Also, will the state of that other Microsoft Surface be exactly what yours was when you left it?
That is, will your browser window located at (314, 159) with 6 tabs open and an HN post written in tab #2 appear on this other Microsoft Surface exactly as you left it on your own?
If not, then the Microsoft Surface may not fit the bill. This is what you get with Plan 9 and a shared file server.
Not entirely but 90% of it. Your programs, data and profile will move around and be cached if you sign in with a Live account. Some application state may be persisted differently but when you sign in to another PC, it'll all move across slowly.
plan9 doesn't do what he wants either. You can't detach a window station from the 9p network and take it and all your data with you. Communication networks aren't reliable enough to maintain a 9p connection either.
At least surface handles this...
Hell RDP'ing into the office is the same as what he wants...
Using ssh, screen, and a prgmr instance, I can access my stuff and have the same state for almost all of my dev work anywhere I can use a chrome browser with the ssh plugin or any ssh terminal app. That is pretty close to what he's talking about I suppose. As of now my iphone, ipad, and laptop can all jump on my little instance and I'm good to go.
That being said, I don't think going 100% cloud is for everybody yet, but we're getting closer.
Using ssh and screen gives you a 1975 interface to the computer. You've just made your own dumb terminal, doing EVERYTHING on the remote system except printing out a stream of characters to the local screen. It's not even as smart as the Blit that Rob Pike designed in 1982; at least that had the capability to fetch configuration from the server to set itself up.
I dream of Rob's dream setup as well. I suspect many people do. But I hope it shouldn't be too hard to make it a reality.
A smart phone which is always connected to remote storage (the cost could be a problem here?), and how about an electronic paper keyboard which rolls up to the size of a pen, that's somehow attached to or part of the phone.
Sit down somewhere, connect, unroll the keyboard, and there is your computing environment.
I like the no-local-storage dream. But what about when you're at the airport or the hotel network is broken or on holiday in indochina, and want to do some hacking? There's probably no network, or if there is it'll be expensive, slow and unreliable. Though I struggle with git, I love it that you can easily keep a tree pulled (even if you don't actively use it), and use that when you need it.
I think you will have an easier time learning the environment if you use a 3-button mouse, but it's not a requirement. I spend some entire days in acme on a MacBook Air, and the keyboard modifiers and %-C, %-V, %-X keyboard shortcuts on the Mac work well for me.
As Russ (rsc) says, it is not a requirement, but it is easier in my opinion. There are two common operation classes: sweeping while a button is held down and chording invocation. Sweeping is easy, as soon as you learn to hold down a key while sweeping.
Chording is mostly used - except for one specific chord which isn't extremely common to use - for Cut/Copy/Paste. And you already have that on the Cmd-bindings, so they are not that needed.
Surely he can use gestures instead of physical buttons?
Certainly gestures make the difference between one and two button trackpads a non-issue — if anything, I find it easier and more natural than reaching for a separate button — and it can't be too difficult to scale this to three buttons.
Did something change recently? Every time I've tried to install plan9 in virtualbox or vmware it ran incredibly slow. Not like "wow this is slow" slow, but "there is something broken here" slow. Just the installation took several hours.
Some very interesting stuff here. Everyone should look at this screencast about the acme text editor: http://research.swtch.com/acme. This thing is wild and crazy in a way that's totally unexpected. I have no idea if the ideas are actually good, but they sure are different.
I have to say, it is an amazing concept and very tempting, especially the way you create links in unstructured text. The only reason I am not trying it out right now is that it would require an X server wherever I go in order to do proper editing, and my current tmux set-up wouldn't work.
I'm hesitant to move to the cloud until privacy issues are fundamentally addressed. All these vendors poring over user data is not in our interest. I think it will take laws and government action to stop this sort of privacy invasion. Until that happens, I'll keep my local storage.
I've wondered for a while now, why isn't Plan 9 rising along with cloud computing? It seems like it would mesh very well with having many computers connected in a network, and might make tasks like massively distributed map/reduce more accessible.
What part would you be interested in? Pretty much everything that Plan 9 has is available in some kind of fashion for other operating systems, often in a more specialized manner. And by the time you hack the performance and compatibility into Plan 9's kernel and drivers (if its minimalism would allow it), you could port/re-implement most of what you'd be missing.
The main part that made this distributed nature possible was 9P, a network protocol/file system. And distributed file systems are a dime a dozen now. Hadoop, S3, GFS, GlusterFS are all vital elements of "the cloud", never mind old staples like NFS, AFS etc.
For "cloud needs", they're better suited (fault tolerance, performance…). The big thing about Plan 9 (and Bell Labs software in general) wasn't its novelty, but its scope: narrowing everything in a typical network environment down to a few protocols, APIs and GUIs. Not really all that applicable for serving bits to browsers.
Never mind the bandwidth. Plan 9 was mostly text in a LAN.
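Incidentally, 9P is still easy to experiment with from Linux, whose kernel ships a 9P client (v9fs). A hypothetical /etc/fstab line for mounting a 9P export over TCP (the server address here is made up, and the 9p/9pnet_tcp modules must be available) would look something like:

```
# mount a 9P file server over TCP via the kernel's v9fs client
192.168.1.10  /mnt/plan9  9p  trans=tcp,port=564  0  0
```

So the protocol itself survives fine outside Plan 9; it's the "everything is one namespace" discipline around it that never caught on.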
Count me in as another "synchronized local storage" person, for the typical two reasons: I don't want to rely on a connection to the cloud, and I don't want to have to trust the cloud storage providers.