I want STORAGE ON EVERY DEVICE (not volatile!), and an automatic system to sync it with all my other devices, WITHOUT NEEDING THE CLOUD: just set up an ad-hoc mesh network and sync everything (yeah, there's gonna be something like "merges" for OS settings and music collection changes, but I can live with that). The "cloud" should be just infrastructure, nothing else added, and I shouldn't be disrupted when my connection to it fails... "Always connected"? No, no, no, I'll always want to be able to work offline and to sync/merge/push/pull even my OS, its settings and software (and to "branch" my setup and keep multiple versions of software and all that).
DVCS should be the model for how to do everything in the cloud, with simpler interfaces for different levels of user needs/competency.
Rob Pike's ideal of "homogeneity" in computing really misses the distinction between distributed and centralized syncing, the security and reliability implications, etc. ...and large local storage capacity and "enough" computing power on all devices are needed for this. I'd rather be "part of the mesh" than "connected to the cloud mesh", because I think the distinctions are important and they require different things from "client devices" (all devices should be "clients"! no servers "in the cloud" for me, please!)
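The "merges" for OS settings mentioned above map directly onto the base/ours/theirs rule every DVCS uses. A toy sketch (the function name and the example keys are made up for illustration, not any real sync tool's API):

```python
# Toy DVCS-style three-way merge for key/value settings.
# Same base/ours/theirs idea git uses, minus the hard parts.

def merge_settings(base, ours, theirs):
    """Merge two descendants of `base`; returns (merged, conflicts)."""
    merged, conflicts = {}, []
    for key in set(base) | set(ours) | set(theirs):
        b, o, t = base.get(key), ours.get(key), theirs.get(key)
        if o == t:            # both sides agree (or both unchanged)
            value = o
        elif o == b:          # only theirs changed
            value = t
        elif t == b:          # only ours changed
            value = o
        else:                 # both changed differently: conflict
            conflicts.append(key)
            value = o         # keep ours, flag for manual resolution
        if value is not None: # a None means the key was deleted
            merged[key] = value
    return merged, conflicts

base   = {"theme": "light", "dpi": 96}
ours   = {"theme": "dark",  "dpi": 96}   # this device changed the theme
theirs = {"theme": "light", "dpi": 120}  # another device changed the DPI
print(merge_settings(base, ours, theirs))
# → ({'theme': 'dark', 'dpi': 120}, [])
```

Both independent changes survive with no conflict; only when both devices touch the same key differently does the user get asked, which is exactly the workflow described above.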
When I was on Plan 9, everything was connected and uniform. Now everything isn't connected, just connected to the cloud, which isn't the same thing.
...but git-annex is A-F*CKING-MAZING indeed :)
Built out the XMPP push notifier; around 200 lines of code. Haven't tested it yet,
but it just might work. It's in the xmpp branch for now. (...)
That said, the future tense in my post wasn't an accident ;) but you're right, the Assistant is still under heavy development and lacks some parts.
Of course, git-annex itself is robust and stable, and awesome!
I never have to back up those files because they're replicated to any running machine with Dropbox installed. One of those machines has my Dropbox share on a Drobo, so I'd need a catastrophic failure of my Drobo disks, Dropbox and my laptop in order to lose those files.
But you really should have a separate, actual backup of your data. What would happen in the scenario that somehow your Dropbox data is deleted? Won't this deletion also be replicated to all your synced devices?
...but most other non-IT professionals using computers (doctors, physicists, most non-software engineers, etc.) don't do this and are pretty far from using things this way... they are more likely to be "phished into" cloud solutions that will have them "hanging from the cloud" (someone else's cloud) than actually be "part of a/the mesh/cloud"... it's easy for us to forget how things work for "the rest of the world" (and I don't mean computer-illiterate or dumb people, just smart persons who don't happen to be neck-deep in coding or other IT-specific activity...)
Carrying the cellphone means two things: The state data is available with you (it's not in the cloud) and the actual hardware is your authentication token for the things that are in the cloud.
In his vision he talks about having state and everything in the cloud, logging in from any terminal and doing his job.
He would go to a coffee/Internet shop and do all his personal banking there. It should be obvious at this point what problems could arise from such a setup.
The terminal could be compromised (keyloggers, etc.), and no authentication tokens have been defined. Even retina scanners are more troublesome than having a cellphone as the auth token.
So his vision is nice, in an ideal society where no one cheats, ever.
In the real world it is dangerously unsafe.
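The cellphone-as-auth-token point is essentially what TOTP one-time passwords (RFC 6238) formalize: the phone holds a shared secret, so possession of the device becomes the second factor, even at an untrusted terminal. A minimal sketch using the RFC's published test secret:

```python
# Minimal TOTP sketch (RFC 6238, HMAC-SHA1). The phone holds `secret`;
# the keylogged terminal only ever sees a code valid for ~30 seconds.
import hmac, hashlib, struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                        # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, t: float, step: int = 30, digits: int = 6) -> str:
    return hotp(secret, int(t // step), digits)

secret = b"12345678901234567890"                   # RFC 6238 test secret
print(totp(secret, 59, digits=8))                  # RFC test vector: 94287082
```

A stolen code is useless a minute later, which is a much cheaper fix than retina scanners at every kiosk.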
> DVCS should be the model for how to do everything in the cloud, with simpler interfaces for different levels of user needs/competency.
I still dream about a Git GUI that works as a distributed Dropbox (for some specially enabled repos, not for all of them).
Please look above in this thread for git-annex if you haven't heard of it. That's... pretty much exactly what it is.
I'm actually pretty excited about git-annex. The Windows GUI will probably be neglected, but as long as the pipes are good, the chrome can be fixed easily.
I will be testing this, thanks for the link.
> "This is 2012 and we're still stitching together little microcomputers with HTTPS and ssh and calling it revolutionary. I sorely miss the unified system view of the world we had at Bell Labs, and the way things are going that seems unlikely to come back any time soon."
To understand what he's talking about, keep in mind that Pike was a lead engineer on Plan 9, a project aiming to create the exact environment he's describing. And although the project never left the lab, it did affect the lives of the scientists working there. There are many reasons why Plan 9 never went mainstream, but one thing is sure: it was not technical inferiority.
I don't think Pike is pushing for any agenda. I tried Plan9, and I see what he's talking about. Lack of widespread support and the relatively small number of available apps make it difficult to be the "primary" OS I use, but the design concepts like "everything is a file", union mounts or 9P are amazing and work like he describes.
When a world-renowned OS scientist is asked about his "ideal setup", what answer would you expect? "I wish I could get the latest version of Mac OS X and the newest chip by Nvidia"??
It's certainly easier with your average Mac user's "I write my code in TextMate, use Things for GTD and manage my foodie photo library in Sepia Extreme".
Plan 9 is marvellous. The only reason I don't install it for my everyday use is that it is not a mainstream OS (with up-to-date Chrome, OpenOffice, XBMC and VirtualBox).
Grab a decent Linux box, shove it under your desk, and use VNC or X forwarding to display those programs on your Plan 9 terminal. You can keep your storage on a Plan 9 file server, then have your Linux box mount that (9p support is in Linux).
Or, if you're willing to put up with some slightly outdated software, Linuxemu can be a decent choice.
It's doable. I did an awful lot of my graduate work sitting at a Plan 9 terminal, from writing code to just browsing the web with Opera.
The idea of Plan 9 was not to have no user applications running on it.
The worldview bit was interesting because Dr Pike regards the way we access computing power as important and that the modes of access we have available may influence our lives. I think that he is right, but I prefer to have control of my stuff, so I tend towards synchronised local storage.
I would like to see a nice balance: I basically work locally, and anything I do is synced ASAP to "the cloud" and thence to other devices (and other people). But if I'm in the hills or the server goes down, hey, I still have a perfectly good computer in my pocket.
Pike doesn't address security either. Would you log into Gmail on a public computer? Would you enter your credit card info or visit your bank's website? I sure wouldn't. Phones from 20 years ago didn't have any storage or computational abilities, so these security concerns didn't exist for them.
While I'm a huge fan of cloud-based storage and synchronization, there are just too many issues with using untrusted computers everywhere I go. Maybe it was due to the interview format, but I'm surprised such an intelligent, technical thinker managed to avoid discussing the downsides of his proposal.
Let's go back to burning ROMs for OS upgrades instead of flashing/storing them on disk! People can replace batteries, so they shouldn't have trouble replacing an OS ROM...
It's a shame that (for example) Windows doesn't make it easier to make the entire system partition read-only, but applications are traditionally so abysmally poor at handling "no, you can't write THERE" errors (yes, Photoshop, I'm looking at you) that most people gave up. UNIXes are better, but still at the mercy of morons (Acrobat has managed to drop files called "C:\nppdf32Log\debuglog.txt" all over my home directory, so it's probably just as well it's too fucking stupid to realise it's running on RHEL, otherwise it'd be asking for root all the time instead).
One nice compromise that I'm sure Pike has used is to have your "dumb terminal" be a laptop with Plan 9 installed. When you boot a Plan 9 terminal, you can tell it to get the root from the network or from the local disk. If you select one, though, it's quite easy to also make the other available, and sync your data back and forth.
"...can you guarantee that the server itself won't go down?" No, but neither can I guarantee that the hard drive in that Macbook you've been abusing for the last 3 years won't suddenly fail and wipe out your work.
If your device has frequent (not necessarily constant) internet access this is essentially the same thing as having many regular computers, since your cache can allow you to work offline just like normal. Like git, you would just work on the local "cache", and then sync with other devices when you need to switch or regain internet access.
If I misunderstood him, well, good, I'm glad we're not so far apart. But I don't think so.
Obviously things didn't work out that way, and now it's actually more reliable to carry around expensive and rather-easily-breakable computers with you. It's kind of surprising if you think about just how fragile a laptop is.
For me, Dropbox has brought me that.
I still use Github and the like, but my computers now share such a similar setup (all Linux, install Go, install Dropbox, install Sublime Text 2, done) that I can walk out of the office without doing anything special to my machine, go home and pick up literally where I left off.
My git repositories are cloned into my Dropbox folders so that when I move from one place to another but am not ready to check in (local branch in state of flux) I still have that in multiple locations.
As Sublime Text 2 stores the project and file info in a plain text file, that state comes with me too.
My $GOROOT is also in my Dropbox folder, so if I've grabbed something via "go get" that also follows me around.
I view Dropbox as an ever present working cache, not as storage. Things like documents are in Google Drive and accessed via the browser.
On Friday I went to a meeting at 3pm that I thought would just be 20 minutes. It turned out that it took 3 hours, and I hadn't closed ST2 or anything I was working on... no problem, I went home instead of back to the office and my work was exactly where I left off with the same files open in ST2.
I think the only thing that doesn't follow with me are the undo buffers in ST2.
It is awfully nice to have a persistent environment. Dropbox is definitely partway towards that goal, but it doesn't hammer in the nail fully.
I can write multiple unsaved documents in TextEdit, close TextEdit, and all the documents open up in their unsaved state when I open TextEdit again. It's quite nice. But having these features requires too much of a workflow change from what people are used to, and the La-La Land of it breaks down as soon as the user hits an application that does not implement the system and loses their data (by clicking "don't save", or by assuming that everything is recovered after a crash).
More info on Auto Save and Versions: http://support.apple.com/kb/HT4753
I love it even more with EncFS on top of it. I don't even need to mount EncFS (I don't automount it), and Dropbox will still sync my work like a charm.
It's as if a majority of the hardcore hackers have gone "Fuck it, I'm being watched anyway, they might as well be backing up my stuff as well."
Are we past the point of it being a topic of debate except by people like me who have the illusion of choice?
My dream setup, then, is a computing world where I don't have to carry at least three computers - laptop, tablet, phone, not even counting cameras and iPod and other oddments - around with me in order to function in the modern world.
Any geek's 'bat-cave' is testament to this need: a Mac Plus sitting underneath a table next to a Commodore 64. On top of the table lies a 486 DX/2 PC with Super VGA and a SoundBlaster-compatible card decaying inside. Everywhere, strings of SCSI, RS-232 and Ethernet spaghetti encircle cases of floppies (both 5.25" and 3.5"). Where's the data? Anything precious has made its way through different formats to whatever you're on now. Everything else is slowly rotting away.
Outsourcing the batcave to this ubiquitous cloud seems much more appetizing to me. Plus it leaves me room for my 1st- and 2nd-gen Transformer collection.
I'm always shocked that this hasn't happened faster. I've expected Dropbox, Amazon, Google and Apple to move into this space more aggressively, but at best they've all only just scraped the surface of what's possible.
Chromebooks do exactly what he's describing right off the bat.
Every Chromebook you own dies in a fire? Unbox a new one, boot up, everything is identical.
I think part of the reason for this is that, just as nobody wants to think about death, NOBODY wants to think about data loss and its prevention. It's like death, but harder to understand, and most common folk aren't even aware that there are options in this sphere.
Even though a lot of people just use Facebook, email, IM and word processing, it's going to be hard to convince them that something like a Chromebook is a really good idea, because the reasons it's technically sweet are lost on them.
Unless, of course, some algorithm at Google decides that you are a bot and/or spammer. At that point, all your Chromebooks are useless and all your data is gone without any way for you to get it back.
See the Amazon Kindle post from a couple of days ago.
Don't get me wrong. In general I love the idea of having my data somewhere in the cloud where somebody else is working on keeping it safe. But in case of emergencies, I would love to have somebody to talk to who is willing (and able) to help me.
Unfortunately all services coming close to this vision are too big to be able to afford any customer support it seems.
Of course, that means they would need to guarantee they will never, ever, cut you off from your data, not even if you defraud them. Under EU law they basically have a legal obligation to do that, but sadly they don't feel compelled to comply with their legal obligations without a court ordering them to.
By the way, if you've never had Microsoft lose your data for you, you haven't been using their software for very long, or you've been very, very lucky. Admittedly, when Microsoft did it they didn't mean to. Google and Amazon mean to.
Based on his post it seems like he's looking for something a bit more robust from an infrastructure standpoint than most cloud offerings are today. E.g., not "stitching together little microcomputers with HTTPS and ssh".
That appears to be one reason he's not buying that people have moved into the space.
In its current configuration, it's an anemic iPad with a keyboard.
So for now, while there are enough bit players and small-time shops floating around, people are still wary about losing their data to fly-by-night operations.
Unreliable SSDs and Stuxnet-infected flash drives have shaken user confidence in personal storage, but not enough. And it still doesn't seem possible to create enough doubt in HDDs while selling the con job of cloud storage. Also, there are enough data breaches floating around, but most people just shrug, and whether they understand what it means, or even care, is hard to discern...
Anyway, once all the captains of industry are on board, with their poster children like Pike parroting the party line, and when all the "OMG LYFETIEM DATA GUARANTEES" seem more reliable than the normal hard drive warranties available to the common prole, it'll finally be possible to memory-hole the fuck out of anyone that steps out of line. (I'm looking at you, Mr. Assange)
C'mon man. The name of the game is "Boiling Frogs". It has to be done slowly, and carefully. I shouldn't have to explain this.
In all seriousness, heavily encrypted, anonymous data blocks that people just "back up" on the net (I'm not advancing to the word cloud, sorry) are a pretty good idea. Trusting Google or anyone else to protect your raw data forever is probably not a good bet.
Interesting that someone so steeped in the "old ways" of Unix dumb terminals is also, seemingly, such a good matchup for the "far future" vision of Chrome OS. What's old is new again?
He is not and has never been "steeped in the 'old ways'".
"This is 2012 and we're still stitching together little microcomputers with HTTPS and ssh and calling it revolutionary."
"In summary, it used to be that phones worked without you having to carry them around, but computers only worked if you did carry one around with you. The solution to this inconsistency was to break the way phones worked rather than fix the way computers work."
You could only call, not get called, unless you left precise descriptions of your whereabouts. Sometimes you had to wait a long time until a phone was free. Having a phone on you all the time completely changed personal communication for the better. The idea that the way phones worked before mobile was better is so inane that I can't believe it's coming from a scientist.
In most cases people indeed want to be reachable, but is it really a good thing? In the long run, does it make society a better place, and people better people? How can we measure the impact of the cellphone on society?
Although I think this strategy fits well the planned obsolescence policies we have widely deployed in the tech industry nowadays.
Rob Pike's dream setup "is a computing world where I don't have to carry at least three computers - laptop, tablet, phone, not even counting cameras and iPod and other oddments - around with me in order to function in the modern world. The world should provide me my computing environment and maintain it for me and make it available everywhere. If this were done right, my life would become much simpler and so could yours."
In this situation, there would be public phones, public computers available in sufficient quantity everywhere. Think of the necessary amount of work to install, maintain, and upgrade such a setup. In a capitalist society, there would be different brands providing access to their own solutions, and thus a lot of Blackberry, Apple, or <your favorite carrier> kiosks all over the place.
You could think that with such a scheme, costs would prohibit the rapid evolution of services as we witness it in current business models. Would we get the iKiosks updated each year with new firmware, or the latest multitouch screens? In comparison, how much money is being poured by individuals into upgrades for their personal stuff? What is the overall cost (i.e. money that could be spent on something else)?
There's the real estate issue also. How much space would be consumed by those kiosks, and how much would it cost?
It would be interesting to estimate all the pros and cons of that dream setup, and compare it with our current situation.
The idea of something like these dockable phones is almost certainly going to be a big part of where the industry is headed. It's just too obvious not to be. But it requires advances in both ubiquitous broadband availability and power and battery life available to mobile devices, so it's just not feasible right now.
Stealing phones wouldn't be an issue anymore, except for people using trivial passwords, or writing their password on the phone.
Here is a tutorial.
The mouse is a pointing device, why not use it to point at things? If I have 3 columns in my acme window (which I usually do) and I'm working in the 3rd file down in the left-most column, it's faster to get to the top file in the right-most column by just grabbing the mouse and clicking than by futzing with key combos.
How last century. I thought he would have at least suggested a hologram projection screen combined with some bio implants. ;-)
Of course, the article is tiny and talks abstractly about something that everyone has an opinion on, so of course it has lots of upvotes.
I am disappoint.
The "always accessible" and "continue where you left off" paradigms we can still relate to, though: I've been using screen in ssh/PuTTY windows for almost 20 years now, and I used VNC for some time (I even wrote a 16-bit client for DOS that ran off a floppy disk with 2MB RAM...) for the same purpose.
Something like VNC but with a "responsive" UI that adapts to the device currently used (tablet, laptop, desktop) while still retaining all the state needed for the user to continue where they left off; that'd be something novel and useful after all these years.
There's another couple of issues: syntax highlighting, and large-scale refactoring (which is not easy with either vi or emacs atm).
However, once you get accustomed to a certain workflow, it becomes difficult to accept something new. Maybe I should give it a shot.
There are other services that an IDE can provide:
- name completion
- list of variables in current scope
- list of visible types
- access to documentation
- code annotation
All these could be implemented in external tools, but how snappy would the interface be?
The filter paradigm of Unix is great for batch processing, but fails when used interactively, unless processes are lightened to the point of becoming almost like threads within the same process, with matching communication speed (inter-process communication using pipes isn't really that fast compared to shared memory).
That said, maybe large-scale programs are the problem to begin with. With simple, one-task-focused programs, source code becomes easy to maintain without resorting to the services I listed above. It's not easy to get rid of them, though (consider compilers, for instance).
This is trivial in sam and acme because of the sam command language and structural regular expressions. See the links in my previous post. In fact, not only is it easy, I haven't yet found any alternative environment where the operation is so powerful. A few days ago I used sam to extract all structs in the Go tree that use a map but not a mutex, or structs that use the first ones and don't have a mutex either. I only cared about a select/find operation, but I could have paired it with a modify operation that added the mutex, for example. The scope and power of the language is unmatched.
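A hedged sketch of what that select/find might look like in sam's command language (the shape is real sam: "x" loops over matches, "g"/"v" filter dot, "p" prints; the exact patterns here are illustrative and untested, not the actual commands used):

```
# loop over struct declarations, extend dot to the closing brace,
# keep those that mention map[ but not Mutex, then print
,x/type [A-Za-z0-9]+ struct \{/ {
	.,/^\}/ g/map\[/ v/Mutex/ p
}
```

Swapping the final `p` for an insert/substitute command would turn the same selection into the "add the mutex" modify operation mentioned above.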
> name completion
Personally I think it's worthless (and so do most acme users I know) but it's easy to implement as an external program. Jason Catena did it in one of his Inferno labs.
> list of variables in current scope
One sam command away.
> list of visible types
Same, or use Go.
> access to documentation
Right click on identifiers if you have plumber configured.
> code annotation
No idea what this is.
> inter process communication using pipes isn't really that fast compared to shared memory
Actually, it is. There are very few programs in the world where pipe throughput is not enough.
I don't program in Go btw.
> Actually, it is. There are very few programs in the world where pipe throughput is not enough.
I am very surprised by this. I always thought that programs made of threads were much faster than groups of processes, which have to communicate through OS channels rather than a common memory space. Maybe I wasn't considering the fact that interactive programs have a lot of time to spare.
Usually, I expect my interactive programs to be snappy when I ask them something, even if I don't do it often at the CPU timescale. That's the real catch in user interaction.
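The throughput claim is easy to sanity-check. A quick POSIX-only sketch that pushes 256 MiB through a plain pipe between parent and child (numbers vary by machine, but ordinary pipes typically sustain hundreds of MB/s to several GB/s, far more than any interactive tool needs):

```python
# Measure raw pipe throughput between two processes (POSIX only).
import os, time

CHUNK = 64 * 1024
TOTAL = 256 * 1024 * 1024  # 256 MiB

r, w = os.pipe()
pid = os.fork()
if pid == 0:               # child: write exactly TOTAL bytes into the pipe
    os.close(r)
    buf = b"x" * CHUNK
    sent = 0
    while sent < TOTAL:
        sent += os.write(w, buf[:min(CHUNK, TOTAL - sent)])
    os.close(w)
    os._exit(0)

os.close(w)                # parent: drain the pipe and time it
start = time.perf_counter()
received = 0
while True:
    data = os.read(r, CHUNK)
    if not data:
        break
    received += len(data)
elapsed = time.perf_counter() - start
os.waitpid(pid, 0)
print(f"{received / elapsed / 1e6:.0f} MB/s")
```

Latency, not bandwidth, is where pipes lose to shared memory, and for snappy interactive use even that is usually in the noise.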
I think the big failure is programmers' inability to bring these desired advances, like what Plan 9 achieved, to a wider audience. I mean, he says he had this wonderful environment at Bell Labs, but almost no one outside of Bell Labs gets to experience that pleasure. Why not? They open-sourced it too late? I'm not sure I buy that. It's still better than UNIX, so what's changed? It's like there's some assumption that people just don't deserve anything better, and there's no point in working towards it. Except if you're at Bell Labs.
We're stuck with old UNIX, with all of its historical cruft. Like him, I've just learned to cope with it. (It's funny he's complaining about argv limits (see 2004 Slashdot interview). That seems to suggest he likes to compose super long argv's. No? Maybe he does not like xargs? I never did. But then I've seen similarly unexplainable limits in the Plan 9 port to UNIX. Why can't I have a Plan 9 sed command file with a very large number of commands?)
We could certainly have better. Perhaps it's simply a matter of getting behind the right projects, instead of just following the money and being lazy... working at Google and buying MacBook Pros. That's sort of like giving up. Complacency.
Honestly, "grep'ing the web" just doesn't sound all that "amazing" to me. I don't care how many servers they have running, Google is not Bell Labs.
It exists! It's called a Sun Ray.
Of course I'm not serious, but the Sunray is one stack of technology that offers stateful session management, not a system for using the same computing environment on whatever hardware you happen to have at hand.
Pretty good, but I agree it isn't yet enough.
I've briefly tried working on a small screen with high DPI and it really hurt my eyes. Maybe I'm getting old-man's eyes. ;)
I hadn't anticipated this but since every resolution looks excellent on a retina display, I'm switching quite often for different tasks.
If someone read that as implying that my netbook was slow, therefore a MacBook Air would be too, I have to question their reading comprehension skills.
(I mostly do Cocoa dev in Xcode these days.)
I also have a monitor though of course. I would go crazy alt+tabbing. Even now the width of my display can't handle my open tabs in Sublime.
edit: though... now that I sit here and realize what I'm saying, 80% of the time I code, I do it on the MBA. The DPI is such that I prefer the font when coding. This will change when I get my new display though. The laptop will sit on a dock and be my doc-dock or my stream-a-movie-while-I-work dock.
This intersection between an obsession with minimalism of a particular sort and dispossession is unknown to me. Seems common to the Go programmers I talk to.
This is not to say their ideas are bad! Some of them are great, but many of the great ones are brought into the mainstream through Unix (procfs, UTF-8) and C (unnamed substructures) :)
I just wish they'd focus on making the mainstream better instead of this more indirect route...
That's a strange way to describe research and experimentation (though not strictly incorrect).
>I just wished they focused on making the mainstream better
Plan 9 predates all of today's mainstream operating systems.
Many of the modern OS research papers that I have read take an existing solution and mess with it. As an example, many file system papers take ext or btrfs and do something to them (if we want to debate this statement, we can). I see this with replacement algorithms, packet schedulers, etc. Much experimentation happens but it is usually in the context of Linux or FreeBSD or some other popular existing solution these days.
There are papers that buck this more conventional approach like barrelfish and singularity and exokernels and L4 etc. etc. etc.
I do not think they are the status quo and I am/was not saying that plan9 is a bad thing, let me be clear on that.
>Plan 9 predates all of today's mainstream operating systems.
This is said with enough certainty without being true that I am allergic to it. All? Plan 9 started in the late '80s. OS X/iOS is a Mach derivative, which is circa 1985. The NT OS/2 project which begat modern Windows was started in 1988. BSD? All descendants of BSD, which first released in 1977. Rob Pike was complaining about BSD's "cat -v" in the early '80s.
Maybe you are using "predate" with a different connotation, but I don't know which. At the very best it co-existed with the others.
Everything you can do with Plan 9 you can do with any other system anyway, only it's marginally more tedious. If people can tolerate that, that's alright with me. What should I care how the masses do their computing? Why should I pester people about it? I don't go around arguing to people how great it is to be a pro-skub star-bellied Sneetch who eats their bread butter-side down. Operating system evangelism and holy wars are older than the Internet, and I have become very tired of them.
We're working on making Go a mainstream language.
Cellphones are in your pocket, come with your personal phone directory, are cheaper per minute, and you don't have to stand still to use them.
I'm really surprised that Rob thinks people would want to deal with a computer-as-payphone model. And wireless data networks suck ass in North America. His ideal world is probably at least 20 years away.
On the other hand, I love thin terminals. Screw local stateless networked computation. Give me a snappy remote interface to a beefy terminal server and I'm happy. That's an interface you can literally pick back up at any time, with no performance cost due to being far away from the data.
It's definitely a nice idea though, maybe some day it'll happen. Nice to see Rob Pike getting a usesthis.com post, slightly (but not much) surprised to see him using Macs as his primary choice.
This is exactly my idea of changing from our current memory hierarchy to take advantage of the new SSD and cloud capabilities we have now: "Fat Cache."
I doubt the awesome bits of Plan 9 that he compared to a telephone (pay phone) can be replicated by a higher-level service like Amazon/Google cloud something, git, or some other DVCS. 9P is a protocol, after all.
He also mentioned doing computing somewhere else, and not just storing data in the cloud.
So please tell me I just need to configure git differently or sign up for amazons newest whatever.
This seems relevant as well: http://doc.cat-v.org/bell_labs/utah2000/
My dream setup, then, is a computing world where I don't have to carry at least three computers - laptop, tablet, phone, not even counting cameras and iPod and other oddments - around with me in order to function in the modern world. The world should provide me my computing environment and maintain it for me and make it available everywhere. If this were done right, my life would become much simpler and so could yours.
"The world should provide me my computing environment and maintain it for me and make it available everywhere."
In the movie Tony (Iron Man) can control the infrastructure around him with this device, just like Rob Pike described.
That is, will your browser window located at (314, 159) with 6 tabs open and an HN post written in tab #2 appear on this other Microsoft Surface exactly as you left it on your own?
If not, then the Microsoft Surface may not fit the bill. This is what you get with Plan 9 and a shared file server.
Plan 9 doesn't do what he wants either. You can't detach a window station from the 9P network and take it and all your data with you. Communication networks aren't reliable enough to maintain a 9P connection either.
At least Surface handles this...
Hell, RDP'ing into the office is the same as what he wants...
That being said, I don't think going 100% cloud is for everybody yet, but we're getting closer.
A smartphone which is always connected to remote storage (the cost could be a problem here?), and how about an electronic-paper keyboard which rolls up to the size of a pen and is somehow attached to, or part of, the phone.
Sit down somewhere, connect, unroll the keyboard, and there is your computing environment.
And I'm a bit disappointed that he's not using Plan 9. With the current ease of VMs there are no driver or installation issues, and he could move the snapshot around.
Still, I prefer to have a mouse when I am working on larger texts, because it is so much easier to do text rearrangement with a mouse.
A correctly set up acme on a mac is surprisingly effective.
I'm hoping to give acme a serious try (I have it installed), but I need to be sure I won't have to start using a three button mouse rather than whatever is provided by my system.
For anyone interested: You may want to purchase an HP DY651A. It's an affordable 3-button optical USB mouse (and, yes, it's a real 3-button one).
Chording is mostly used (except for one specific chord which isn't extremely common) for Cut/Copy/Paste. And you already have those on the Cmd bindings, so the chords are not that needed.
Certainly gestures make the difference between one- and two-button trackpads a non-issue (if anything, I find it easier and more natural than reaching for a separate button), and it can't be too difficult to scale this to three buttons.
The main part that made this distributed nature possible was 9P, a network protocol/file system. And distributed file systems are a dime a dozen now. Hadoop, S3, GFS, GlusterFS are all vital elements of "the cloud", never mind old staples like NFS, AFS etc.
For "cloud needs", they're better suited (fault tolerance, performance…). The big thing about Plan 9 (and Bell Labs software in general) wasn't its novelty, but its scope: narrowing everything in a typical network environment down to a few protocols, APIs and GUIs. Not really all that applicable to serving bits to browsers.
Never mind the bandwidth. Plan 9 was mostly text in a LAN.
For a serious question: does acme lend itself to a chording keyboard? It seems like this is taking another step towards Engelbart's vision in the Mother of All Demos.