
Emacs or vi: neither (Rob Pike) - acme
https://usesthis.com/interviews/rob.pike/
======
white-flame
Rob Pike has apparently never had an internet outage, or traveled outside
modern civilization?

Independence is a virtue. Even in the old phones-as-infrastructure days, there
were many times when there was simply no working phone around: broken down on
the road, stuck in an industrial part of town on a weekend, and so on.

If everything your life revolves around requires working and accessible
infrastructure to access it, you're going to have times where you're left
staring at a connection error screen, or no screen at all. And because the
universe hates you, that will happen at the worst possible times. ;-)

~~~
notacoward
He's at Google. Many people there seem to think of network connectivity the
same way we think of clean water, electricity, and natural gas - things that
are just always there, in whatever quantity you want. Ditto for storage and
computational power, because _for them_ all those things are like utilities.
Never mind that people in many parts of the world can't even take those "old
fashioned" utilities for granted. People who live in bubbles tend to lose
touch with the outside world.

~~~
apathy
Things like the Andrew File System and Coda were actually developed to work
around this, to provide resilience and allow one to keep "soft" state in the
presence of outages. The interest was not there, certainly not when most
people were working on standalone Windows boxes or Macs and had no idea of the
sort of power that was available in research labs via X and Ethernet.
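
As a rough illustration only (not Coda's actual protocol; the class and field
names here are invented), the disconnected-operation idea described above --
hoard files locally, log writes while offline, and reintegrate when
connectivity returns -- can be sketched like this:

```python
# Hypothetical sketch of Coda-style disconnected operation.
# While offline, writes go to a local cache and a replay log;
# reintegration replays the log and flags version conflicts.

class DisconnectedCache:
    def __init__(self, server):
        self.server = server       # dict: name -> (version, data)
        self.cache = dict(server)  # locally hoarded copies
        self.log = []              # pending (name, data, base_version)
        self.online = True

    def read(self, name):
        # Serve from the hoard; a real client would fetch on a miss.
        return self.cache[name][1]

    def write(self, name, data):
        version, _ = self.cache.get(name, (0, None))
        if self.online:
            self.server[name] = (version + 1, data)
            self.cache[name] = (version + 1, data)
        else:
            self.cache[name] = (version, data)
            self.log.append((name, data, version))

    def reintegrate(self):
        # Replay logged writes; a conflict is a write whose base
        # version no longer matches the server's copy.
        conflicts = []
        for name, data, base in self.log:
            server_version = self.server.get(name, (0, None))[0]
            if server_version != base:
                conflicts.append(name)  # needs manual resolution
            else:
                self.server[name] = (base + 1, data)
                self.cache[name] = (base + 1, data)
        self.log.clear()
        self.online = True
        return conflicts
```

The hard research problems are precisely in the corners this sketch waves
away: what to hoard, how to resolve conflicts, and how to do all of it for
directories and partial files rather than whole values.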

It's a shame, because research groups not only thought about these issues a
solid 20 years ago, but also implemented several strong contenders for
solutions. Society did not see the value in supporting further work on those
projects, and so the researchers moved on.

Pike is not an idiot. He is a pragmatist.

~~~
notacoward
How is it "pragmatic" to assume infinite connectivity instead of working to
solve the problems when that assumption fails? That word does not mean what
you want it to mean. I've worked on those problems for a couple of decades
too. I also bemoan the relative lack of attention given to them. That's
exactly why I find his attitude so disappointing.

~~~
apathy
It's pragmatic to delegate that aspect to other people who are or were working
on it. It is not pragmatic to try to solve all the different problems involved
at the same time.

Pike was brought in to Google to do something about log management (hence
Sawzall -- it was a disaster prior to his arrival). The solution was
pragmatic: awk on GFS.
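
For illustration only, here is a Python sketch of that "awk on GFS" pattern:
a simple line-oriented match-and-tally pass over many log shards, followed by
an aggregation step. The log format, regex, and function names are invented,
not anything from Google's actual pipeline.

```python
# Hypothetical sketch of awk-style log processing over sharded files:
# each shard gets a pattern/action pass, and per-shard tallies are
# summed in a reduce step.
import re
from collections import Counter

ERROR_RE = re.compile(r"ERROR (\w+)")  # invented log format

def scan_shard(lines):
    """Count error codes in one shard: match, extract, tally (awk-style)."""
    counts = Counter()
    for line in lines:
        m = ERROR_RE.search(line)
        if m:
            counts[m.group(1)] += 1
    return counts

def aggregate(shards):
    """Sum the per-shard tallies, as a reduce phase would."""
    total = Counter()
    for shard in shards:
        total += scan_shard(shard)
    return total
```

On a distributed filesystem the `scan_shard` calls would run near the data in
parallel; the pragmatism is that each piece stays as dumb as an awk one-liner.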

It would be nice if more attention to intermittent connectivity solved the
problem. The trouble is that I see no interest from research funders and not
much motive for commercial funders. As best I can tell, it is in fact quite
pragmatic to leave that detail (important as it may be) aside until there's
enough interest in stateless "dumb" workstations to reintroduce it.

It's pragmatic to make flawed assumptions so that you can get something done
in the common-within-Google case, and come back later for the common-outside-
of-Google case, at least if you happen to be within Google.

~~~
notacoward
That would be fine if he had been talking about what he was doing within
Google, but he was specifically talking about _his dream system_. Here's an
analogy.

"I have a dream about a programming language and runtime that will
automatically determine how serial code can be distributed across a bunch of
machines, and do so without totally screwing up performance or failure
semantics. But that's somebody else's problem. I'm not even going to mention
any actual ideas about how to do it."

Would you say "pragmatic" is the right word to describe that? I wouldn't. We
left the realm of pragmatism as soon as we started talking about dreams. It's
not necessarily _wrong_ for me to mention such a dream. Such a compiler would
make my life a lot easier. However, it misses an opportunity to inspire would-
be collaborators in areas where I have concrete ideas and actually am working
to make a difference. It contributes less to actual progress than another
answer might have, and that's disappointing.

~~~
apathy
Fair enough.

------
solipsism
I personally think Rob's dream setup sounds horrible. I love having my own
phone -- it was shitty having to look for a pay phone or buy a beer in a bar
just to use the phone.

And I don't use my phone or laptop for local storage. If I lose either one, I
buy a new one, reconnect to some accounts, and I'm ready to go. That doesn't
mean I don't appreciate having my own that I can carry around.

Next time I'm hiking in the middle of the forest using Google Maps Offline and
want to take a pic of something, I'll think of Rob rolling out his paper map
trying to figure out how to get to the nearest camera rental place. Sorry, no
storage for pics or maps on your unrolling pen screen.

~~~
PhantomGremlin
_I'll think of Rob rolling out his paper map_

I was just down in southern Oregon, and between the frequent "No Service" and
the shitty Google and Apple maps, I would have killed for a decent paper map.
It's a good thing I had my Garmin app on my phone; it's not reliant on cell
service.

My parents and I traveled 9000+ miles around the country in 1974, using a Rand
McNally atlas to guide us. The experience was far more pleasant than the maps
Google offered up just a few days ago. "Offline" wouldn't have helped, the map
quality is abysmal. Locations pop in and out of existence as you zoom in. Many
features are totally unlabelled (e.g. rivers). The UI sucks as well.

~~~
thisrod
I think there is a middle ground. Paper does have a categorical advantage for
field use; at least, anything with batteries has a categorical disadvantage.
On the other hand, it's nice that you can print one map of your own to cover
terrain that used to be split over the corners of four published maps. And
it's a pretty cool feeling when you click on contours that the surveyors got
wrong and drag them into the right place.

There is a place for automation, but that place isn't everywhere.

------
notacoward
I actually sort of agree with his idea that our portable devices should be
stateless, but the way he puts it comes off way too much like "storage is
magic that someone else should deal with" for me. Scalable storage with a
usable interface (i.e. not an HTTP-based eventually-consistent object store)
is hard enough even in a LAN environment. Making it happen in a WAN
environment, which is what he's asking for, involves some very hard tradeoffs
and engineering problems equivalent to anything you might do with the data
once you get it. It's very disappointing to see what amounts to "other
people's problem" from someone of his stature.

~~~
apathy
That's a bit silly. His perspective is that of someone who watched tractable
solutions to the cached-state and resync/reconciliation problems be
implemented and then marginalized, since no one at the time saw the point.
Now, 20-30 years later, it's starting to dawn on people, and the response is
essentially a kludge.

I'm disappointed, and I was never at Bell Labs. I imagine Pike is being
diplomatic, to be honest. Research is incredibly frustrating when it depends
so much on fashion.

------
apathy
I do love this little pearl:

> In summary, it used to be that phones worked without you having to carry
> them around, but computers only worked if you did carry one around with you.
> The solution to this inconsistency was to break the way phones worked rather
> than fix the way computers work.

He's right: it used to be that I could jump from machine to machine in a Unix
environment, and really the only thing I gave a shit about locally was my
login shell. (I started out using IRIX in an SGI/AFS environment, so our
logins were essentially stateless; since OpenGL/XGL worked over the wire, I
ran demos in DC on hardware in NY just because, and it was fucking great.)

I was sad when the majority of businesses and individuals seemingly decided
that Microsoft's single-user desktop worldview was preferable to the Bold Old
World of Unix and Plan9, which at that point, thanks to things like XGL and
the Andrew File System, was quite multi-user and network-centric. This is why
Pike's little pearl speaks to me.

The cloud was a good idea 20 years ago, too. But only recently has it become
the focus it should have been, and in a way that isn't quite as satisfying as
MIT and CMU were attempting in the 90s. It really was a spectacular time. VR
in research labs then was ahead of where it is now in most respects;
certainly for things like molecular dynamics, we viewed it as a given that
you'd want to wander around and poke the computed empirical force field for a
molecule. I guess the up side is that it should be a lot faster and easier to
catch up given the ubiquitous and powerful CPUs of today -- a Raspberry Pi is
more powerful than my old SGI Indigo ever was. It's just a matter of will and
intent if society wants to recover and surpass the progress we made back then,
and to democratize it much further than it was.

~~~
teh_klev
Everything you wrote was quite interesting up until the last sentence.
Seriously do we really need this tired old OS flamebait crap here?

Edit: @apathy gracefully edited out the offending last sentence, so my comment
above is no longer valid; however, I'll leave it here to provide context for
the replies below.

~~~
apathy
Probably not. However, an OS is a tool, and your tools inevitably shape the
way you view what you are working on. I'm going to revise my comment to tone
it down, but the fact remains that network-native multiuser OSes (Unix, Plan9)
encourage a very different view of the world than desktop-centric single-user
OSes (Windows, in particular). That's not really something that needs further
explanation if you've seriously used both, and the differences were much, much
starker in the late 90s.

------
kim0
If I were implementing such a setup today, would AFS still be my best option?

