
Why I do not want to work at Google (2011) - luu
https://www.mail-archive.com/kragen-tol@canonical.org/msg00268.html
======
Htsthbjig
"and the evidence suggests that it is to that that we owe the collapse of
oppressive regimes throughout the Middle East and Northern Africa"

Never mind that those oppressive regimes got replaced by...new oppressive
regimes.

I beg to differ that it is the Internet that makes a regime fall in North Africa.
What makes regimes fall is the share of people who can't buy food going from 30%
to 75% because of commodity inflation (created by OUR WESTERN central banks).

Probably you think revolution is great, but I have been in Libya and Syria,
before and during the wars, and it is horrible. A civil war is the worst thing
that could happen to a country. Americans may idealize it because they have
forgotten what a real war is (fighting against undeveloped enemies 8,000 miles
away is nothing like seeing your home in flames, your daughter raped, or your
brothers killed).

I would love to use decentralized tools, but they are so bad. They have lots
of features, but they are incredibly hard to use.

People will start using decentralized products when they are as easy to
configure and install as a Mac. Only centralized tools like Facebook or
Google provide the ease of use that would let my grandpa use them.

~~~
wtbob
> Americans may idealize it because they have forgotten what a real war
> is (fighting against undeveloped enemies 8,000 miles away is nothing like
> seeing your home in flames, your daughter raped, or your brothers killed).

Southerners remember what that was like: 150 years ago Yankees were raping and
killing their way across the Confederacy. Young folks nowadays have forgotten,
but there are plenty of folks still alive who heard about it from their
elders. It really wasn't that long ago.

~~~
fapjacks
No. Echoes of memories from your great-grandparents' great-grandparents are
not even slightly on the same side of the room (or even in the same building)
as living through an actual war on your own turf, civil or otherwise. I spent
many years in a few different warzones, and you cannot possibly imagine what
life is like for those folks. There is nothing in Western civilization that
can convey these realities of war -- not even our own brand of tyranny or the
relatively brief and cursory exposure of the vast majority of our soldiers to
foreign warzones. In a way, this attitude is almost offensive -- no offense to
you personally -- because there is an inherent disregard for the huge number
of human beings living in awful shitholes where the human animal reigns.

------
mszyndel
The problem is, all of those guys who were so free in the nineties failed to
create any noteworthy online services, so Facebook, Google, etc. took up the
space.

I'm going to get downvoted for this, but maybe if there were fewer nerd wars
in the FOSS community and more thinking about real users, the Internet would
look different nowadays.

~~~
kragen
I don't think it's true that we failed to create any noteworthy online
services.

 _We built the internet_. Maybe you've heard of it. We also created TCP, UDP,
IP, DNS, email, the Web, Usenet, IRC, Git, BitTorrent, Tor, Bitcoin, and
Wikipedia. We killed AOL, CompuServe, Encyclopedia Britannica, Solaris, the
Information Superhighway, and the Advanced Intelligent Network. We were only
unable to do smartphones because the carriers ruthlessly shut us out,
demanding insane amounts of control over handset software and using their
regulatory capture of the FCC as leverage, until Apple forced the doors open
for us — but on Apple's terms. And we built most of the software that runs
Apple, Facebook, and Google, too.

But how can we build new services to replace Facebook and Google on a
decentralized basis, like email and the Web? That's a problem with both
technical aspects — how can you build a distributed full-text query processor
that runs on the machines of volunteers? — and social/business aspects — how
can the people who benefit from these services effectively collaborate to get
them created and improved? (Kickstarter and the like show a very promising
direction for this.)

Those are the problems I want to be working on, not how to persuade Google
Drive users to entrust a well-intentioned but unaccountable central authority
with all of their family photos.

~~~
mszyndel
I'm not diminishing the achievements you mentioned, and I'm very well aware
of them. You must admit, however, that not all of them can be attributed to
FOSS, and all of them are quite low-level compared to what the average user
needs in order to use them effectively.

I just wonder if we need 10 major Linux distros, none of which is easy for an
average user to set up and use. (Sorry, even Ubuntu is still not there.)

I am a big believer in open web and it makes me cringe every time I see a new
cool technology that uses closed protocols (or rather APIs) as it happens with
IoT right now. But it's hard for me to attribute it to anything else than lack
of leadership from FOSS community.

~~~
worklogin
Ubuntu is pretty much there. The only reason Ubuntu can't do some things is
because of the requirement of Flash and DRM, both of which are obsolete or
overbearing technologies we should avoid.

> it's hard for me to attribute it to anything else than lack of leadership
> from FOSS community.

So when it's a consortium of multi-billion dollar tech companies with massive
R&D budgets, political connections and marketing dollars, it's FOSS's fault.
OK.

~~~
mszyndel
Hey, what I just said is: spend less time on BS and more on creating valuable
tools for everyday users, and the world will be a better place. It's not about
fault, it's about lack of impact. Imagine how cool a place this world would be
if Linux were the major consumer OS and open-source languages were the major
ones used to create consumer software with open protocols. Call me an idealist,
but I would totally sacrifice half of the Linux distros to achieve those
goals... :)

------
joev_
> Apple wants to relegate websites to second-class status on their popular
> computers, and exercises viewpoint censorship on what “apps” they allow in
> their “app store”.

I don't remember it that way at all. Does anyone else? From what I remember,
there was a very heavy consumer demand for the App store, meanwhile Apple was
telling everyone just to make web apps. They actively developed WebKit into a
cutting-edge, standards-oriented, developer-friendly browser. I don't see how
you could say they wanted to "relegate websites to second-class status".

~~~
sneak
Websites can't send notifications on iOS, for one. (They can on Mavericks and
Yosemite.)

~~~
0942v8653
That's a design/ideology conflict and not something that can be fixed. On OS
X, you can leave a webpage open, and it must be open in order for the
notifications to be sent. On iOS, you cannot (terminated whenever there is
memory pressure). Push notifications are just way beyond what a website should
be able to do, and scheduled ones aren't really all that useful except for a
handful of apps.

~~~
parisidau
Web pages do not have to be open to send notifications on OS X.

------
byoogle
> Google [will] delete your account with no recourse if you admit you’re only
> 10.

The US gov is mostly responsible here, not Google:
[http://en.wikipedia.org/wiki/Children%27s_Online_Privacy_Protection_Act](http://en.wikipedia.org/wiki/Children%27s_Online_Privacy_Protection_Act)

~~~
kragen
Keeping your account on somebody else's server is mostly responsible, because
that makes it vulnerable to nonsense like this. Google is very much oriented
toward everyone keeping their accounts on somebody else's servers, ideally
Google's. That's what I object to.

~~~
TheSpiceIsLife
I generally try to avoid starting sentences like this, but here goes:

No. Just no. 10 year olds probably don't have the resources to run their own
hardware, insofar as 10 year olds don't earn money, can't own property (in a
legal sense), and are not wholly responsible for their own actions. Therefore,
a 10 year old having an account with a 3rd party unbeknownst to the parents is
a security / child protection / legal nightmare.

Saying "keeping your account on somebody else's server is mostly responsible"
is disingenuous _because_ the 10 year old has to keep their account on
_somebody else's_ server.

Edit: clarity

~~~
kragen
I am disappointed in the extreme level of rudeness in your response. I note
that it seems to be occasioned by your not having been able to understand my
message, with the result that the person whose dishonesty you are attacking
exists only inside your own head.

10-year-olds can keep their data on their own server as easily as they can
keep their books in their own bedroom in their own house, sleep in their own
bed, call their parents on their own cellphone, and edit videos on their own
computer. While the 10-year-old may not have legal title to any real estate,
it is not therefore necessary for them to live in an institution. They can
live with their family.

Hopefully that clarifies my message for anyone else who may have misunderstood
it.

~~~
TheSpiceIsLife
Who's responsible for the actions and security of a minor and their property?
Can a minor enter in to contracts for services to connect the physical
infrastructure to the outside world? Can a minor hold insurance policies
against loss or damage to 3rd parties? Should a minor be tasked with the
responsibility of keeping their belongings and data safe?

> They can live with their family.

Or legal guardian. That's my point. Children probably do need protecting, so
their guardians are tasked, legally, with those responsibilities. Therefore,
ownership, security, consequences of actions, etc., aren't clear cut when it
comes to children.

------
TheSpiceIsLife
Given the article is from 2011, did this:

> and the evidence suggests that it is to that that we owe the collapse of
> oppressive regimes throughout the Middle East and Northern Africa

actually happen? I mean to say: sure, oppressive regimes collapsed, but to be
replaced with what? Take the 'Arab Spring' countries Tunisia, Libya, Egypt,
and Yemen - are they better off for having traded security and stability for
(attempted) democracy?

> Google, of course, wants to solve these problems too. But it has a
> different, less-democratic approach in mind.

No company is a democracy in the sense of dēmos 'the people' + kratia 'power,
rule', where 'the people' are you, me, and the next person. Maybe companies
are democracies where 'the people' are the shareholders, at least to some
extent, for some definitions of 'shareholder'.

Often we see "The Democratisation of x" bandied about as though it has to be a
good thing, but I'm left wondering what the term actually means if the
consequences are typically a tradeoff between security + stability vs.
democracy.

From what I've seen the media has portrayed Twitter, Facebook, YouTube (etc.)
as playing a role in the Arab Spring demonstrations, yet these are exactly the
sort of 'unaccountable intermediary' you're railing against.

I think the reference to the Arab Spring without qualifying the consequences
of the whole scenario from the vantage point of history is dishonest.

Democracy isn't synonymous with Security and Stability, and we should all
probably have a long hard think about Security and Stability before we promote
disruption.

(Edit: fixed a thing where I'd written the opposite of what I meant).

~~~
kragen
I wrote the article you're calling "dishonest" in 2011. The Syrian civil war
was still mostly Assad bombing his citizens. It would not have been possible
for me to "qualify[] the consequences of the whole scenario from the vantage
point of history" in a document written before that history happened. I agree
that Egypt, Libya, and Yemen (and also Syria and Bahrain) are worse off now
than before the Arab Spring.

I agree that companies aren't democratic. I think that's a good reason not to
hand our government to companies.

Historically I don't think there is a tradeoff between security and stability
versus democracy. [http://slatestarcodex.com/2013/10/20/the-anti-reactionary-faq/](http://slatestarcodex.com/2013/10/20/the-anti-reactionary-faq/) goes
into some detail about the unstable, insecure history of undemocratic
governments, specifically to rebut neoreactionaries who are calling for an end
to democracy in order to return to an illusory imagined past of security and
stability.

Democracies usually don't make very good decisions. They do seem to make many
fewer catastrophic decisions than non-democracies, though.

------
nowarninglabel
Previous discussion from the last time this was posted:
[https://news.ycombinator.com/item?id=2933619](https://news.ycombinator.com/item?id=2933619)

~~~
kragen
Much of that discussion was excellent, although some of it missed the point.
I'll try to keep tabs on this thread and answer any questions people may have
today.

------
Patrick_Devine
My startup has been trying to make decentralized computing easy, but I'm just
not sure there's a huge market for it. We took OpenStack and made it so you
could connect nodes from anywhere on the net and rent them out, whether it was
a server in a datacenter or even someone's desktop machine.

I'm just not sure there's a huge demand for it though; originally we were
thinking that people would care about price, but it turns out most companies
don't care that they're being fleeced by AWS. If they do care, they're
probably using DO.

Anyway, here's a demo of everything working:
[http://youtu.be/998IYD_WomY](http://youtu.be/998IYD_WomY) It connects a bunch
of desktop machines around the Bay Area on different networks (Comcast,
Sonic.net) and blends them in with a "datacenter grade" cloud server.

The real problem with decentralization though is that the asymmetric nature of
the first (last?) mile makes it really hard to do anything useful with
individual nodes. If we all had symmetrical gigE FttH/FttP everything would be
a lot more rosy, but so far that's really only been happening in a few places.

~~~
kragen
I'm not sure that's "decentralization" in the sense that I mean. A
"decentralized" system, to me, is one where no single person or group of
people can deny others access to it, for example by deleting their accounts or
turning the system off. Decentralized systems allow distributed innovation
more robustly than centralized or merely distributed systems, and so they tend
to outcompete centralized and distributed systems when they clash
economically.

I haven't watched your video, though.

~~~
Patrick_Devine
Yes, it's only part of a solution. It's a decentralized platform, and what
you're asking for are truly decentralized applications. The platform can
potentially enable decentralized applications while letting you manage them in
a centralized way.

I'm just not sure about your economics argument though; convenience usually
trumps any kind of economic reason for using a distributed system over a
centralized one. This is particularly so with most internet services because
the cost for most things always approaches free. I mean, we're reading HN on a
website, and not through alt.hackernews, right?

------
mrb
(Full disclosure: I work for Google. My opinion does not necessarily represent
my employer's opinion, yada yada.)

This guy's rant is really about the need for decentralization. I fully agree
with him. But I disagree that decentralization is incompatible with Google's
primary business model (regardless of whether the Internet is decentralized,
there will always be ways to do advertising). In fact Google Wave
([http://en.wikipedia.org/wiki/Google_Wave](http://en.wikipedia.org/wiki/Google_Wave))
was a fantastic and radical attempt at decentralizing common use cases such as
email, instant messaging, social networking, etc. Unfortunately the project
failed to gain traction for various reasons.

To return to a more decentralized Internet, we need software and network
protocols that let people easily host their mail and blog, publish their
social pages, and share their vacation pictures, etc, without relying on _the
cloud_ but doing it via a device that runs _at home_. An ideal place to run
this software would be your Internet router, as it really is a full-blown
computer that is always on, always connected. And, as the router, it
conveniently bypasses the issue of masquerading/NATing which is the one reason
why non-technical people do not run server software more often. Another
advantage is that uploading stuff to your Internet router (eg. sharing
pictures) is much, much faster than uploading them to a cloud service (Wifi or
Ethernet bandwidth can be 50x-1000x faster than the typical upload bandwidth
of a home Internet connection.)

It should all work out of the box with zero configuration. That is the only
way the idea can gain traction. Not everybody is a sysadmin, so your
grandmother should be able to make it work. Want to enable your mailbox? Just
tick the appropriate checkbox on your Internet router as easily as you would
sign up for some mail provider. Want to follow the social lives of your
friends? Your browser can render your custom Facebook-style wall by pulling
posts and pictures feeds directly from your friends' Internet routers.
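The wall-rendering step reduces to merging per-friend post feeds by timestamp. A minimal sketch of that step (the feed format is hypothetical, and the HTTP+CORS fetch from each friend's router is stubbed out as plain data):

```python
from typing import Dict, List


def merge_walls(feeds: List[List[Dict]]) -> List[Dict]:
    """Merge per-friend post feeds into one wall, newest first.

    In the scheme described above each feed would be fetched from a
    friend's home router over HTTP (with CORS permitting the
    cross-origin request); here the already-fetched feeds are passed in.
    """
    posts = [post for feed in feeds for post in feed]
    return sorted(posts, key=lambda p: p["timestamp"], reverse=True)


# Made-up feeds standing in for two friends' routers.
alice = [{"author": "alice", "timestamp": 1700000300, "text": "beach pics"}]
bob = [
    {"author": "bob", "timestamp": 1700000100, "text": "new blog post"},
    {"author": "bob", "timestamp": 1700000500, "text": "dinner"},
]

wall = merge_walls([alice, bob])
# Newest post first: bob's "dinner", then alice's, then bob's blog post.
assert [p["text"] for p in wall] == ["dinner", "beach pics", "new blog post"]
```

The hard parts the comment goes on to name (search, spam) are exactly what this client-side merge does not solve.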

We already have most of the technologies needed to implement such
decentralized features: HTTP with cross-origin resource sharing, SMTP,
automatic registration of DNS names to make you discoverable on the net,
OpenID authentication, etc. Maybe some other needed bits can be pulled from
the Wave protocol.

But the sad thing is that I am not aware of any attempt to implement any of
what I described above. Perhaps it is a vision too ahead of its time. Or
perhaps it is because there are 2 important problems that are very hard to
solve in a fully decentralized way: (1) search, and (2) spam. You can easily
search for and find your friend's blog via Google web search because it has an
index of the entire web, but how do you provide this level of quality if the
search engine is your Internet router? As to spam, you can easily filter email
if you are the Gmail team running sophisticated analysis on the billions of
emails processed daily because the more data you have the easier you can
classify it, but how do you implement this filtering quality on an Internet
router that does not have access to such a data set?

Perhaps the solution is to run most of the services in a decentralized way
(email, instant messaging, social networking, etc) _while_ at the same time
relying on a few central services for some features like search and spam
filtering.

Edit: thanks for the pointers to FreedomBox and Sandstorm, I will look into
them.

~~~
fidotron
Git provides most of the infrastructure answers. I think the idea that someone
would host a personal website in future is slightly backwards. Just clone
their website's public repo and browse locally. This could be extended to
social news feeds, where if you just put new content in your repo then other
people that subscribe could see those updates mashed together into a single
feed.

The problem here is the economic incentives to do it are low, but the
technical barriers to solving this are genuinely getting lower and lower all
the time, so at some point it will happen.

~~~
stephenr
So to view your personal website on my phone, I need to clone an entire Git
repo, with the entire history of the entire site.

Some people just treat certain technologies as a "this can fix anything" tool,
when really they're well suited to one task.

This is the sort of thinking that suggests using a Git-based solution to sync
binary files between computers: the one thing even the biggest Git fanboy will
admit Git does poorly is handling binary files, and yet someone wants to use
it for that specific purpose.

~~~
kragen
You could make a shallow clone with just the objects you need for the front
page. If we're taking Git literally here, you need to get the current head
somehow, then fetch the commit object, then fetch the root directory object it
points to, then fetch the index.html blob object it points to. If we're not
using HTTP URLs to name the objects, and the objects in question are small, it
would be pretty easy to configure the server to "optimistically" bundle the
commit, the directory, and the blob into the response to the request for the
head, so it could be a single RTT.

It might actually be more efficient to do a single-version shallow clone than
to fetch all the relevant assets with separate HTTP requests. Git already
supports shallow clones, and you can already use them to clone things like
Github wikis, and now you can even push from them:
[https://stackoverflow.com/questions/6941889/is-git-clone-depth-1-shallow-clone-more-useful-than-it-makes-out](https://stackoverflow.com/questions/6941889/is-git-clone-depth-1-shallow-clone-more-useful-than-it-makes-out). A --depth 1 clone of
[https://github.com/kragen/500lines](https://github.com/kragen/500lines) is 7
megabytes, although the full history is only barely larger. My phone has 400+
megabytes of RAM.
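The mechanics are easy to try. A sketch, using a throwaway local repo as a stand-in for a real website (paths are made up; file:// forces the normal transport so --depth takes effect):

```shell
# Toy demo: a two-commit "website" repo, then a depth-1 clone of it.
set -e
tmp=$(mktemp -d)
git init -q "$tmp/site"
cd "$tmp/site"
git config user.email demo@example.com
git config user.name demo
echo '<h1>v1</h1>' > index.html
git add index.html
git commit -qm 'version 1'
echo '<h1>v2</h1>' > index.html
git commit -qam 'version 2'

# file:// forces the normal transport so --depth is honoured.
git clone -q --depth 1 "file://$tmp/site" "$tmp/mirror"
cd "$tmp/mirror"
git rev-list --count HEAD   # prints 1: only the newest commit came over
cat index.html              # prints <h1>v2</h1>
```

The older commit and its objects never cross the wire, which is the point.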

More promisingly, though, Git blobs are identified by their hashes. That means
that if both my home page and your home page use Twitter Bootstrap 2.3.2, your
phone only needs to download it once to render both home pages, since it can
see from the blob IDs in the Git directory that it already has the relevant
blobs. (Assuming it hasn't been optimistically included in the initial message
response!) It doesn't have to worry that one of us might be serving up a
maliciously-backdoored version in order to steal users' login cookies on the
other's site. By the same token, you can accept the blobs from anyone who has
a copy, who might be closer to you on the network than the origin server.
Especially if, like me, you're in Argentina with a 250ms ping time to the US.

(This assumes you're using something cleverer than the standard git clone
protocol, which builds a thin pack including all the objects you lack.)
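The content addressing behind this is simple enough to sketch: a blob's ID is the SHA-1 of a short header plus the file bytes, so byte-identical files get the same ID no matter who serves them (the file contents below are made up):

```python
import hashlib


def git_blob_id(data: bytes) -> str:
    # Git names a blob by the SHA-1 of the header "blob <size>\x00"
    # followed by the raw file contents.
    header = b"blob %d\x00" % len(data)
    return hashlib.sha1(header + data).hexdigest()


# Two hypothetical sites ship byte-identical copies of the same library file.
bootstrap = b"/* bootstrap 2.3.2 (made-up contents) */ .btn { color: #333; }"
site_a = git_blob_id(bootstrap)
site_b = git_blob_id(bootstrap)

# Same bytes, same ID: a client that already has the blob can skip the
# download, and a backdoored copy would necessarily change the ID.
assert site_a == site_b
assert git_blob_id(bootstrap + b"/* backdoor */") != site_a
```

This matches what `git hash-object` computes, which is why any peer holding the blob can serve it verifiably.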

So git-cloning websites to view them could actually be _faster_ than loading
them over HTTP the way we do it now, because it eliminates a lot of the
unnecessary security issues that we work around with bandwidth duplication and
high latency.

~~~
stephenr
You're kinda proving my point for me there.

The _only_ upside you actually identified is a reduced number of requests,
which is already possible by using HTTP Keep-Alive and HTTP Pipelining.

~~~
kragen
I was just explaining why the downsides you were writing about weren't
serious. I didn't devote much effort to explaining the upside. I guess that a
lot of things that seem obvious to me aren't obvious to you, probably because
you haven't spent much time thinking about them. I'll try to do better:

1. You can use 10MB of widely-used JavaScript and stock icons on your web
page, and the browser will be able to see the page after downloading 100K
instead of 10MB. HTTP keepalive and pipelining don't help at all with that.

2. There are no more broken links, so people can link to stuff that isn't
hosted on their own server without fear that it will go away next year, or be
redirected to more commercially remunerative content, such as linkspam. (This
is conditional on them caching a copy on their own server, of course.) HTTP
keepalive and pipelining don't help at all with that.

3. Resources can be cached close to you on the network without you having to
trust the cache not to feed you corrupted versions of them. (You do have to
trust the cache to not report you to the feds for reading an article about how
to download Interstellar.) This makes a huge difference in page load time by
reducing latency. Again, while HTTP keepalive and pipelining can reduce the
multiple of latency, they can't reduce the latency below 2×RTT to the origin
server, and practically speaking they're limited to several times that. This
is especially important if you're trying to host your personal pages on a
high-latency residential internet connection, and/or browsing from a high-
latency country.

4. Making every website not just archivable but also forkable enables a new
kind of lightweight collaboration. Well, not totally new — I mean, we're doing
it now on GitHub and, in a different form, on Wikipedia.

5. Naming web pages by hash rather than by IP address or by a DNS name that
maps to an IP address makes it easy to host them on dynamic IP addresses,
including with transparent failover when one of them goes down for a while.

~~~
stephenr
CDNs already solve #1 and #3.

#2 is no different than mirroring content, e.g. how fireballed.org works, but
over a longer time.

#4 collaboration is about working on something together. Every visitor to your
site is unlikely to need/want/care about collaborating on it.

#5 You're suggesting an SHA1 hash is easier to remember than a domain name?
Seriously?

~~~
kragen
This "X is no different than Y/X already solves Y" conversation reminds me of
thinking, around 1992, that clicking on a link to a page on another web site
was "no different than" connecting my FTP client to a new FTP server, logging
in, cding to some directory, and downloading and viewing a file from that
directory. In some abstract sense, yes, they're the same thing; but in terms
of human experience, they're very different.

You ask, "You're suggesting an SHA1 hash is easier to remember than a domain
name? Seriously?" Consider that if I seem to be saying something that's
obviously ridiculous, even to you, then maybe you've misunderstood what I was
saying, as in this case. I'm going to give you the benefit of the doubt and
assume that your lack of understanding is genuine, not feigned to give you an
excuse to be rude, but you were rude anyway.

~~~
stephenr
You suggested Git-based web page hosting would allow for local mirrors of
content, so it is available regardless of the availability of the original
site.

I told you that is already happening in practice, without the overhead of Git.

In your example, I would not agree those two things are the same. Clicking a
link that goes to an [http://](http://) URL and clicking a link that goes to
the same file via ftp:// are, I would say, _reasonably similar_ for the end
user.

You're right, I didn't read into #5 enough. So for the purposes of "easier
dynamic IP hosting" you're suggesting that browsing to an SHA1 hash means it
can come from any computer that has a copy of it?

But nobody is going to remember hashes, so you need DNS or similar. Say
"foo.com" has a GIT record, the value of which is an SHA1 hash... but then the
client needs to find somewhere to get that from... so presumably you're
suggesting some kind of peer-to-peer system for hosting?

And all of that is somehow better than the existing model, where you simply
have a DNS server that has an API to update the IP address of an A record
quickly?

You've also skipped over a big issue here: this whole concept relies on a
website being nothing more than static files.

It also effectively removes the ability for the author to have any idea at all
about how many people view his site.

Git is reasonably good at what it does. It has some flaws, but for versioning
text-based content and fostering collaborative working amongst
developers/authors, it's a reasonably good solution.

As a replacement for HTTP web pages, caching proxies, CDNs, and parts of DNS?
Not so much.

------
Immortalin
One reason why companies would rather get fleeced by AWS etc. is that hosting
your own server is both troublesome and expensive. This isn't much of a
problem if you live in a temperate region, but it is a HUGE problem if you
live near the equator. Temperatures can easily rise to 33 degrees centigrade
and beyond, and maintaining a server without round-the-clock cooling in an
air-conditioned room is close to impossible (unless your server is a single
Raspberry Pi). That's why people would rather pay for IaaS services;
sometimes it is simply cheaper for someone else to host it.

------
sgt101
A serious point here: warehouse-scale computing beats p2p in DSL (or even
feasible FTTP) environments. Essentially the model of the internet we have is
broadcast with decoration from the upstream channel.

To change this services using upstream that make money for the pipe providers
are required. Otherwise, no upstream.

Freeloading cannot work. Repeat: CANNOT. Because you need fellas with
high-school educations to look after the fibre and the boxes.

Downstream will remove the boxes and put them into warehouse scale facilities.

So - Haxxors, get yourselves to the internet of things if you wish to see
democracy preserved.

Freedom is in peril.

Defend it with all your might.

~~~
zAy0LfpBZLC8mAC
Why do you think that for FTTP? I do see the problem with ADSL as it is
currently implemented (as analyzed very well by kragen in the mail linked to
in a sibling post to this), but why would the same problem apply to FTTP?

Also, I don't get how you get to freeloading - upstream bandwidth is as much
part of the product that the customer is paying for as the downstream
bandwidth is.

~~~
sgt101
Downstream bandwidth is mostly delivered using caches. Upstream bandwidth is
not delivered using caches. So to provide the backhaul for upstream bandwidth
we will do deep reach into the datacentre <-> home and interconnect in the
data centre. Therefore FTTP will not provide p2p connectivity unless there is
a need for local connectivity, because deep reach and datacentre networking
are rendered non-viable by sheer traffic volume. But the traffic will need to
be geographically localised and/or the services will have to provide the
economic support for backhaul uplift.

I think we can probably do petabits on single-mode fibre bundles now with the
right boxes, but the trick is the right boxes, which are expensive and take
lots of electricity to run. So even if you can pony up for an FTTP roll-out,
the use cases are going to be constrained by the network architecture that is
used to support it.

Now, for most people this is not a problem as the up channel is basically
irrelevant in terms of bandwidth in the current consumer internet, but if you
are the sort of person who imagines a decentralised and democratic consumer
internet which is not mediated by massive supernational companies, or if you
are the sort of person who is interested in where the value is captured in the
industry value chain I think this does matter.

Sharing other people's content will not fund a democratic internet even if
governments fund and regulate FTTP. We need a new class of applications that
generate revenue sufficient to pay for the infrastructure that will facilitate
them. If we get that, then the infrastructure will also facilitate a shift
away from the data centre (not its elimination, though, as it will be the way
that the current use cases are delivered).

If you have any ideas please write them up - before the industry dies!

~~~
zAy0LfpBZLC8mAC
I'm sorry, but I still don't get what you are trying to say.

For one, caches are not exactly located on customer premises, either, right?
They tend to be located in places that have easy access to power and are well-
connected with fibres, and so far transmission speeds seem to be mostly going
up.

Also, well, yes, social networks tend to be geographically localised, so I
would very much expect p2p application traffic to also be geographically
localized!?

Widely consumed content that's distributed in a p2p fashion obviously is just
as amenable to caching as widely consumed content that's distributed by a
central service, if it's distributed through some kind of content-addressed
network. If that's cheaper for the ISP, they could just put caching nodes into
their data centers.

Finally, I completely don't get why you think my communication should
"generate revenue". My telephone calls don't generate any revenue either, do
they? I simply pay someone for moving my bits around, that should be
sufficient motivation for them to take care of moving my bits around.

~~~
sgt101
I think that caches work where you have a vast imbalance of requests for a
particular bit of content - they seem to me to define centralization.

In terms of generating revenue - I mean providing a service that people are
willing to pay for at a level that will fund a network that enables the use
case and decentralization. Current services tend towards deep reach from the
datacentre, because that's cheap. If all services that people are willing to
pay for are suited to that model, that's what everyone will get. To have a
different infrastructure model you need a service that people will pay for
that is not well served by deep reach from the datacentre.

It's not the moving of the bits that is at issue; it's how the bits are
moved, where, and by whom.

~~~
zAy0LfpBZLC8mAC
"I think that caches work where you have a vast imbalance of requests for a
particular bit of content - they seem to me to define centralization."

Huh? Caches for geographical distribution are actually a form of
decentralization! The problem with the existing ones is that they are mostly
under centralized control, but there is no technical reason why that has to
be the case. An ISP's caching web proxy (which wasn't that uncommon back in
the day) is a simple example of caching infrastructure under decentralized
control that could help an ISP reduce the load on its upstream connection for
often-requested content. You don't need YouTube to copy a video from your
machine to the cache at some viewer's ISP; that's just the way it is commonly
done right now, because that way YouTube can make money. Or, if you want to
decentralize further, you could copy to a cache on the machine of some
customer of said ISP in some geographic region: the more requests there are
for some content, the more likely it is that there is a copy in your
neighbourhood that doesn't need to be transferred from the original source.
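The content-addressed caching idea described above can be sketched in a few
lines. This is a hypothetical toy (the class name and structure are invented
for illustration, not any real ISP or p2p software): because the cache key is
derived from the bytes themselves, it makes no difference whether a chunk
arrived from the origin server or from a neighbour's machine.

```python
import hashlib


class ContentAddressedCache:
    """Toy cache keyed by the SHA-256 hash of the content itself."""

    def __init__(self):
        self._store = {}

    @staticmethod
    def address(content: bytes) -> str:
        # The key is a pure function of the bytes, so identical content
        # always maps to the same entry, regardless of where it came from.
        return hashlib.sha256(content).hexdigest()

    def put(self, content: bytes) -> str:
        key = self.address(content)
        self._store[key] = content
        return key

    def get(self, key: str):
        # Returns None on a cache miss, i.e. fall back to a remote fetch.
        return self._store.get(key)


cache = ContentAddressedCache()
video_chunk = b"frame data from a popular video"

# First request: fetched from the origin (or any peer), then cached locally.
key = cache.put(video_chunk)

# Later requests for the same chunk hit the local copy instead of upstream.
assert cache.get(key) == video_chunk
```

A real system would add eviction and peer discovery, but the core property is
just this: popular content naturally ends up cached close to its consumers.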

------
notacoward
Maybe if Google actually open-sourced as much software as they consume, that
would offer a way out. Some people would still want to run the centralized as-
a-service tools because it's on a huge professionally-run infrastructure and
because of network effects. Others might want to run the same services
locally, to support privacy/decentralization goals like the OP's. Google's
main revenue stream (ads) shouldn't be affected, and they'd even benefit from
outside contributions.

So why don't they do this? Because of the difference between "shouldn't" and
"wouldn't" at the end of the last paragraph. What if code to support ads - or
other invasions of privacy - were embedded into every other thing they might
open source? Then the sheer difficulty of refactoring to "sanitize" everything
else would probably create a sufficient barrier to opening it up, and
revealing how all of that ad infrastructure really works might hurt their
business in other ways.

When Google continues to run as the new AOL (closed source, walled garden) I
don't think it's part of a deliberate philosophy. Nor is it an accident. It's
because their continued uber-prosperity, if not their actual survival, depends
on it.

~~~
kragen
I don't think it has to be that extreme. They do have a corporate culture of
extreme secrecy, which I think comes from growing up in the shadow of
Microsoft, Yahoo, and other much bigger companies. But I don't think their
survival or even prosperity depends on secrecy any more. However, it's deeply
embedded in their corporate culture, and I think it's actually spreading to
other Silicon Valley companies.

There _is_ an issue of incentives, but I don't think it's as strong as you
think. I think it's just that running services makes them money, while
releasing open-source software doesn't, so they devote lots of effort to
running services, and comparatively little to releasing open-source software.

It would be very challenging for them to open-source "as much software as they
consume." That would be a many-billions-of-dollars effort. But it wouldn't be
necessary — the great thing about software is that you can copy it, so even if
you release only a tiny fraction of the software you "consume", everyone
still benefits.

------
steven777400
Maybe tag with (2011)? It's possible some of the institutional issues have
changed in the past couple years.

~~~
kragen
Well, they have changed, but they've changed for the worse. Google has
gradually been succumbing to the pressures I identified in that post; witness
the move of Android to put gradually more services into proprietary Google
Play Services, the ban on disconnect.me, the years-long struggle over
pseudonymity on Google+ (finally resolved for the best), the ugly debasement
of search results to drive traffic to Google+, the ban on non-Chrome browsers
accessing Hangouts, the abandonment of XMPP compatibility by deceptively
pressuring Google Chat users to move to Hangouts instead, and so on.

These are by no means crimes, but they're more evil than the things Google
would have done in 2008 or 2009, and they're certainly not moving the internet
in the direction I want to see it move in.

~~~
abraham
> the ban on non-Chrome browsers accessing Hangouts

What do you mean by this? The documentation lists support for Chrome, IE,
Firefox, and Safari.

[https://support.google.com/plus/answer/1216376](https://support.google.com/plus/answer/1216376)

~~~
kragen
I haven't tried using Hangouts lately, so maybe it's been fixed. It got broken
back in August; [http://robert.ocallahan.org/2014/08/choose-firefox-now-or-
la...](http://robert.ocallahan.org/2014/08/choose-firefox-now-or-later-you-
wont.html?showComment=1407817453520#c5518384044027599177) explains:

"I can understand asking why Hangouts doesn't work in Firefox. In short, we
wanted to transition to WebRTC sooner rather than later, and at the moment
there are things holding us back on both our side (e.g. upgrading our ICE
implementation) and the Firefox side (e.g. supporting multiple video streams).

"But overall, surely you're not arguing that transitioning a major Google
application from a proprietary plugin to an open web standard somehow
demonstrates that Chrome doesn't value web standards."

~~~
azakai
It doesn't look like it was fixed. I tried:

1\. Go to [http://www.google.com/hangouts/](http://www.google.com/hangouts/)

2\. Click "get hangouts"

3\. Click "computers"

Result is "You'll need to download Chrome before installing the Hangouts
Chrome extension. Do you want to download Chrome now?"

edit: I think i remember being told on twitter that this was a confusing UI
that would be changed. But that was several months ago.

~~~
magicalist
I think the "confusing UI" thing becomes apparent if you click and try it. By
hangouts on "computers" they mean as a desktop-installed app (which is just a
wrapper around a Chrome app). I don't know why you'd want that (basically just
a bookmark on your desktop?) but that's what it is.

If, however, you just start a hangout inside of gmail, it works fine in
Chrome, Firefox, Safari, and IE (just hung out with myself on two computers in
multiple browsers to test that out) and there was never a time that was
broken, contrary to the GP post.

~~~
azakai
Well, when you click "get hangouts" the options are "Android, iOS or
Computers". So anyone that wants to run it on anything but a smartphone gets
told that it can only run on Chrome. I don't see it mention anything about an
app as opposed to running in a browser.

Perhaps this is just a confusing UI, but if so, it would be very easy to fix.
If that's the case, I'm puzzled why it hasn't been fixed.

~~~
magicalist
yeah, I had never visited that site until that discussion was making the
rounds on twitter, so, no clue. I could see an argument for "this is a site
where you can install hangouts as an app". There's nothing particularly wrong
with using Chrome as a runtime (imagine if Mozilla's old Prism project had
stuck around and people made "native" apps with it...requiring Firefox to run
it would be just that: a requirement to run the app because that's how Prism
worked), but if your motivation was to get users to use hangouts, I don't know
why you wouldn't also mention that you probably already have it if you use
gmail, so just open it up in there.

My only claim is that if you open up a chat window in gmail and hit the video
button it works in other browsers and that using hangouts that way was never
broken.

~~~
azakai
If so, I wonder why the site doesn't just say "to run Hangouts on a computer,
just load a hangout in your browser" (possibly with a link to the right
place). It seems such an easy thing to fix, and it was pointed out publicly,
yet no change has been made.

It seems unlikely to me that the assumption is that people that pick
"computer" over "smartphone" distinctly want a native app as opposed to just
running hangouts in the easiest way possible. All they did was click on "get
hangouts".

------
rwallace
Bloody hell, talk about rose-colored glasses.

No, back in the nineties, you could _not_ do any of the things he talks about.
Not unless you were a high-status employee of a handful of major corporations
or universities. Most people couldn't get online at all, and those who could
were lucky to have client-only access to e-mail and the web.

Nowadays, anyone with a credit card and an Amazon account can set up an
always-online server running pretty much any software they choose. The
Internet is far more decentralized, by the criteria he invokes, than it was
back then.

~~~
kragen
Your experience may have been limited, although I am not going to claim that
mine was typical.

As I explained in the article, I did the things I talked about (running online
services, including email and web pages, accessible from the entire internet)
with a US$20/month dialup internet account on an ISP run by a group of MUDders
who ran the ISP as a way to play MUDs, starting in 1997. I installed my first
web server (from EIT's Webmaster's Starter Kit) temporarily on an IRIX
workstation in a university computer lab, in 1994. I was an undergraduate
student at UNM, a state university in the second-poorest state in the US,
although I admit UNM was unusually progressive. The year before, I was an
unpaid student system administration intern, with a user account on all the
Suns at the math department, assigned tasks like "please get tvtwm to compile
on SunOS 4.1.4; Prof X wants to use it".

I agree that it's pretty awesome that anyone who wants to pay Amazon or
DigitalOcean US$5 a month (and can get access to a credit card they'll accept)
can run any internet-facing server software they want. Unless they're
WikiLeaks, say. Or infringing on an invalid Amazon software patent. Or just
want to not have random employees have hardware access to their machine for
reasons of security and privacy.

So I don't think the internet is far more decentralized. It's bigger, which is
pretty great, and it's a lot cheaper, which is even better, but it's also more
centralized.

------
pmoriarty
Because Google makes its money by spying on its users.

It's also an advertising company, which means it brainwashes and deceives
people into buying garbage.

Both activities are unethical, and I wouldn't want anything to do with either.

------
sneak
> My friends Len Sassaman (who committed suicide in the first few days of
> July), Bram Cohen, Jacob Appelbaum, and Zooko O’Whielacronx have made
> substantial contributions.

One of these things is not like the others. Three of these people have made
significant contributions, in code. One of them has talked a lot and claimed
the work of others as their own.

