Tim Berners-Lee wins Turing Award (mit.edu)
1735 points by melqdusy on Apr 4, 2017 | 287 comments

Key quote (for me) from the interview:

"A social network is disempowering because you put a lot of energy into it, all your personal data out there, and tell it who your friends are. You can only use that information inside the silo of that particular social network."

source: https://www.technologyreview.com/s/604052/webs-inventor-tim-...

Tim's vision for a Giant Global Graph (2007)


and something we tried to do as well: http://yansh.github.io/articles/moana/

Yeah. I wish we had something more decentralised, like e-mail, but for social networks. Like, a standardized social network protocol that most people use (popularity is important here). There is nothing special about what social networks like Facebook do, the biggest challenge is probably serving so many requests and storing lots of data with high availability.

As a matter of fact, this is what Sir Berners Lee is working on right now at MIT.


The issue is not storing the data, just like "email" doesn't store data. It's just a protocol and a data format. This makes it interoperable across whichever industry player wants to spring up and compete for your service.


Don't like gmail? Go to ProtonMail. And you don't lose the ability to interact with people who do use Gmail.

Don't like Facebook? Go to Ello. But now you've lost your entire network.

The decentralized web will be built on data formats and protocols that allow you to take your data with you and force companies to compete with the quality of their service, not the size of their network.

But half the point of a social network isn't the ability to communicate, but rather the history you've built there. Likewise, half the value of email isn't the ability to send a message to anyone, but the history of emails you have. Sure email isn't a data storage method, but in reality? Yeah, email is a data storage method. That's why Gmail gives me 25GB for free. Not because that's the size of my inbox, but because that's the size of every email I've ever gotten.

If I move from Gmail to Outlook, I can still email my friends but I can't go back and search our old conversations unless I export everything from Gmail to Outlook. If I switch from Facebook to Instagram, I can still message my friends but I can't see all of the pictures I posted on Facebook unless I export the pictures from Facebook to Instagram, but even then there are a lot of features of Facebook that I can't export data from because they just don't exist on Instagram.

History creates lock-in, and history is really hard to move around.

Gmail supports the IMAP and POP3 protocols, which allow all history to be trivially imported somewhere else.

That's the importance of open protocols.

History is crazy easy to move; any difficulty we have moving history is intentionally caused by the companies with a vested interest in keeping you from moving that history.
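As a concrete illustration of that portability, here is a minimal sketch using Python's standard mailbox module: messages written to the mbox format (which most clients can export to and import from) can be read back by any other tool, with no provider in the loop. All addresses and file names here are invented for the example.

```python
import mailbox
import os
import tempfile
from email.message import EmailMessage

# A throwaway mbox archive -- the portable on-disk format.
path = os.path.join(tempfile.mkdtemp(), "archive.mbox")
box = mailbox.mbox(path)

msg = EmailMessage()
msg["From"] = "alice@example.org"
msg["To"] = "bob@example.org"
msg["Subject"] = "Portable history"
msg.set_content("This message survives a provider switch.")
box.add(msg)
box.flush()

# Any other client or script can now read the archive back.
subjects = [m["Subject"] for m in mailbox.mbox(path)]
print(subjects)
```

The point isn't this particular format; it's that an open, documented container makes "history" just a file you can carry.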

History should be decentralized too, then.

Who remembers RSS feeds? I miss those days. We should have some superior spin on that but for social networks.

Imagine Facebook, but your history/data were your own property. A friend would find you on Facebook, or Ello, or AnythingAnyWhereBook ... add your feed, and done. Then it is up to the social networks to make a superior product for you to enjoy your friends' data and interaction.

Right now it's about the monopolisation of our data.

Zuckerberg. If you're reading. Your current model is bullshit.
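A feed of social posts needn't be anything more exotic than what RSS already does. A minimal sketch in Python, building and re-parsing a tiny RSS 2.0 feed with the standard library (the names and URLs are invented):

```python
import xml.etree.ElementTree as ET

# Build a minimal RSS 2.0 feed of "posts" -- the channel/item
# structure is all a subscriber really needs.
rss = ET.Element("rss", version="2.0")
channel = ET.SubElement(rss, "channel")
ET.SubElement(channel, "title").text = "Alice's feed"
ET.SubElement(channel, "link").text = "https://alice.example.org/feed"

item = ET.SubElement(channel, "item")
ET.SubElement(item, "title").text = "Hello, federated world"
ET.SubElement(item, "link").text = "https://alice.example.org/posts/1"

feed_xml = ET.tostring(rss, encoding="unicode")

# Any client, anywhere, can subscribe by fetching and parsing
# the same XML -- no platform in the middle.
parsed = ET.fromstring(feed_xml)
titles = [i.findtext("title") for i in parsed.iter("item")]
print(titles)
```

The consumer needs nothing but the URL; which client renders the feed is then pure competition on quality.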

>Who remembers RSS feeds? I miss those days. We should have some superior spin on that but for social networks.

I think a big missed opportunity for Google on social was the failure to cultivate Google Reader into an open social network (it already had social features), and build each aspect of it on an RSS/OPML-like structure.

What do you miss about RSS feeds? They're still there... It's very rare that I come across a blog that doesn't have one, even now...

You're right, of course. It's possible I'm still pining for my perfectly set-up Google Reader account which I cannot quite replicate. That still hurts! RSS felt ubiquitous (to me) back then.

Anyway, to the point, try using RSS with facebook in the above context.

I hate being locked in - hence I gave up on FB. It's trash to me until it solves this issue.

> Anyway, to the point, try using RSS with facebook in the above context.

For me they're different things. RSS is for things that I want available until I get around to reading them or until I manually mark them as read, this is precisely what I don't want out of facebook.

I suspect parent was talking about the distributed nature of it. Subscribe to whoever you want, allow anyone to subscribe to you.

My RSSing calmed down, it's just the XKCD feed now. But it still works perfectly.

Intriguingly, given how many RSS feeds are out there from WordPress blogs, about the only way I've found to get news on WordPress releases is via their RSS feed.

I don't see what would make basic facebook history hard to move around. Facebook of course already offers a "Download a copy of your Facebook data" function. Future-facebook could import it, or at least the most fundamental parts of it, probably -- the problem is not the data itself, but the connections in it to other users; without those, it's not that valuable. I suspect there isn't much interest from users in things-trying-to-be-facebook importing the data.

I think it's the ability to communicate that's harder to move around, and why there's such facebook lockin. Your social network and ability to communicate with them in a public/group fashion.
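That split -- posts travel, connections don't -- is easy to see in a sketch. The JSON layout below is entirely invented for illustration (the real "Download your data" archive looks different); the point is only that posts import cleanly anywhere, while friend references are names in Facebook's private namespace:

```python
import json

# A hypothetical export archive (invented structure).
export = json.loads("""
{
  "posts": [
    {"timestamp": 1491264000, "text": "Congrats, Sir Tim!"}
  ],
  "friends": [
    {"name": "Bob Example", "profile_id": "fb:10023"}
  ]
}
""")

# The posts themselves are portable data...
texts = [p["text"] for p in export["posts"]]

# ...but the friend links only resolve inside Facebook's own
# namespace; stripped of it, nothing identifies the person.
portable_friends = [f for f in export["friends"]
                    if not f["profile_id"].startswith("fb:")]
print(texts, len(portable_friends))
```

Which is the parent's point: the lock-in lives in the graph, not the content.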

>I suspect there isn't much interest from users in things-trying-to-be-facebook importing the data.

And the legalities of that.

FB did a major legal smackdown on someone trying to do that.

Do you have a link to the history here? Would love to hear about it.

Curious for a link to that. Facebook lets you download your own data. But if you upload that data somewhere else, they're going to sue someone?

> History creates lock-in, and history is really hard to move around.

It doesn't need to be. For example, I moved my email between IMAP servers multiple times simply by dragging & dropping a swath of messages in Outlook. That Outlook (of all programs!) is the only tool that I know of that makes this easy is not a problem with the decentralized nature of email but simply with mediocre client implementations.

> That Outlook (of all programs!) is the only tool that I know of that makes this easy

Not sure what you mean.

It works just as easily with Mutt. Open the IMAP folder, select all, save to [another IMAP folder or a local folder, as you wish].

Works reliably even with very large folders; you just have to wait a bit longer until it's finished.

Practically every email program can move email easily. It's fairly trivial.

Recently we moved all our company mail with IMAPsync, an automated command-line tool. Awesome.

Thunderbird can also do this.

> history is really hard to move around

Is it? I think there's a way to get around that, if the protocol defines standards for verbs.

Of course, it's difficult and can grow wild, yet very much possible.
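The "standard verbs" idea can be sketched in a few lines, loosely in the spirit of ActivityStreams (the record shapes and verb names here are invented, not any real spec): if every action is an (actor, verb, object) record, any compliant service can replay the parts of someone's history it understands.

```python
# A user's history as a flat log of standardized verbs
# (hypothetical schema for illustration).
history = [
    {"actor": "alice@a.example", "verb": "post",
     "object": {"text": "hello"}},
    {"actor": "alice@a.example", "verb": "like",
     "object": {"ref": "b.example/42"}},
    {"actor": "alice@a.example", "verb": "follow",
     "object": {"ref": "bob@b.example"}},
]

# A new service imports only the verbs it implements and
# safely ignores the rest -- no all-or-nothing migration.
supported = {"post", "like"}
imported = [e for e in history if e["verb"] in supported]
print([e["verb"] for e in imported])
```

Unknown verbs degrading gracefully is what lets the vocabulary "grow wild" without breaking older services.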

That's Sir Tim(othy). The "Sir" attaches to the given name, not the surname.

I've never quite figured out how sirs work. I know Knights were addressed with sirs and the Queen had something to do with it, but how does it work in a multicultural UK?

Knighthood is a life award. You are given the rank of Knight Commander of an appropriate order or made a Knight Bachelor (no particular order; lower precedence when it comes to deciding who walks in front of whom). The knighthood dies with the knight. Again, the "Sir" attaches to the given name ("first" name, even if that comes last or somewhere in the middle, or is a compound name in itself). In general, the "Sir" is only used for address or casual reference; you'd use the post-nominal letters in a more formal written setting.

Debrett's is probably the place to find this out.

Here are the rules for putting the letters after names: https://www.debretts.com/expertise/forms-of-address/letters-...

The Joint Forms of address gives you some clues about the order of words before the name: https://www.debretts.com/expertise/forms-of-address/joint-fo...

Note that most people don't know about this stuff, and don't really care about it.

I've never understood why people (particularly outside UK) care about some "sirs" at all. "Mr. Tim Berners-Lee" sounds just fine to me.

Why use a title at all if you're going to use the wrong one?

He's still a mister. It's whether you buy in to "this is a male adult" vs. "I must respect this person's title because HM The Queen bestowed it on them".

Mr Berners-Lee surely won't feel offended because the person addressing him isn't a [UK] royalist?

Calling him Berners-Lee sounds like you would rather not use titles at all, which seems like a defensible position. Calling him Mr Berners-Lee sounds like you're going out of your way to deliberately use the wrong title, which just seems rude.

All adult males in the UK that you don't know the first name of but do know the surname of are called "Mr $Surname" as a social norm. Adjusting that designation because the Queen says so is very common too, but maintaining the status quo is just being apathetic towards the Queen's edicts.

Most of her subjects do not complain. Some of them think we should have done away with such notions of reigning over other people already.

FWIW outside celebrity circles it is often considered quite gauche to insist on such pompous titles.

If you insist I'd be happy with calling him Professor Berners-Lee (even assuming he doesn't still have an active official professorship). If you want to talk about proper titles then is Sir Berners-Lee also rude when it should, by the right of HM EIIR be "Professor Sir Berners-Lee" [when spoken]?

If you're worried about recognition I think Mr Berners-Lee is not lacking in that department and was already party to the Queen's inner circle as a holder of the Order of Merit, which appears to be a far greater royal honour than a "mere" KBE. He has my utmost respect, if you doubted it.

That's not the case though, just as, more obviously, 'Mrs' isn't limited in meaning to 'this is a female adult'.

Increasingly and to the point of arguable totality, such titles are bestowed by popularity, committee, and HM Government. Damned shame, since it would mean more what it should were it not given to celebrity riff-raff for 'services to sport'.

I digress. You say it's fine as a non-'UK royalist' to use an improper title; I say I bet most of the world doesn't use even Mr, and I'd do my utmost to pay proper respect to local custom and any honours.

Does local custom work both ways? I think most Americans would consider "Mister" perfectly courteous.

I'm not sure what you mean by working both ways, I'm saying as the speaker one respects the subject's title, even if not 'recognised' natively.

For example, I'm not Catholic, but I would of course refer to Pope Francis; not 'Mr Francis', or anything involving his birth (as opposed to regnal) name.

I think the norm is the current surroundings, not wherever the subject is from. Dr. Smith is Smith-sensei in Japan. The New York Times is known for its use of courtesy titles, and it calls knights "Mr.".

Pope is a high office, and Francis is his chosen name. It would be rude of you to call him by his birth name; it would be rude of him to insist you call him "Your Holiness", although you probably should if you're visiting the Vatican.

Do you similarly never refer to people as Doctor, Reverend, Minister, or Coach so-and-so? It's part of his name. If Sir Tim didn't want to be knighted, I feel confident he wouldn't have been. You'll note that his wikipedia page calls him Sir Tim, and I feel confident he could change it if he so chose.

You are clearly free to call him whatever you want, but I would advise you to be honest enough to realize that you're projecting your feelings about royalists onto him, and not calling him what he has chosen to be called.

In practice you're right, I call people what they indicate their designation to be. Where I work we've had a person call themselves something like "Grand Wizard-Admiral of the Terran System" - some people like a pompous name.

If a person is an MD, professor, and such then that's totally acceptable. If the designation is "the Queen says you have to call me sir now", I don't really see what business it is of the Queen's, or why it's more respectful to recognise a person because "the Queen smiled at them" than to recognise their actual work/effort, as fellowship of a learned society does.

It's an anachronistic system of nepotism; bleurgh. What's not to like? /s

Knighthood isn't a royalist issue. Australia has knighthoods bestowed by the prime minister. Unfortunately our PM decided to knight Prince Charles...

A knight of Her Britannic Majesty Queen Elizabeth the Second's Order of the British Empire. Yeah, nothing to do with royalty /s.

They could become a republic and everyone would be a knight of the president instead.

Because it's a nice way to recognize some distinguished people; that's why people care about sirs.

Not sure I'm following you -- yes, the Queen knighted him. How does the multicultural part come in? (Women receive the same award but it's called a Damehood and their prefix is Dame.)

If Kazuo Ishiguro is knighted, should he be addressed as Sir Kazuo or Sir Ishiguro? The point being that people of Japanese descent might place their first names after their last names.

This question doesn't seem very relevant to knighthoods. Whatever would be appropriate to call someone as their given name, rather than family name, would go after "Sir". It's not very complicated.

Sir Kazuo. With some exceptions - Japanese people "normalize" their names to English name order of Given Name, Family Name when writing or speaking in English. Some choose not to, usually out of national pride, but it is most common to normalize the name to the target language.

[0] https://en.wikipedia.org/wiki/Japanese_name#Japanese_names_i...

A sidebar issue: all these rules apply to knighthoods from the UK crown only. And under these rules non-Brits may be knighted but do not use the title 'sir'. Thus Bill Gates KBE is not known as Sir Bill.

(edit: Today I learned that Kazuo Ishiguro is British. The statement below is completely wrong. I've managed to get his nationality wrong while enjoying his books for 20 years!)

So in the specific case of Kazuo Ishiguro, the sir does not apply. For Brits with family-first names, see the other answers.

Kazuo Ishiguro is British. He's not called "Sir" because he has an OBE. He'd need a KBE or GBE for the "Sir".


And were he not British, he would not be entitled as such - Gates and Geldof, iirc, are such examples; sometimes (incorrectly) titled 'Sir' in the press.

That's a language issue, not a culture issue. Normally, one uses the name ordering appropriate to the language one is speaking at the moment. Therefore, the question is equivalent to "What is the Japanese translation of 'Sir'"

It's fortunate, then, that it's Sir GivenName, not Sir FirstName.

So how does one export their existing network information to these open formats? I don't think the companies who currently hold that information will part with it willingly.

If it catches on, I assume a bunch of services will pop up offering to export your data. All they'll need is your password.

But how would you verify that YourFacebookFriend is YourFacebookFriend on some other service?

This is a solved problem, but not one that would catch on. See: PGP

If my friend can sign a message using the same key on both services - and I already trust that key is them - I can be almost certain it is them. Regardless of the service.

This is one of the first use cases for keybase.io - verifying identities across a few main social sites. (E: For clarification, Keybase makes it easier to find these proofs. It isn't necessary as part of the proofs.)

For example, you can find me on Reddit or Twitter and know it is me. You can also see my website URL and know that I own it - and a bitcoin address where you know I will receive the money. This is because I've verified that I own these accounts using PGP - and I've done the same for HN in my user profile. See: https://keybase.io/nadya
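The cross-service proof boils down to one primitive: sign a statement with a key your contacts already trust, and let anyone verify it. A minimal sketch using Ed25519 via the third-party `cryptography` package (keybase/PGP use different key types and formats; the account names here are invented):

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
)

# One keypair, reused across services. The private half never
# leaves its owner; each service only ever sees signed proofs.
key = Ed25519PrivateKey.generate()
pub = key.public_key()

proof_twitter = b"I am @alice on twitter.example"
proof_reddit = b"I am u/alice on reddit.example"

sig_t = key.sign(proof_twitter)
sig_r = key.sign(proof_reddit)

# A follower who already trusts `pub` checks each proof
# independently -- verify() raises InvalidSignature on forgery.
pub.verify(sig_t, proof_twitter)
pub.verify(sig_r, proof_reddit)

# A signature lifted onto a different claim fails:
try:
    pub.verify(sig_t, b"I am @mallory on twitter.example")
    forged_ok = True
except InvalidSignature:
    forged_ok = False
print(forged_ok)
```

Services like Keybase mostly add discoverability on top of this: they make the proofs easy to find, but aren't needed for the math to hold.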

>This is a solved problem, but not one that would catch on. See: PGP

It's only 'solved' if you don't care about usability. PGP quickly becomes a nightmare when you consider the day-to-day things 'normal' computer users will go through (private key stolen, password to private key forgotten, phishing attacks to sign replacement keys for friends, etc).

They already do allow you to export your data, right?

Yes, some of it, but not your actual network information (friends, contacts), at least not in a way that is meaningful outside of those services.

Sort of off topic, but this is exactly what I want in electronic medical records. Being able to take my data and give it to whichever healthcare provider I go to.

The goal is solid, but I wonder how it might prevail. The flexibility to change at any time doesn't seem a priority for most people. Most people search for a satisfactory solution and stay with it until it's no longer bearable, or a competitor many times better appears. The same for most things in one's lifestyle.

> There is nothing special about what social networks like Facebook do, the biggest challenge is probably serving so many requests and storing lots of data with high availability.

The difficulty with decentralising is that these are the biggest challenges for centralised systems like Facebook; they're problems which are very nice to have ("oh no, we have a billion users!").

The biggest challenge for decentralised systems is getting momentum behind an agreed-upon protocol.

Email was born when there were few people who needed to agree, and their major concern was being able to send/receive messages. As the Internet expanded, the momentum behind email ensured that it could ride that wave as well.

These days the Internet, and hence the number of people who need to agree on the protocols, is huge. Think about, for example, HTTP2; how many organisations and individuals were involved in its definition and publication? How many more have been involved in implementing it for various browsers, servers, crawlers, libraries, etc? How many more have been installing updates, editing config files, etc. to use it?

Another change is the concerns of the stakeholders; if all we cared about were sending/receiving messages, then we could use something like telnet, HTTP, email, etc. Instead, the concerns are more fuzzy (e.g. "presence", "likes", etc.) and more political (silos vs federation vs p2p, expectations of privacy, encryption, archiving, searchability, etc.). Reaching agreement on such things is very difficult, as everyone wants different things.

The results either:

- Have buy-in from too-few people (e.g. diaspora, pump.io, gnusocial)

- Are so general as to be inappropriate for most particular tasks (e.g. RDF; an interesting example considering that RSS is a stripped-down subset of RDF, and is/was widely adopted; also compare to ATOM, which took that subset, threw away the ties to RDF, and became more interoperable!)

- Only offer tiny featuresets (e.g. the various microformats)

"email" isn't even a single protocol. Even just talking about SMTP you'd run into lots of "common edge cases" where different servers respond to errors differently, a lack of TLS uniformity, variations in auth, variations in host detection, etc. And that's before you take collection into account (POP3, IMAP, IMAP+, ActiveSync, etc).

Email is a mess of kludges and frankly could benefit from a ground up re-implementation. In fact many have tried. But like with creating a decentralised social network, it doesn't matter if you have an arguably better implementation if you don't generate the snowball effect attracting users to your "better" platform.

> an interesting example considering that RSS is a stripped-down subset of RDF, and is/was widely adopted

RSS 0.9 was RDF, as was 1.0. 0.91–0.92 (and the 0.93/0.94 drafts) and 2.0 had nothing to do with RDF.

>These days the Internet, and hence the number of people who need to agree on the protocols, is huge.

These days however it's not the people who decide, it's organizations.

>Think about, for example, HTTP2; how many organisations and individuals were involved in its definition and publication?

Many may be involved, but very few can decide whether to use it or kill it. Namely, Google and Facebook would be enough. Descending down to individuals, it's about 3-5 from Google and one from Facebook.

That was essentially the promise of RDF, OWL, the Semantic Web, Web 2.0 and so on. After swimming in that stuff for 3 years in the early '00s, I could tell back then that it was mostly a boondoggle, and even though those technologies have found their niche, the project failed for the same reason that the original WWW was successful.

It was like 'AI' in the 80's, but with XML.

The feeling I got was that most of those were just too general to be suited for any specific purpose. You could, in principle, build a piece of social networking software out of them, but there was never a product that you could just deploy to your web server and use.

That's the nice thing they did in the 90s and before. They created standards like FTP, HTTP, Gopher and whatever, and people could build upon these standards. Now we are back to everything being proprietary.

Have you heard about Mastodon?


It's like a decentralized version of Twitter, with federation. You might need to find an instance that's not busy with signups though.

Mastodon is (mainly?) a new implementation of GNU Social. It federates with existing GNU Social servers (which, as StatusNet, go back to 2007 or so). I hear it's a good quality implementation, though.

This this this.

I really like the idea that Eben Moglen seems to pitch in every one of his recent talks about FreedomBox:

A decentralized network of cheap "plug-computers" that run Facebook-like functionality on the hardware of the users. And yet, while FreedomBox is a great project and does many things right, it omits the one crucial aspect of this: the decentralized social network.

Let's hope Solid or something similar takes over the social networking world eventually and we can get rid of all the megacorporations hoarding our data.

What about GNU Social? https://en.wikipedia.org/wiki/GNU_social

I run a GNU Social site. It works well. It's a bit slow and the themes are out-of-date, but if it had more of a following I'm sure we could turn it around.

diaspora was probably the most popular failed project that tried to achieve this. It didn't really fail per se, but it's pretty clear by now that it will never reach impactful adoption.

I was rooting for those guys and I'm still following their releases, but I can't see it ever being accepted by the masses.

Even email is no longer really decentralized. Google and a few other email silos now make up the bulk.

At least it's still mainly interoperable with 3rd party solutions through open standards. I host my own email, and as long as you adhere to the guidelines (use DKIM, make sure your IP range isn't part of some blacklist, triple check your postfix config etc...) you don't feel like a 2nd-class netizen.

Not quite. You still (in my experience) need to go through obscure web-forms to avoid having gmail/hotmail black list your domain. And even then gmail is way too aggressive with its spam labeling. I've seen replies to an email thread be marked as spam by gmail when the sender is in the recipient's address book.

Gmail considers anything beyond big corporate email providers as second-class participants - and so do the other big providers.

Maybe I've been lucky but implementing DKIM, DMARC and SSL SMTP support seems to have been enough to make gmail happy (I even get the small "lock" icon in gmail inboxes as a bonus).

I've used one of the many blacklist check websites to test my domain and only ended up with two false positives which I've rectified through their web forms.

It was mostly smooth sailing from there.

Note that this is using a dedicated server as a host. A while ago I tried the same thing from my home ADSL (with a static IP) and that was a pain, since those "home" addresses are generally blacklisted by default: people tend to assume that emails from those addresses are sent by compromised computers.

Nowadays I know many ISPs even block port 25 by default so that's not even an option anymore.
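For anyone attempting the same setup, the DNS side of DKIM/SPF/DMARC is a handful of TXT records. A hedged sketch (domain, selector, and policies invented for illustration; the DKIM key is truncated, and a real deployment should tune the policies before publishing):

```
; SPF: only the MX and the listed host may send mail for example.org
example.org.                  IN TXT "v=spf1 mx a:mail.example.org -all"

; DKIM: public key for the (invented) selector "mail"
mail._domainkey.example.org.  IN TXT "v=DKIM1; k=rsa; p=MIGfMA0G..."

; DMARC: ask receivers to quarantine failures and send aggregate reports
_dmarc.example.org.           IN TXT "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.org"
```

Starting DMARC with `p=none` and watching the reports for a while before tightening to `quarantine` or `reject` is a common, lower-risk rollout path.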

Sorry, but no. That is entirely false. I run a mail provider for companies, and as long as you adhere to standards and do not emit spam, you don't have to do anything like filling out obscure web forms. You may, on occasion, hit a spam trap due to a compromised account or similar, but de-listing on good RBLs is time-based and automatic. Be vigilant, take care of security faults in your network and, when solved, take the affected IPs out of service for a while.

This may not apply if you are a low volume sender.

I used to think so too. But sometimes, seemingly at random, email ends up in outlook/Gmail spam boxes. My solution has been to not really care - I don't email that many outlook users from my private email. Let MS/Google fix their end, after I've done my bit (DKIM/SSL - I'm not a fan of DMARC).

But if I were running email for any new organisation I wouldn't have that luxury, of course - I'd have to pay for a Google/Outlook or other major provider account, or spend a lot of time "reversing" the reputation of the IP/IP block my dedicated servers had. And as none of the major providers provide a simple check like an RBL, it's a frustrating guessing game where your test mails to your own Gmail (etc.) account go through, but mail to new recipients is silently dumped to the spam folder.

I don't mind greylisting, mail bouncing or blacklisting. It's the magic "intelligent" filtering which is annoying - because the only indication of error is that you don't get a reply, because your recipient hasn't seen your mail.

In a way it's worse that it's rare - you never know if you've fixed the issue, or some arcane combination of your sending configuration and email content (an email not in English?! Probably spam!) will mean a new recipient will never see your mails.

I'll echo Simias. If you behave like a first-class citizen and follow all the current standards and protocols, then you'll be accepted as one. I've been running private mail services for decades, and by keeping pace with changing expectations I've never had a problem with deliverability.

That might be mostly true but it's not always completely true. I'll raise your anecdote with my own:

We've been running on the same domain+IP for over a decade. Hotmail blocks us even when we're whitelisted by the original sender, are responding to an email, and have corresponded before. We've never sent even a multi-person email, never mind spam; it's always a response to an enquiry, very low volume.

We're blocked, apparently, because Hotmail has an intermediate-level warning on a sender domain on another IP address managed by the same ISP; we were un-blocked before, but they now say they won't allow our email through - we have SPF but can't get DKIM working with our shared status. The only silver lining is they actually responded inside a fortnight to tell us that they're going to continue blocking our [at most] once-a-week emails to our customers who happen to have hotmail/live/outlook addresses ...

To work around it, I made a live.com address to send through for customers who use MS email. ~4 days later they started blocking it from remote access ... something like not enough direct website logins for the amount of IMAP lookups (a single client checking every hour or so during work hours). A few direct log-ins seemed to fix it.

We can pay to have the spam service MS use whitelist us though, apparently.

By sharing hosting with someone with a bad reputation, and then deliberately working around their controls, I have no surprise whatsoever that they drop your email. That is not behaving like a first-class citizen, that is behaving like an underhanded reprobate. Sorry to say that your anecdote is an illustration of "how not to do it".

You have to accept and work with changing expectations.

If you are unable to relocate to more reputable hosting, and your messages are legitimate, solicited, authorised, well-formed transactional email (no promotional mail shots), I suggest you switch to delivering through a squeaky-clean provider like Postmark.

Using their own service within their terms and conditions is being "an underhanded reprobate"?

They could just obey their own customers white-listing, or over-ride blacklisting for domains that act appropriately.

So you really think that having a domain of 10+ years standing that doesn't send unsolicited emails at all is "not behaving like a first class citizen"?

If someone in your ISPs /26 sends spam from a different IP and using a different domain, that totally means you're a massive spammer who's ruining the internet??

I could of course change ISP, but there's no way to tell which of the 15+ year-old ISPs will have a spammer somewhere in their IP block.

So the only option left is pay to be able to send email to MS addresses. And you're fine with that? It's all our fault for replying to emails?

I think you have numerous misconceptions here. You can't pay directly for status. Longevity without change doesn't entitle you to anything. Ten years is not great longevity; try doubling that. Microsoft doesn't believe you are a "massive spammer"; they are just playing the percentages based on your choice of associations and your choice to abuse their service. Finally, you seem to think you have a contractual relationship with Microsoft that obliges them to accept your email on the basis of "Terms and Conditions" which, if I'm not much mistaken from what you've described, don't apply to your relationship with them.

Your expectations don't match reality and you seem to prefer complaining to change.

I have basically no sympathy at all, I'm afraid. It is your fault, not for "replying to emails" but by failure to update infrastructure as the context changes. This is your problem to fix, and it appears you've made poor choices in that direction based on a misattributed sense of entitlement.

Perhaps the key here is "for decades" and "reputable IP". I do run my email from a dedicated box - but I host with Hetzner. Their IP blocks might suffer from a "bad neighbour" effect.

But how is such an effect not an issue? It makes it hard to "elevate" a box at home to a proper mail server, and makes it hard for new generations to host their own mail (until an ip6-only world (and probably also then) all new users will need to recycle old ips).

As for DMARC, I'm still not entirely convinced it's a great solution to SMTP/spam issues. DKIM is already annoying enough if you occasionally want to send mail through/from a different SMTP server - like Gmail, if you don't run your own webmail.

This is why I use Fastmail, and encrypt my Google Drive data. It isn't any more secure as far as I can tell, but I'm tired of 2-3 companies holding all my easily accessible data in one place. Especially when they are openly using that data for advertising.

These days I'll always take a paid service over a "free" one where I'm the product.

But that requires scrutiny as well. Just because you're paying does not mean that they are not profiting off of your data.

I trust google reasonably well when they say they do not scan G Suite (paid) products for advertising. https://support.google.com/work/answer/6056650?hl=en

After Snowden leakage, I don't think you should take their word for it.

This is absolutely true. I guess I'm avoiding people who explicitly use my data for profit, in favour of those whose profit model doesn't require it. The trust is that Fastmail et al. are successful enough, and honest enough to not use it.

Ultimately I'd love to host everything myself. But, at present, I don't have the technical knowledge to trust myself to secure everything.

On top of that, on an even more emotional level, the principle of myself being the product doesn't sit right with me.

You can still have all your own data in a usable form, though; all you need is IMAP. So it's better than social networks, albeit not as decentralised as we'd like.
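Since the point here is that IMAP makes your mail portable, here's a minimal sketch of the idea. The commented-out connection details (host, account, mailbox) are placeholders, not a real setup; the runnable part just shows that a fetched message is standard RFC 822 / MIME that any library can parse:

```python
# Sketch: mail fetched over IMAP is plain RFC 822 / MIME, so any client
# or library can make sense of it. Connection details are placeholders.
import email
from email import policy

def summarize(raw_bytes):
    """Parse a raw RFC 822 message and return (sender, subject)."""
    msg = email.message_from_bytes(raw_bytes, policy=policy.default)
    return str(msg["From"]), str(msg["Subject"])

# With a real account you would fetch the raw bytes roughly like this:
#   import imaplib
#   conn = imaplib.IMAP4_SSL("imap.example.com")   # placeholder host
#   conn.login("user", "password")                 # placeholder creds
#   conn.select("INBOX")
#   _, data = conn.fetch(b"1", "(RFC822)")
#   print(summarize(data[0][1]))

sample = b"From: alice@example.com\r\nSubject: Hi\r\n\r\nBody text\r\n"
print(summarize(sample))
```

The point being: unlike a social-network export, the fetched data is in an open format that every other provider understands.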

I expect getting that many requests would be a bigger challenge than serving them.

I've heard Diaspora described as being like email.

Help build Urbit.

This is why I'm a fan of the indie web (https://indieweb.org/).

I'm sorry if this has to do with my sleepiness, but from that page I don't understand at all what this is. A kind of social CMS? A bloggers' community? The page is rich in word count but poor in information. I'd be very grateful if somebody could give a little explanation of what this is, because it seems to be an interesting thing.

IndieWeb is a social movement rather than any one particular technology or project. It's a group of people that believe in publishing as the web was originally intended: a diverse network of participating individuals, not corporations, bound by open standards. There are a number of projects under the general Indieweb banner or just sharing the IndieWeb principles.

The other reply gives a good answer. I'll suggest reading over the about page (https://indieweb.org/IndieWeb:About), I think it's more informative.

Indie web is about people owning their data and having control over it. It's about being able to build what you want for yourself. It's a direction, a set of ideas and principles that some people enjoy.

I like indieweb but I'm not on board with their 'design first' slogan. I think to be most effective such a thing should be 'content first'.

Content, in terms of quality; and subsequently curation/censorship?

Otherwise, content falls into a specific set of mediums (text, images, video, sound), all of which are assumed on a modern platform and aren't worth reconsidering. Design and presentation, however, has infinite possibility.

> Design and presentation, however, has infinite possibility.

Most of the possibilities get in the way of the content.

How do you get curation and censorship out of a bunch of people publishing things separately on their own websites?

That's my point. Why should content be the first priority if its qualifications are not open to discussion?

then you'll like https://yunohost.org

Tim Berners-Lee seems to be much more idealistic about how the web should be in that regard, because he knows it's the right thing to do.

And yet he decided to be "pragmatic" on web DRM, instead of taking the same idealistic approach because it was the right thing to do.


I had a similar realisation.

What's the product in a social network?

It's you.

Appreciate the realisation, but this really is repeated ad nauseam and is not an original thought in the slightest.

A thought/concept doesn't need to be original to the wider group to be profound for the individual. See "just about any subject".

:) That goes for television as well.

Ad-supported TV. Other models exist: BBC, PBS, HBO, etc.

This is true, unless you pay for the social network, or it's p2p.

It's not purposefully evil or inherently wrong, just an inevitable consequence of the business models involved in free-to-use social networks.

I'm trying to explore whether a paid social network could be a workable model with https://postbelt.com

Please check it out :-)

That's true, but they also provide lots of useful features to users.

It may be good; after all, we are talking about private information. Some degree of control and exposure is welcome.

The problem is that such control is not imposed by you, but by the network owner.

This goes to show how hard winning the Turing Award is. One would have expected someone who made the most useful invention of the 20th century to have won this award a long time ago. Maybe I am just overvaluing the WWW because of the impact it had on people's lives.

EDITED: 20th century, not 19th.

I always felt that Tim Berners-Lee was not respected enough in both the computer science and programming communities. I felt it especially after working for over a decade at Google, which literally built its entire business on TBL's architectural concepts.

For example, Google and other search engines would not work without the principle of least power [1], which a lot of people, including Alan Kay [2], somehow don't understand. That is, if the web language was a VM rather than HTML, there would be no Google.

It would also not have been possible for the web to make the jump from desktops to cell phones as the #1 client now. You know the handler in iOS and Android that makes <select> boxes usable? That's an example of the principle of least power.

I recommend reading his book "Weaving the Web" [3] if you want to learn more about the story behind the web.

I'm very glad that TBL is getting this recognition. He is a genius and also has a very generous personality.

People in the programming community seem to talk about Torvalds or Stallman a lot, perhaps because of their loud styles, but I don't see that much about TBL.

Ditto in the CS community. "HyperText" used to be a big research area but I guess TBL solved it and people don't talk about it anymore.

[1] https://en.wikipedia.org/wiki/Rule_of_least_power

[2] http://www.drdobbs.com/architecture-and-design/interview-wit...

[3] https://www.amazon.com/Weaving-Web-Original-Ultimate-Destiny...

>For example, Google and other search engines would not work without the principle of least power [1], which a lot of people, including Alan Kay [2], somehow don't understand. That is, if the web language was a VM rather than HTML, there would be no Google.

Kay's criticism of the Web is very well justified and (like most of his high-level criticisms) typically misunderstood. He doesn't criticize it as a repository of hyperlinked documents. He criticizes it as a platform for application delivery, which it became. The modern web with all its scripts is a VM, and a badly designed one at that.

I think people understand the argument, they just don't fully agree with it. Personally, while I agree that we have ended up with a fairly poor universal VM, I find it to be a fascinating example of path dependency. Maybe we would be better off with one repository of hypertext and one well designed universal VM, but what path would we have taken to get there?

The hypertext repository was so compelling that everyone installed software to access it. Then that universally available software was so compelling that people found ways to run increasingly complex applications on it. And that's how we naturally found ourselves where we are today.

I don't see any reason to think that engineering a more perfect solution all at once would have worked better than this natural progression.

Well said. Kay's misunderstanding is that it could have been any different.

He thinks that you can just design something nice from whole cloth and people will use it. That's why his designs aren't deployed.

I've been looking at projects like OOVM and going back in history to Self and SmallTalk, and there's a reason that those things aren't deployed. Don't get me wrong -- they're certainly influential and valuable.

But he's basically confusing research and engineering, as if engineering wasn't even a thing, and you can just come up with stuff and have people use it because it's good. You need a balance of both to "change the world", and TBL has certainly done that.

Another analogy I use is complaining about the human body. Like "who designed this thing where the trachea and esophagus are so close together?!? What an idiot!!!" Or "why are all these people mentally ill and otherwise non-functional members of society? Who designed this crap?"

The point is that it couldn't have been any different. It wasn't designed; it was evolved.

>Another analogy I use is complaining about the human body. Like "who designed this thing where the trachea and esophagus are so close together?!? What an idiot!!!" Or "why are all these people mentally ill and otherwise non-functional members of society? Who designed this crap?"

Okay, so what's wrong with discussing the limitations of the human body and the ways to improve it then?

Yes, the web evolved instead of being designed (however much that distinction makes sense), but arrived at a shi^H^H^H suboptimal result. And it arrived there through deliberate design decisions of people - who unfortunately were designing a different system in the first place.

It's like English. I love English, but it's a bloody mess that we're all stuck with now, except that changing a computer system is easy compared to changing the direction of a language.

It's great to discuss ways to improve things, but that is different than suggesting that the whole thing is rotten to the core and needs a re-work from the ground up. The productive way to do this is to identify specific deficiencies and propose targeted incremental improvements to address them. This is what all the people involved in various standards and implementations on the web have been doing for years. This is working, progress is just slow and difficult, as it is with most things that are worth doing.

>He thinks that you can just design something nice from whole cloth and people will use it.

Because that's exactly what they did in Xerox Park, many times over.

>That's why his designs aren't deployed.

No comment.

>But he's basically confusing research and engineering, as if engineering wasn't even a thing, and you can just come up with stuff and have people use it because it's good.

Kay has many talks about the difference between invention and innovation (which are much better terms than the ones you're using). In fact, his analysis of this difference is probably the most insightful and thought-provoking technology talk I have ever seen:


Of course, this subject makes a lot of developers highly uncomfortable, hence a lot of shallow, ignorant, knee-jerk dismissals. "Everything is incremental." "Everything is the only way it could be." "This is fine." And so on. Thing is, Kay worked at Xerox and Apple. He read a myriad of books and research papers on computing, which he constantly references in his talks and writings. He worked and continues to work with some of the most forward-thinking people in the field of computing. In the late eighties he foresaw most of the current computing trends, which is verifiable via YouTube. Even without any context his talks display a considerable depth of thought. In short: unlike some people, he actually knows what he is talking about.

>The point is that it couldn't have been any different. It wasn't designed; it was evolved.

And that is why someone who designed it just received a Turing award. Makes perfect sense.

Edit: Regarding your other comment here.

>If the web is a genius for hypertext, but not for app delivery, then he should have just said so. That is not a very hard sentiment to express. "The Web was done by Amateurs" doesn't capture it.

He has several decades worth of talks and writing. If you haven't bothered to familiarize yourself with at least some of them to understand what he means it's your own fault.

Edit 2: I meant, of course, Xerox PARC.

If that's true, then it's not the fault of the web's designers. Suppose you are an architect and you build a house for a family of five. Then someone buys the house and turns it into an auditorium. And then they say, "Wow, this is a really shitty place to hold concerts; the acoustics are terrible. What a bunch of amateurs." Is that your fault?

If the web is a genius for hypertext, but not for app delivery, then he should have just said so. That is not a very hard sentiment to express. "The Web was done by Amateurs" doesn't capture it.

But I don't even think that's true. If the web were really bad as an application delivery platform, someone would have supplanted it by now. Alan Kay or someone else should go design their own awesome VM for application delivery. I guarantee you it will fail, for reasons fundamental to its design, while TBL's platform succeeded for reasons fundamental to its design.

I don't want to detract from TBL's accomplishments, especially on this occasion, but he didn't solve the hypertext problem so much as decide that the really hard things like provenance and bidirectional linkages weren't important. Google and Facebook did solve those problems, but only for commercial benefit. Moreover, it's clear that TBL regrets that early decision, to the degree that he rails against the walled gardens.

But that's exactly what I disagree with. Design is as much about what to leave out as what to include.

Leaving out those things wasn't an accident or ignorance as Alan Kay claims. There were a lot of very conscious design decisions involved in the web -- again see "Weaving the Web".

He may regret that the web has evolved into walled gardens, but what could he possibly have done about it? There's no way to prevent that decades in advance, at least not without strangling the web at birth.

I don't think that you and I are in disagreement about the TBL’s design competence: I stated that his decision to set aside the problem of provenance that preoccupied other hypertext research was intentional.

But, your argument is perilously close to begging the question that the success of web is a good thing. Now, I happen to think that the web is a net good (no pun intended), because (among other things) it helped break Microsoft’s hegemony and continues to force OS vendors to provide and support a standard universal computing platform (albeit a crippled one). But, it’s also arguable that the success of the web may have set personal computing back by a few decades, while also exacerbating a bunch of other problems like wealth inequality and reduced privacy/sovereignty, because it turned the Internet into a big modem.

I’ll take a look at Weaving the Web; thank you for the recommendation. To you, I commend Jaron Lanier’s Who Owns the Future.

>Moreover, it's clear that TBL regrets that early decision to the degree that he rails against the walled gardens.

He rails against the walled gardens while at the same time putting things into the standard, like EME, that power them.

I think the EME thing is a symptom of a worse problem, which is that the internet is very much controlled by a handful of commercial interests. Whoever controls browsers can become a bad actor and you either go along with it or end up with a broken system (this site only works in IE).

The goal of advocacy is to rouse a sympathetic collective that you can leverage at the bargaining table. But, when the time comes, you play whatever hand you have.

The W3C has to negotiate with industry. But, unfortunately for us that care about the open web, its position is weak.

"The Internet was done so well that most people think of it as a natural resource like the Pacific Ocean, rather than something that was man-made. When was the last time a technology with a scale like that was so error-free? The Web, in comparison, is a joke. The Web was done by amateurs."

-- Alan Kay

That quote is pretty unfortunate. I guess nobody's perfect.

Kay is bang on.

1. http://worrydream.com/TheWebOfAlexandria/

2. https://en.wikipedia.org/wiki/Link_rot

3. Comment elsewhere in this thread about how we have been forced into turning a document system into a VM.

He's absolutely right, and also completely wrong. It's like listening to the blind men argue about the elephant. "It's a particle!" "No, it's a wave!" https://news.ycombinator.com/item?id=2119057

With all respect to Tim Berners-Lee, I think the principle of least power is overrated. For example, even binary Horn clauses are Turing complete[0]. Whatever configuration language you can come up with (HTML, CSS, or whatever), it's either Turing-complete or not, and if it's Turing-complete there is no danger of executing infinite loops as long as you have some cutoff for resource requirements. The only thing that matters is whether the code is properly sandboxed before it tries to do any I/O.

[0] https://www.ps.uni-saarland.de/Publications/documents/devien...
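The "cutoff for resource requirements" argument can be illustrated with a toy example. This is only a sketch of the idea, not how any browser engine actually works: even a Turing-complete rewrite system is safe to evaluate if you impose a step budget.

```python
# Toy illustration of bounding a Turing-complete computation with a
# step budget, so non-terminating input can't hang the evaluator.

class BudgetExceeded(Exception):
    pass

def run_rules(state, rules, max_steps=1000):
    """Repeatedly apply string-rewrite rules (pattern, replacement)
    until none matches. Rewrite systems like this are Turing complete
    in general, but the step budget keeps any evaluation bounded."""
    for _ in range(max_steps):
        for pat, rep in rules:
            if pat in state:
                state = state.replace(pat, rep, 1)
                break
        else:
            return state  # no rule applied: terminated normally
    raise BudgetExceeded(f"cut off after {max_steps} steps")

print(run_rules("aaab", [("ab", "b")]))  # prints "b"

try:  # a rule that grows the state forever is simply cut off
    run_rules("a", [("a", "aa")], max_steps=50)
except BudgetExceeded as e:
    print(e)
```

As the comment above notes, the hard part in practice isn't the cutoff itself but sandboxing the I/O, and picking a budget that behaves consistently across devices with very different resources.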

Not sure I agree with you overall, but I do agree with the distinction between computational power and I/O (or capabilities). That's a very important design criterion for languages for distributed systems.

I think the problem would be that the computation would get cut off at different points on every machine, leading to an unstable ecosystem. Remember the browser has to run on devices with at least an order of magnitude difference in resources, probably 2 orders now.

Generally, you want to guarantee that your style computations terminate. Now it appears that CSS doesn't actually provide that guarantee, since it's Turing complete :) But I guess it's close enough in practice.

Looks like if you want to be famous, you have to be loud. Maybe that explains why he has been vocal against walled gardens lately.

Thanks for the recommendation; let me search for it and check it out. The WWW is very important to my life.

Torvalds didn't add anything to CS. He just wrote a Unix-like kernel. The kernel took off and the project grew to include thousands of devs and became the biggest OSS project ever, but still, it's a kernel, not much different in essence from any old Unix kernel. Git? Again, a popular project, nothing new. It was even started contemporaneously with Mercurial, which is essentially equal to it, and neither is a new invention at all; DVCSs have been around since the '90s.

Stallman wrote Emacs, in whose invention AFAIK he took part, and then took part in implementing some other innovative projects, though I don't really know his CS career (I mostly know him as the face of GNU and the FSF).

IDK, but having created a popular project should not be equated with a big innovation in the field.

"IDK but having created a popular project should not be equal to a big innovation in the field."

So you seriously think that "new" is better than "well done"?

I don't think it makes sense to compare those 2 things and value one higher than the other. Of what use are innovations, if you can't use them in a "popular project"?

> So you seriously think that "new" is better, then "well done"?

No. But Linux is not more well-made than, say, the kernel of any modern BSD, or that of illumos, etc. Git is not technically superior to Mercurial et al. Torvalds' success is certainly a big, admirable one, but it's a different kind of success than that of Berners-Lee.

Also, while it's the most popular kernel, it's not like we wouldn't have what we have today if it didn't exist; it's in essence an ordinary kernel that came out at the right time. It's those who made the distros, reverse-engineered the drivers, and ported/packaged thousands of programs who made Linux a big thing.

WWW, on the other hand, is an invention. It's something that did not exist, and it transformed the world like nothing else.

"but it's a different kind of success"

That was my point ;) Otherwise we agree ...

They didn't contribute to the science. They made very important contributions to the implementation of the science. Stallman also made very important contributions to the political/legal landscape.

I certainly do not overlook that, but is that the type of contribution that merits a Turing Award? I mean, scientific equipment companies are of utmost importance to any science lab, but they don't get Nobels, do they? Linux is at the heart of infrastructure today, but it is basically a Unix-like kernel. By that logic the author of cURL should get a Turing Award too.

> This goes to show how hard winning Turing award is. One would have expected someone who invented the most useful invention of the 20th century to have won this award long time ago.

Claude Shannon never won it.

That's an interesting point. I interpret the prize's criteria "for contributions of a technical nature made to the computing community" to be less in Claude's space and more in the applied space. Scrolling down through the list of past winners, I've implemented the work of several of them, and I am using the WWW right now to communicate this.

On the other hand, I think I use Claude's work in the same sense that I use Kirchhoff's laws everyday.

Stephen Cook, Micali, Goldwasser, Rabin, Scott, Karp, Hartmanis, Stearns, Manuel Blum, Yao and Valiant. These are all pure theoreticians; I didn't include data structures people and applied-ish cryptographers like RSA and DH.

More than a significant fraction of Turing awards have been won by theoreticians.

I see a number of theory guys on the list. That's the thing that's interesting about Shannon's exclusion: for any given explanation, there's a counterexample on the list. I've never really known anyone involved very seriously in the ACM, those guys probably know why he was excluded.

Let's hope he will be awarded one day, posthumously.

Diffie and Hellman won it only last year, 40 years after their seminal paper was published! It is indeed a very tough award to get.

Odds of becoming a US dollar billionaire are higher than odds of winning it in your lifetime.

In this case scarcity correlates to value. Given a choice I think I would take the Turing Award. :)

Same here. I would rather win that prestigious award.

I'd take the money. You can't take either with you and with a billion dollars I could get more than a few things named after me.

Things which will likely all be forgotten. Name 10 billionaires from the 19th century. Now try naming 10 scientists.

Go with the Turing Award if you want to be remembered.

Trick question.

> Forbes magazine updates a complete global list of known U.S. dollar billionaires every year. John D. Rockefeller became the world's first confirmed U.S. dollar billionaire in 1916

Stop being a pedant; I think you get my point.

If you want to play, note that I did not say USD anywhere in my comment. Not to mention that Forbes was founded in 1917, and so it clearly has no data collected to "confirm" an earlier billionaire. We can go back to the 14th century if you like (https://www.wikiwand.com/en/Musa_I_of_Mali):

> During his reign Mali may have been the largest producer of gold in the world at a point of exceptional demand. One of the richest people in history, he is known to have been enormously wealthy; reported as being inconceivably rich by contemporaries, "There’s really no way to put an accurate number on his wealth" (Davidson 2015).

But thanks, I actually never knew that about USD billionaires.

Everything will be forgotten. The only question is "when".

Well, I'd argue that (for example) Warren Buffett will be forgotten long before the invention of the transistor will.

But what about the inventor of the transistor? I already don't know that name. (But on the other hand, I don't know many names)

I meant the inventors of course. They shared a Nobel prize: Shockley, Bardeen, and Brattain.

Anyways, it was just an example.

That's a nice way to put it :P

How do they choose the winners? Also, I guess just as CEOs get credit for the doings of a company, TBL gets credit for the Internet?

It seems more appropriate to credit TBL for "The Web" than for "The Internet"...

Then again, Stephen Cook, who proved the existence of NP-complete problems, was awarded the Turing Award only 11 years after his paper was published: http://amturing.acm.org/award_winners/cook_n991950.cfm

I believe the most useful invention of the 19th century was possibly the cotton gin :)

Pish-posh, the cotton gin was invented at the end of the 18th century.

Patented 1794. Checks out

Fixed. 20th century.

No I think you're spot on. It's easy to see the web as an obvious invention with hindsight, but just imagine a world where tcp/ip is the only protocol available to you. I could easily imagine a world in which someone thought to create an across-the-wire binary specification rather than a really simple text protocol. The genius of http and html is that they are very simple. I remember right after the dot-com bubble crashed a bunch of people basically said the web was dead and that we should be passing binaries back-and-forth. Inventing something simple and useful that the average person can easily pick up is the work of a genius (and yes, the average person can write html, my mom doesn't understand how the internet even works, but she writes html for her job).
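To make the "really simple text protocol" point concrete, here's a sketch that speaks HTTP/1.0 to a throwaway local server over a raw socket; everything on the wire is human-readable text. (The local server is just scaffolding so the example is self-contained.)

```python
# Sketch: an entire HTTP/1.0 exchange is plain text you could type by hand.
import socket
import threading
from http.server import HTTPServer, SimpleHTTPRequestHandler

# Throwaway local server, only so the example runs on its own.
server = HTTPServer(("127.0.0.1", 0), SimpleHTTPRequestHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# The request is just a request line, headers, and a blank line.
sock = socket.create_connection(("127.0.0.1", port))
sock.sendall(b"GET / HTTP/1.0\r\nHost: 127.0.0.1\r\n\r\n")

response = b""
while chunk := sock.recv(4096):  # HTTP/1.0: server closes when done
    response += chunk
sock.close()
server.shutdown()

status_line = response.split(b"\r\n", 1)[0]
print(status_line)  # e.g. b'HTTP/1.0 200 OK'
```

Contrast this with an across-the-wire binary specification: there would be nothing to read, and nothing an average person could poke at with a text editor and a telnet session.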

I think you meant 20th (or 21st) century.

Fixed. Thanks.

Transistors are much more important IMHO.

The inventors of the transistor, led by William Shockley, already won a Nobel Prize in Physics for it. All three of them.

It's hard to top antibiotics in terms of useful inventions.

"the Haber–Bosch process, a method used in industry to synthesize ammonia from nitrogen gas and hydrogen gas. This invention is of importance for the large-scale synthesis of fertilizers and explosives. The food production for half the world's current population depends on this method for producing nitrogen fertilizers."


Yes, they are very useful. I was only counting computing-related contributions, since the Turing Award only recognises people who contributed to the computing community.

discovery, not invention.

(But I guess developing them into a useful product is itself an invention. Maybe there's no bright line between discovery and invention.)

I suppose our world could function without antibiotics, but not without the transistor ...

I remember doing internet research before there was the web. For example in the late 1980s I studied California earthquakes. The USGS had a public ftp site where they put daily epicenter map images (jpeg?) and text databases of epicenters. I wrote automated scripts to download new stuff.

I remember there were even internet search engines at the time. People wrote automated scripts to look for public ftp ports and compiled the results. The internet only had a few thousand addresses at the time, so it wasn't an arduous process. I think one of these databases was called Archie.

I personally think it's important to remember how the Internet worked when TBL invented the Web. Most importantly, everyone believed in and implemented net neutrality. I think it's a very interesting thought experiment to imagine whether we would even have the web had the University of Minnesota decided to drop all http packets to promote their competing gopher protocol.

The Web is the best example I can think of for why we still need net neutrality.

I do not share this view:

> The Web is the best example I can think of for why we still need net neutrality.

I think the Web is the best example of why we need privacy protection, as it is a form of communication.

I personally pay for a VPN. And I hope my right to use independently implemented (independent from Google/Facebook/ISPs) privacy preservation technology and de-anonymization-resistant technology is not lost, and instead encouraged by the legal system.

In my view, the so called 'net neutrality' is simply a deceiving name for 'a subjective favoritism' given away (as political/economic favors) to some web-monetization players vs others.

E.g., ISPs cannot favor/disfavor content providers, and yet a search engine company can favor/disfavor/rank content providers.

So ISPs are labeled as 'evil' while search and social networking service providers are the 'good doers'.

Yes, Archie[1]. Wonderful resource until it was superseded.

[1] https://en.wikipedia.org/wiki/Archie_search_engine

Extremely high signal:noise ratio.

jpeg wasn't invented until 1991, so probably not. :) Most likely they were GIFs.

GIF came after the patenting of LZW, I think in 1985, making it onto the internet in ~1987.

PCX, BMP (1985?) or TIFF (1986?) maybe? PBM and PGM I think are slightly earlier but I can't readily find dates for them.

When I was an MIT student I worked one summer at CSAIL. The office I was in had only a single occupancy bathroom. One afternoon I went to use it and just as I got there I saw that TBL was exiting. Once I went in and sat down, I noticed that the seat was warm!

Highlight of my life right there, having my butt warmed by the residual heat of the inventor of the www's butt.

Ahem, in any case, congrats to him! Definitely well deserved.

Hmmm... so you have a TBL number of 1, for some definition of TBL number.

A side channel of an extremely low bandwidth... 5-10 bits / hour?

> the inventor of the www's butt.

Ah ha! I did suspect he was not acting alone - who did the other bits and bobs?

You should have licked the seat, so a part of him could become forever a part of you.

For anyone who doesn't know, the WWW is a "hypertext" project where documents can contain links to each other. By clicking these links, a wide variety of content is exposed. For example, consider how much easier it is to use Wikipedia than to look up references in the index (page numbers by word occurrence) of a printed book. You can just click on a link and follow it. In all, most people spend most of their online time reading pages that have hyperlinks in them, which they follow.

It's certainly a paradigm shift similar to what Gutenberg accomplished.

TBL created the web, certainly - but hypertext prior to the web goes far back as an idea (at least as far back as Alan Kay - also an ACM Turing Award winner, IIRC)...

The word, at least, was coined [1] by Ted Nelson, published 1965. The idea goes back at least to Vannevar Bush's Memex proposal [2] in 1945.

I haven't heard Kay mentioned as an early proponent of hypertext, I'd be interested to hear more if you can find it.

[1] https://elmcip.net/node/7367

[2] https://en.wikipedia.org/wiki/Memex

Yeah I hadn't heard Kay mentioned with hypertext either. I think he is profoundly mistaken about it too (or at least he was until very recently): https://news.ycombinator.com/item?id=14035411

While hyperlinks are the essential contribution (which however existed long before the web already), I think the foundational work of encoding text digitally and universally (pioneered by SGML inventor Charles Goldfarb, James Clark, the TEI project, and many others) isn't honored adequately here.

I think his work on educating people about the web ever since its creation is pretty awesome.

His list of questions for kids is great, especially the one on 'what happens when I click a link' [0]

  [0]: https://www.w3.org/People/Berners-Lee/Kids.html#What1

Probably want that in-line, as a clickable link:


TBL is a great role model. He saw an important opportunity to make the world better and he executed on it selflessly.

I do think he probably should have started a browser company or something though. The nice thing is that his beautiful dream for the web is (slowly) coming true.

I love looking at that NeXT box he used as the first web server: https://en.wikipedia.org/wiki/CERN_httpd

Question: did TBL invent the WWW as part of a job-related project while at CERN, or did he invent it as a personal side-project using time/resources of his employer?

According to Wikipedia and himself, he wrote up a proposal and was encouraged by his boss to implement it as an "unofficial" project: https://www.w3.org/People/Berners-Lee/FAQ.html#Influences

You can read the original proposal here: https://www.w3.org/History/1989/proposal.html

I love how the announcement[1] on Usenet of the first release of the "WorldWideWeb wide-area hypertext app" contains a sentence that seems sort of unlikely and almost arrogant, yet within four years had proved to be a prophetic understatement.

> This project is experimental and of course comes without any warranty whatsoever. However, it could start a revolution in information access.

[1] https://groups.google.com/forum/#!topic/comp.sys.next.announ...

That's a great read. I didn't realize Bill Atkinson's HyperCard preceded Tim Berners-Lee's proposal by a few years, or that Berners-Lee had referenced HyperCard directly in it.

The applications of hypermedia were clearly in the ether in that era, but that document certainly pulls a lot of the threads together.

Wow, I'd never heard of it. Those old Apples had some cool shit I missed out on.

  "HyperCard was created by Bill Atkinson following an lysergic acid diethylamide (LSD) trip."

Jobs criticized Gates saying he'd be a better product designer had he spent more time exploring individual pursuits such as what Jobs did--spirituality and psychedelics.

Oh man, HyperCard was my childhood. I made SO MANY games in it.

We need more bosses like that

The difference is a lot less strict in academia, which CERN is strongly connected to.

This story makes me feel good. I feel TBL doesn't get his just recognition from all the people who take advantage of what he made for us without even knowing the origin. Good on him.

This is perhaps the most proportionately insignificant prize given for any invention in the history of humanity.

Is it a coincidence that the award is announced on 4/04?

What's special about that date that makes you ask that? Edit: Oh, I get it now. 404 Not Found.

"I’m humbled to receive the namesake award of a computing pioneer who showed that what a programmer could do with a computer is limited only by the programmer themselves"

That's like the exact opposite of what Turing proved, with the halting problem and all.

In any case, a well-deserved award!

If you're interested in the history of the Web and its inception, I highly recommend this (non-technical) book TBL wrote some years ago. Really inspiring. https://www.amazon.com/Weaving-Web-Original-Ultimate-Destiny...

I was watching the penultimate episode of Halt and Catch Fire Season 3 this morning while doing my morning run at the gym. Towards the end of the episode, Donna faxes over a document about this great new invention, she recently encountered as a VC, called the 'WorldWideWeb' to Joe. Pretty coincidental :) Thanks, Tim!

A few years back, someone asked on Quora, 'Who still hasn't received a Turing Award for their work?'. I was the one who named Tim.

Glad the man got credit for his work.

I've met him on a few occasions. I used to be active in Semantic Web and I'd run into him at the Cambridge Semantic Web Meetup (Massachusetts, USA, not England).

He seems like a generally good guy with some real forward thinking, and I like many of his ideas. I'm happy for him.

So the ACM is now handing out Turing Awards for people making the FOSS community build stuff around an exciting, open technology, only to bait and switch and poison it with DRM?


I'm very happy he's not only won the award, but also that he's used it as a springboard to highlight ongoing privacy, security and information freedom issues.

Personally I think that's a mark of a great pioneer who is still using his capabilities for the best interests of us all.

Offtopic. I was wondering what happened to the guy who invented email. I kind of remember reading that he was living in poor conditions. Did a search on "inventor of email" and found a controversy about some guy who claims to be the inventor of email (Shiva Ayyadurai). That's not the guy I was looking for. Instead I was thinking of Ray Tomlinson, as I found his name in an article called: "Ray Tomlinson's 'story' about inventing email is the biggest propaganda lie of modern tech history: Shiva Ayyadurai"[0]. But it seems Ray has already passed away.

[0] http://economictimes.indiatimes.com/magazines/panache/ray-to...

> I was wondering what happened to guy who invented email

I'm afraid he died last year.


He invented the URL, which is the most important component of the web, since this is what makes cross-system hypertext possible. HyperCard was not networked and Gopher was not hypertext, and neither allowed you to link to external objects. The genius of the web was that it was not a closed system, but via URLs allowed links across information systems and protocols. This is probably why the web succeeded while all the other walled-garden systems failed.

Note that TBL never expected HTTP and HTML to replace all other protocols, they were just intended as a hypertext system which could connect to all the existing systems like Gopher, NNTP and so on, thereby increasing the usefulness of all the systems.
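That scheme-generic design is easy to see in any URL parser. A minimal sketch with Python's standard urllib.parse (the example URLs are purely illustrative):

```python
from urllib.parse import urlsplit

# One uniform naming syntax covers resources on unrelated protocols,
# which is what let early web pages link into Gopher, Usenet news, etc.
examples = [
    "http://info.cern.ch/hypertext/WWW/TheProject.html",
    "gopher://gopher.floodgap.com/1/world",
    "news:comp.sys.next.announce",
]

for url in examples:
    parts = urlsplit(url)
    # The scheme says how to fetch; the rest says what to fetch.
    print(f"{parts.scheme:8} host={parts.netloc or '-':24} path={parts.path}")
```

The parser neither knows nor cares which protocol is behind each scheme; that separation is exactly what allowed a hypertext page to link to Gopher menus and newsgroups alongside HTTP documents.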

From the article:-

"conceived of the web in 1989 at the European Organization for Nuclear Research (CERN) as a way to allow scientists around the world to share information with each other on the internet. He introduced a naming scheme (URIs), a communications protocol (HTTP), and a language for creating webpages (HTML). His open-source approach to coding the first browser and server is often credited with helping catalyzing the web’s rapid growth"

So no, not hypertext, not SGML, not MIME: URI, HTML (an application of SGML) and HTTP.

HTML wasn't an application of SGML: its syntax was clearly influenced by SGML, but it wasn't actually SGML. It didn't become an SGML application until a few people convinced the working group to make it so for HTML 2.0 (there was no HTML 1.0 standard: the earlier drafts I'm aware of defined how to construct an SGML document from an HTML one and explicitly stated it was not expected an SGML parser would be used, which I'd take to imply that HTML was not an SGML application).

A two-minute review of his Wikipedia page would answer your question (or ad-hominem attack). If you're looking for inventions, he designed and built the first web browser and the first web server[1].

If you're criticising him for building on top of what others did, no man is an island. He did improve on what was already there. Why not read the details for yourself?[2].

In Tim's words: "I just had to take the hypertext idea and connect it to the Transmission Control Protocol and domain name system ideas and—ta-da!—the World Wide Web."

[1] https://www.w3.org/People/Berners-Lee/WorldWideWeb.html [2] https://en.wikipedia.org/wiki/Tim_Berners-Lee

He literally invented the 'WWW'.

The question raised by the GP is whether the amount of originality deserves a Turing Award, given the other things (TCP/IP is even missing from his list) that either make the WWW possible or are similar to it.

Nothing about it was original or invented.

The world benefits more from effectiveness than from originality. Building on existing work is a good thing.

Why not contribute to the conversation and tell us why what we're reading is incorrect?

He did: Gopher also does hyperlinking, and was a contemporary (1991 public releases)

The still-used command line browser Lynx was first made for Gopher in fact, before being adapted for HTTP, which it still does.

Gopher was not hypertext. It had hierarchical menus that were separate from the plain-text files.


Tim Berners-Lee is not the equivalent of the W3C (though he did vote for EME).

I think the main reason to allow this is that otherwise the only effect is that the same thing will happen in a non-standard fashion with Apps and other binaries being the way to access protected content.

That said, I'm not happy with it either and I wished that the web would remain DRM free, the only reason it has the adoption that it does is because it was open from the beginning.

Finally, the way to vote as a consumer is with your feet: simply refuse to access DRM protected content and it will go away all by itself.

Bullying in schools will happen anyway. Should we create a standard so that it's done properly, in a way everybody can reproduce?

That's a dumb analogy. Bullying is one-sidedly bad, hence there being a bully-victim relationship which is non-consensual.

Content owners have - in principle, legally backed - the right to distribute their content in any way they see fit, and content consumers have the option to refuse that content if the format it is presented in is unacceptable to them; this is a consensual producer-consumer relationship.

If you feel 'bullied' into consuming content with DRM, the problem lies with you; you do not have an automatic right to content in a particular format. If you do not agree with that, the solutions are to be found in the political realm, not in the technical realm.

>>Content owners have - in principle, legally backed - the right to distribute their content in any way they see fit, and content consumers have the option to refuse that content if the format it is presented in is un-acceptable to them, this is a consensual producer-consumer relationship.

You are giving content "owners" (and I use that term loosely, because legally they are copyright holders, not owners; copyright is not property) many more rights than are actually granted by copyright law.

DRM is used in a way that massively exceeds the authority granted to creators by copyright; they abuse DRM, and you claim the rest of society must just "take it" because "that's the market".

Well, there are no "market forces" when it comes to content, because the government grants a person a monopoly over a copyrighted work; this exhausts many market forces. Sure, you can claim there's a general market of "movies", but due to the nature of the product these are not the interchangeable widgets that normally constitute a market.

>If you feel 'bullied' to consume content with DRM the problem lies with you

The "being bullied" angle is a weak argument, I will give you that, but what about platform or time shifters? Should the MPAA be able to disable my Blu-ray player with a code on all new Blu-ray discs? Should the MPAA be able to tell me which HDMI cable I must buy and which monitor on my PC is acceptable?

Should the MPAA be able to force me to use Chrome instead of Midori?

If I buy content today that plays in Firefox, but next year Google revokes the Widevine license Firefox uses, is it acceptable that the content I paid for is now unusable by me?

Don't buy Blu-ray discs, don't use Midori, and don't consume any content at all that is DRM-encumbered.

Yes, that will limit your choices, but so what? If enough people do it, the coin will drop.

That's my solution to this whole DRM issue and I've yet to find a 'must-see' thing that was DRM encumbered.

Copyright law gives rights-holders all kinds of options, and digital tools have given them options to make it harder to do some things that are not per-se violations of the law but that are also not explicitly granted as rights.

And that's a huge difference from a legal perspective.

What you don't realize is that the solution you suggest won't work if famous people like TBL publicly support DRM.

People wouldn't care much about your opinions; some would even call them dumb, just like you call others' opinions dumb. But when a respected person like TBL says something, people tend to take his words seriously.

This is why people were worried about what TBL did. A very few would have cared if you or I did such a thing.

You can also stop driving if you support the environment, stop buying phones if you care about child slavery, and stop using electricity if you don't want nuclear waste.

Personally I stopped eating meat, and just that imposes quite a cost in additional constraints on your life, because society is designed a certain way and doing something different requires extra effort.

Yes, boycotting something you don't want is a good move, but if you apply it to all the things you want to protest against, you will no longer be part of society.

So you can't rely only on that. It can't be the only answer. We need to take a stand and refuse those bad moves, especially before they happen.

Does this mean it's OK to sell food with poison in it because people have the option to simply not eat it?

A figure like TBL openly supporting something like DRM sends a message, and it's not a good one.

Another pointless analogy. Food with poison in it is illegal to sell. Supporting DRM sends a message that you do not agree with (and that I do not agree with); we can agree that it's not a good message, but we are not without power in this: you and I can simply decide never to use that feature and not to support companies that use it.

Copyright law is here to stay, rights holders will try to use technology to be able to squeeze every last $ out of their legally backed position and consumers have the collective powers to give those rights holders the finger.

The fact that consumers as a group don't care enough is the main problem, see also: privacy and many other items like this.

> Food with poison in it is illegal to sell

It is not. Only poisoned food that kills you quickly is. Plenty of food that harms your body or gives you diseases is legal. We just call it junk food, alcohol, and other names because it is morally accepted.

You accept DRM as a fact of life. We don't, because it isn't. It's just the next move pulled by the majors to try to lock in the consumer. It brings zero benefits to society. No culture-sharing restriction ever did, and all the stats in the world show that what they claim to protect against is a scam: the majors are making more money than ever.

> The fact that consumers as a group don't care enough is the main problem

It can be said about any problem in society. Health, education, whatever. Regulations are not the solution, having people caring is.

Still, we do have regulations.

Idealism is not going to get you very far in the marketplace. Accepting reality as it is and your place and power within that reality is key to both not feeling continuously frustrated and actually achieving some measurable change.

You are essentially trying to ignore the fact that the Berne Convention exists and that it (and not ideals) governs the position of rights holders. TBL is a pragmatist, first and foremost; that is why we have the WWW, not because he is an idealist who has forsaken his idealism.

As such, his benefits to society are such that few people can claim to have changed the course of history to a similar extent. If you wish to argue that DRM is not a 'fact of life', then you are out of touch with life.

Major rights holders 'making more money that [sic] ever' is not automatic, they require our cooperation and consent.

And that consent and cooperation are ours to withdraw.

Any other changes are political and likely an uphill battle.

While I agree with your conclusions, I nevertheless don't think we should endorse a bad status quo.

When Firefox came around (as Phoenix), it didn't endorse ActiveX, making a lot of sites unusable. It chose fair standards.

What would have happened if they had decided that they were too small and that Microsoft was so big that ActiveX would be inevitable anyway?

Note that eventually DRM won't affect me. I'm tech-savvy enough that I will always find ways to get around it. But I think it harms society as a whole, and that you should say no.

I had an interview proposal from Google some years ago. I refused politely. Google is inevitable; I still have a few Gmail addresses and use the search engine. And it pays well. And their projects are cool. But I said no because I believe I should not be part of it.

I'm doing my part.

You don't get a democracy if you don't do your part, even if it means you will lose. Strategic voting just promotes immobility.

I'm all for pragmatism, but it has to be used in conjunction with bigger goals.

Now, you don't have to be perfect, I'm certainly not, but I still think Mr. Berners-Lee's decision sent the wrong message.

Have a read:


See if that changes your mind.

You were not responding to me, but sorry, it didn't.

> Moreover, a case could be made that EME will make it easier for content distributors to experiment with—and perhaps eventually switch to—DRM-free distribution.

I can't see how the author made this leap.

> It doesn't matter if browsers implement "W3C EME" or "non-W3C EME" if the technology and its capabilities are identical.

It matters as a matter of principle. It would have sent a message. Maybe this would have made the W3C irrelevant, but if it did, so be it, at least they would have gone without compromising.

I see you already discussed with sametmax about this, so I won't go further.

The whole rationale about companies abandoning the web is a scare tactic. Like saying rich companies are going to abandon a market if we don't do them a favor.

As long as there is a market, they will come.

The web is too big of a pie to let it go.

But even if they decided to go, it wouldn't be a bad thing. Proprietary things on proprietary platforms, and less people trying to destroy the open platform. I'm all for that.

I also couldn't help but wonder why this suddenly happened after TBL made news merely a few weeks ago.

It can be difficult to make those decisions. But I would say TBL is definitely worried about the future of internet. (Decentralized Web Summit - From The Internet Archive) https://www.youtube.com/watch?v=Yth7O6yeZRE
