It gives users a multitude of rights, such as being informed about exactly what data a company holds about them (and even obtaining a digital copy of that data), how the company uses that data, and for which purposes. If you're subjected to algorithmic decision making (e.g. an algorithm decides whether the bank should grant you credit), you have the right to know what kind of algorithm was used in the process and to contest the decision. You also have the right to demand the deletion of your personal data, to revoke a company's right to process it, and to demand correction of inaccurate data. The legislation also allows for severe fines for companies that don't respect the regulation (up to 4% of the yearly turnover of the whole company group), so even companies the size of Google or Facebook should have strong incentives to comply.
This is probably the biggest change. Previously, at least in Europe, the emphasis was typically on letting people know what data was being collected and requiring correction of inaccuracies, but much less on whether the data was allowed to be collected in the first place, or on allowing data subjects to require its deletion.
I'm a little worried about whether the practical implications of this have been properly considered, which is something the EU has historically been quite bad at doing when it comes to technology and business laws. For example, under Recital (65) of the regulation, which is primarily about the right to have personal data deleted and the "right to be forgotten", I see few provisions allowing a business to keep personal data that it legitimately collected with the subject's consent, even where the cost of deleting it is prohibitive. An obvious example would be data that also exists in backups taken while that data was being stored and processed, all of which would need to be updated in non-trivial ways to remove the data from them.
Arranging proper backups at all is not something to take for granted when you're dealing with small businesses that have many other things to do, but obviously they're important for safeguarding the provision of products and services to all customers and it's important that any backups that are made are handled with proper regard to both security and integrity.
Requiring businesses to separate every tiny item of data that might ever be legally required from every tiny item of data that is collected and used with consent for reasonable purposes, just in case some customer one day decides to retrospectively withdraw their consent for some or all of that data, could easily become absurdly disproportionate. I hope it would go without saying that incentivizing businesses not to keep backups of all important data because of the compliance overheads is insane.
However, without such fine-grained separation, two weeks might be far too short a period to keep backups. To use my own businesses as an example here, we have accounting and reporting obligations that potentially require several years of data. The reporting information is typically derived once a year during reporting season, from straightforward records kept in the main databases and/or spreadsheets. However, we would probably have to completely restructure those records and denormalize all kinds of things in order to delete everything we don't strictly need for some legal purpose, which would be a huge amount of work.
That's just the structure of the original data. Then you have to consider things like deduplication in online backup services, where it's practically impossible to guarantee the complete destruction of all instances of certain data without destroying all backups that ever involved that data and starting over.
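The deduplication point can be made concrete with a toy sketch of a content-addressed chunk store (hypothetical and greatly simplified; the class and names here are made up for illustration, not how any particular backup service works). Identical chunks are stored once and shared across backups, so "delete this one record from all backups" means either leaving shared chunks in place or rewriting every backup that references them:

```python
import hashlib

class DedupStore:
    """Toy content-addressed store: chunks keyed by SHA-256 digest."""
    def __init__(self):
        self.chunks = {}    # digest -> chunk bytes (stored once)
        self.backups = {}   # backup name -> ordered list of digests

    def add_backup(self, name, data, chunk_size=4):
        digests = []
        for i in range(0, len(data), chunk_size):
            chunk = data[i:i + chunk_size]
            digest = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(digest, chunk)  # shared if seen before
            digests.append(digest)
        self.backups[name] = digests

store = DedupStore()
store.add_backup("monday",  b"AAAABBBBCCCC")
store.add_backup("tuesday", b"AAAABBBBDDDD")  # shares AAAA and BBBB

# Six logical chunks, but only four unique ones are stored; the shared
# chunks belong to *both* backups, so removing one customer's data is
# not a simple per-backup delete.
print(len(store.chunks))
```

The same structural problem appears in any system with shared storage: the unit you can cheaply delete (a whole backup) doesn't line up with the unit the regulation cares about (one person's records).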
If this is the situation for small businesses that typically only collect a small amount of personal data in the first place and for obvious and necessary purposes, I shudder to think of the implications for organisations that actually process personal data as part of their main purpose rather than incidentally. I'm not sure it's reasonable to assume, in general, that it would even be possible to totally separate legally required data from everything else in such organisations, and there would surely be a lot of grey areas.
Now, please don't misunderstand me. I'm all for reasonable regulation to protect individuals from exploitation. I'm a privacy and civil liberties advocate, and I run my own businesses the way I hope others would run theirs, even if sometimes that means not doing things that would probably make us more money because they also make us feel uncomfortable. But there has to be a sensible balance, and the EU does not have a good track record of balancing its business regulations sensibly. (See also: EU VAT, cookie law, various provisions in the last round of consumer protection rules, etc.)
In general, I think requiring that companies not hold on to your personal data indefinitely is a pretty reasonable regulation. If there were exceptions, e.g. for backup data, they would provide a convenient loophole for companies to keep the data.
Also, if companies keep copies of personal data lying around, it increases the risk of the data being stolen or leaked to the public. We have seen that even the largest companies find it impossible to avoid "losing" data once in a while, so making sure that this data contains the least amount of sensitive information possible is very reasonable. The regulation does not even assume that companies are malicious; it just assumes that sh*t happens and tries to mitigate the potential damage to individuals.
I'm very wary of making that assumption, because so much data could potentially be personal data even if it's not obvious. Remember that the real criterion here is data that is or could be linked to an identified individual. With the kind of progress being made with data mining and analysis and the kind of processing power being devoted to those activities today, there are few safe assumptions any more about what becomes impersonal data just because it's been "aggregated" or "pseudonymised".
Let's consider a common example. Suppose a business operates a web site, and like most such businesses it keeps server logs. Those logs are useful for a wide variety of purposes and some of the data may remain useful for long periods, to allow analysis of things like how the site is being used or whether certain patterns are useful for detecting potential threats, or even to provide evidence that a customer did in fact use the services on the site during a certain period in the event of a dispute over charges.
In themselves, those logs probably don't inherently contain personal data. However, each record does have data such as IP addresses within it, which may be quite easy to link to a specific customer in practice and thus make everything in that record into personal data.
Now, suppose a customer who has been using that site for a while stops, and then files a notice to remove all personal data about them that the site operator isn't legally allowed to keep despite that notice. In order to comply with that request, must the site operator therefore delete all records based on the server logs, including any backups or derived data, to which that customer might be connected?
I can't immediately see why the site operator would be allowed to keep those records with a literal reading of the new rules. However, removing them would potentially undermine useful and reasonable business functions such as those mentioned above. Moreover, the cost of doing so might be substantial, and the adjustments required so the infrastructure used to process those logs can support this sort of retrospective editing might also be substantial.
In such a case, I think the balance would usually be too far towards the individual. The imposition on the site operator is great, both in the effort to comply with the request itself and in the damaging effects on reasonable business practices. The risk to the visitor of that potentially identifiable data being used for typical purposes in connection with server logs is low. Unless there are other relevant factors that point the other way (perhaps if the site deals with a particularly sensitive subject) the cost to the site operator is almost certainly disproportionate to the benefit to the individual.
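One common mitigation for the server-log scenario above (a sketch of a widely used practice, not anything the regulation itself prescribes; the log line and helper names are made up for illustration) is to pseudonymise logs at write time, for example by zeroing the low octet of each IPv4 address so that individual records are harder to link back to one host:

```python
import ipaddress
import re

# Matches dotted-quad IPv4 addresses in free text.
IP_RE = re.compile(r"\b(\d{1,3}(?:\.\d{1,3}){3})\b")

def truncate_ip(ip: str) -> str:
    """Zero the last octet so the address identifies a /24, not a host."""
    net = ipaddress.ip_network(ip + "/24", strict=False)
    return str(net.network_address)

def scrub_log_line(line: str) -> str:
    """Replace every IPv4 address in a log line with its truncated form."""
    return IP_RE.sub(lambda m: truncate_ip(m.group(1)), line)

line = '203.0.113.42 - - [10/Mar/2017] "GET /index.html HTTP/1.1" 200'
print(scrub_log_line(line))
# -> 203.0.113.0 - - [10/Mar/2017] "GET /index.html HTTP/1.1" 200
```

Note that this only reduces linkability; whether truncated addresses still count as personal data in a given context is exactly the kind of grey area discussed above.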
I'd really like to discuss this further; if you're interested, feel free to send me a mail (address discoverable via my profile).
2) How specific is "for which purposes?" I can't imagine companies would ever go collect explicit opt-ins for every new SELECT statement in their codebases. It seems like the only thing to do is list the broadest possible set of purposes upfront.
3) How do you decide whose data it is? For example, HN comments can't be deleted because they're considered the internet's data, not yours.
What if I sync my contacts containing your phone number, or a photo of us together? If you demand the deletion of "your data" then do my contacts and photos disappear?
What if you make a scene at some establishment, or default on a loan? Can you demand that their record of "let's not do business with this person again" go away?
This is one of the trickiest areas in deciding what is reasonable. Obviously there are advantages to being able to share personal data about other people for your own benefit. On the other hand, every time you do that, you are potentially handing over personal data without the consent of its subject.
I don't think we fully understand the implications of modern technologies in this area yet. However, I suspect we'll be learning some lessons the hard way over the next few years, as the correlation and processing of that data starts to catch up with the volume of data that's been collected.
It's also possible that some of the people giving up that data about other people, or more likely the businesses that encourage individuals to do so, are going to come under a lot of scrutiny even in terms of compliance with existing law and regulations. For example, if you install a social network's app on your phone, and that app uploads your contact list complete with names and phone numbers to their database, then both you and the social network have obviously just compromised the privacy of everyone on your contacts list. That much is black and white, but if the social network concerned then uses that data for any purpose other than providing whatever services you are explicitly requesting in terms of the contact list you already had, then from a data protection point of view it stops being grey at all and becomes simply black. I'm a little surprised that data protection regulators, particularly in Europe, have taken such a hands-off approach to this issue for as long as they have already.
Facebook emailed back stating:
"There isn't a Facebook account associated with the email address from which you are writing. This might be because you don't have a Facebook account or because you already deleted your account. In either of these cases, we do not hold any of your personal data."
I didn't ask for data attached to an email address; I asked for any personal data they had collected on _me_, and their answer was evasive and non-responsive. I didn't pursue it any further, but I know they were bullshitting me, and I had to move on to other things.
The Internet as a whole never forgets
So yes, the Internet never forgets (things it can make a buck off of).
Mailboxes might be a better example of your point.
I'd argue that the current web looks more like roads than mailboxes though. We would have something more like Minitel if we'd gone the mailboxes route.
How does it work with data you're legally required to keep? E.g. for tax reasons.
However, the processor would not be able to keep any data beyond what is needed for the purpose for which the exception holds, and the data cannot be used for anything else.
Financial institutions and the government are exempted with respect to "necessary information".
But you're right: in the end it will depend on how vigorously the individual EU countries, as well as the Commission, prosecute companies that don't obey the standards. And as in other areas, there will be fraud and companies trying to circumvent the regulation. All in all, I think the standard for data protection will increase significantly.
You can worry about building a system when it makes financial sense compared to the time spent manually dealing with data requests.
More regulation is a good thing if things we as a society deem critical can only be accomplished through regulation.
For any smart consultants who are up on PII compliance and associated data security practices, there's an endless amount of work here.
Does it flow through to subsidiaries or EU citizen owned foreign enterprises? Because it seems like this could lead to a regulatory race to the bottom.
It would be great for technical blogs and news, project sites, wiki type data stores, discussion forums, etc.
Maybe everything in this "new" web is static, no stylesheets except browser-side for users to customise themselves.
I'm not sure what the actual answer is but I know the existing web is broken beyond repair.
If you ask a layman to differentiate between "the browser" and Facebook, they might not know the difference. In a lot of ways, Facebook is a browser for a set of content types like posts, videos, business pages, etc. Same logic applies to say YouTube: it's a specialized "subbrowser" for videos. If pressed I think I'd say the only reason YouTube or Facebook aren't browsers is because they're not decentralized. They browse a fixed set of end points.
AMP project, kind of similar, it's a specialized sub browser for mobile articles. And it's like what you describe, actually: constrained, but not decentralized.
I don't think we need a new browser or web to accomplish your ask.
Browsers could specialize by running "sub browser" overlay applications. If you want a constrained subweb with markdown-only sites, then sure, why not? It's not that different from a markdown-only publishing service-- the YouTube of markdown-- except the app logic is on the client side, and it's browsing a decentralized web.
Our markdown website feature is built in, but it could be moved to userland as an overlay: an app that's triggered when the user turns it on, or goes to a certain type of site. It may sound like a stretch, but our bigger picture is to invert the relationship with services to create thick client-side apps, in which case an overlay app is just a particular class of app with a particular class of permissions.
Being realistic though, this would only serve highly functional computer users seeking simplicity and privacy.
A lot of internet users view "beautiful" webpages as a display of trustworthiness and importance. And that's a very important trait if you are a business.
Not so much. A lot of web users like simple, predictable pages. Flyout menus spazzing all over the page, giant headers breaking the space bar, and obscure hamburgers hiding useful features just confuse and annoy people who don't know about CSS element blocking.
We geeks, who can understand the code, decry the complexity of a beautiful page and prefer not to go there. But we're 1% of the audience.
The other 99% are used to equating beauty and style with quality. More style == more quality. Less style == less quality.
The pages we love because of their simplicity don't do well in the mass market because of their simplicity.
I have taken to calling this the "Kardashian Problem" - I don't understand why the Kardashians exist. I am forced to accept that they do exist. Therefore I do not share a worldview with the people who pay money for the Kardashians to exist. The Kardashians are very wealthy, so there are a lot of those people.
I am a taste minority, so building things that I like won't make money. I have to build things that the Kardashians would like. I don't understand what they would like, so I must test everything!
Why are they so popular? Total mystery to me.
> The other 99% are used to equating beauty and style with quality. More style == more quality. Less style == less quality.
I dunno. Try sitting down with an older relative and watching them try to use a heavily-styled page. There's a decent chance they won't think to click the hamburger. They may accidentally mouse over a flyover and get confused/annoyed when an unwanted menu covers the screen. Or they may try to use the flyover and fail when it retracts because they moved the pointer off the menu for a moment on the way to a nested submenu.
I admit the floating nav-bar breaking page down is a personal pet peeve, though. Seriously, folks. When I page down, I expect the text at the bottom of the screen to appear at a specific place near the top of the screen, and my eyes subconsciously jump there to continue reading. Instead, your nav-bar covers up some unread text, or changes size because "reasons," or you hooked page down and got the scroll distance wrong. You probably won't get it right, so please don't do it. If I want to "nav", I can go back to the top of the page. Or you can put the "nav" on the left, since vertical space is precious, and narrower text is easier to read.
They don't care if it's hard to use, as long as it looks good.
We're weird because we think form follows function.
How can a discussion forum signal that one message was sent in reply to other message? Or how can a wiki say that one word links to another article?
If you don't have a standard for all of this (which means a standard for everything anyone could invent to run on top of this style-free web), then people will come up with their own ways of doing it, and the browser would have to support those different ways. People will keep inventing new ways of signalling that a message is a message, a user profile is a user profile, a link is a link, and so on.
Thus we end up with styles and the web as it is today.
I wouldn't say this is the actual answer, but it would go some way if browsers supported nicer default styling.
Businesses are always going to want their sites to look conspicuously "designed", because it signals to the user that it's a successful business that can afford to piss away money. But for the rest of us it would be nice if you could publish an unstyled website that looked a bit prettier / more modern than current unstyled HTML. Maybe with a modified doctype or meta tag or something to say "it's OK to render this with nice styling".
A lot of websites still work without JavaScript, and I appreciate those sites even more. Gmail still offering an HTML version, for example.
Then there are those that don't, like YouTube. I've had to effectively give it up or find a workaround like youtube-dl and mpv.
What sucks the most is that I can't use my banking site!
If you must have some JS in your life, Sunday is my cheat day. All the JS and all the calories you can handle, one day of the week!
Whenever I see something interesting here on HN or somewhere else and the site wants me to turn on JS, I realize that the subject may not be that interesting at all and just close the tab. Big time-saver! Win-win. :)
It depends on what sites you browse. I don't do any social media, use Google or any other such crap, so I have JS enabled for zero internet sites with no problem.
If I need to activate it I have a shortkey (vimperator) that opens a new firefox profile (private and prepared to load crap) for the current site so I don't load any terrible UX in my default profile.
NoScript, uBlock and SDC are a must for the current state of the Internet. It's more like garbage dropped out the window than put in the garbage bin; you need to put care into it. :)
All considered it's an interesting developer experience and the sites I work on keep working even if some scripts don't load.
Well behaving sites end up on your whitelist. Everything else gets tossed into the "yet another bloated crap page" bucket.
If the web is broken, it's not because of the technology, but the shaky economic foundation upon which it's built.
What ad revenue did was wildly expand the number of people trying to exploit the web for revenue, by generating sheer quantity regardless of content. I agree that that's a shaky foundation to base the web on.
<a href="adverts.com?ad-id=12345"><img src="adverts.com/ads/12345.jpg"></a>
As long as information can be rendered then adverts can be included.
Granted, I think such a world on the web would be difficult for all the tiny blogs that earn (usually very little) money through ad networks, where the advertiser has no idea on which page their ads will show up. But for larger sites, the "fraud" issue doesn't arise as much (because they have a reputation to lose) and you might also just pay for being visible there. The Deck is pretty successful in their niche with this model, and I am sure it would also work well for brands like the New York Times.
At the end of the day, if someone wants to promote a product and someone else wants to make money promoting products, then an advertisement model will spring up.
Wikipedia, I think, is a small picture of what it could look like, especially if better moderation is enforced.
That's the price you pay for a web that isn't controlled entirely by governments or the media. The alternative is some service like Facebook determining what the truth is on your behalf, and you becoming nothing more than a pig feeding at their trough.
People cite disinformation, bad information and "fake news" as evidence that the web is broken, but it's not. That's evidence the web is working as intended - the problem is with the humanity it's reflecting.
My N=1 anecdata seems to show the opposite:
More trustworthy correlates with no gimmicks and no ads - Wikipedia, academic articles, experts' and organizations' blogs, sometimes even HN.
Less trustworthy correlates with flashy - average online news, Facebook & co.
(As I said, N=1, bear with me.)
This should be on top.
That sounds a lot like the Semantic Web, another thing that Berners-Lee has promoted over the years but that has gained limited traction so far.
How is that different from anything you can already do?
Creating a new thing for producers to use when they could just do that with their existing medium changes nothing.
It's like devising new receptacles for discarding unused apple cores so people don't throw them on the ground. It's not going to help. People don't litter apples because they have to.
Can you elaborate on what creating a parallel, information-dense system would achieve? I mean, I agree the web could be more efficient and lighter weight, but there's nothing stopping us from doing that on the web.
Form handling, comments, user profiles and the like, along with the usual analytics and ads.
The resulting mess can be much heavier.
I'm seeing both cultures, and hoping that the lighter wins.
But the dichotomy is an interesting statement on how the wider public thinks.
The step to a default style doesn't seem such a large one.
When enough people use Twitter Bootstrap, we could just support that natively in the browser.
Seems to me that the above and the points raised in point 2 sit on opposite ends of the spectrum. Either you get a free and open internet where everyone can publish content as they like, or you police who and what can be published. The spread of misinformation seems to be a direct result of the democratic nature of the internet.
Just like children learning and growing wiser about everyday social deception etc., as opposed to policing everything people do and say offline.
Easier to teach people that they can be deceived or that bad things can happen, than to prevent any deception or bad thing from ever occurring.
The obvious examples are US news sources such as Fox News, Breitbart and Infowars. However, it could also apply to fast-food eating habits.
On the one hand, I believe information and speech should be mostly unfiltered. If people want to spread information that denies the moon landing or climate change, or extols the existence of teapots revolving about the sun, by all means. I'd rather know who they are and allow them to publicly exercise their ignorance. I assume the public would help cause a correction of the zeitgeist.
On the other hand, the fact is that some people become deeply misinformed and do things against their own interests (e.g. voting for a candidate who promises to undo systems that benefit the voter), which can affect all of us. To spare us from going down the rabbit hole of party politics, I'll just say I read a recent interview with a voter who wants to get rid of x, even though x crucially provides them a life-saving benefit of y, which they want to keep. How does that even make sense? You delve deeper and realize that some individuals just listen to the mantra of x being advertised as a terrible thing by certain media and public figures, because that's easier to digest than the details of how specifically x is good and how it could be improved.
I'd also add that technology, like the internet and smartphones, is affecting our behavior far faster than we're aware, and in some ways I think we need to acknowledge our vulnerability. When media companies or groups of websites can cheaply spread misinformation, it is VERY HARD to combat, because good information takes time to produce and interpret.
There's a reasonably good account in "Donald Trump breaks the conservative media" http://uk.businessinsider.com/conservative-media-trump-drudg...
In the health field, we've seen systematic lying by the tobacco industry and the sugar industry, and quite a lot of deception (some no doubt sincere) in the food industry. The "gluten free" craze is one example.
There's a (possibly apocryphal) quote attributed to GK Chesterton that says "When men cease to believe in God, they don't believe in nothing but in anything."
When people cease to believe their governments, their doctors, their honest fact-checked newspapers and so on, they are easily exploited by snake-oil salesmen.
You mentioned Brexit and Trump, which are case studies in the way people can be influenced by political campaigners, but I find a lot of the criticism of both of those results to be one-sided. After all, it's not as if the official Remain campaign in the UK or the Clinton campaign in the US were telling the truth, the whole truth and nothing but the truth either. However, there's less to be gained by fact-checking the losing side once the result is in.
Another recent example that I find more interesting is last week's UK budget. Much has been made of the announced rise in National Insurance rates for self-employed people. It's a controversial issue, because some people do exploit the tax system to pay less than they should by changing their employment status, but also because a lot of people who have never been self-employed themselves really don't understand how it works and tend to leap to conclusions that are objectively wrong. Sadly, rather than starting a potentially useful debate about different ways of working, different levels of risk/reward, and how the tax system should treat them, what has started is a discussion about how the party in power lied (because they gave a manifesto commitment before the last election not to raise the rate for this particular tax), and who can be made to fall on their sword this time to serve the entirely political purposes of who else.
In all of these cases, I think we would have been better off if we'd had a culture that fostered open debate and welcomed but looked critically at advice from those who might have more knowledge or understanding of any given subject. There are a lot of ways we could achieve that, but none of them involve communication channels that seek to influence which messages get through to promote a particular side of the debate. That threat is, in my view, even more serious than politicians who are blatantly lying, because we know some politicians lie a lot and can be sceptical accordingly, but without access to other information as well that scepticism might not make much difference anyway.
Which newspapers are you talking about? I don't think such a thing exists.
They all have trained journalists, sub-editors, and fact checkers. They all correct errors when they make them.
I take comfort in the knowledge that, even if facts don't ultimately win out, human stupidity cannot affect anything beyond this planet.
Even if we end up causing global mass extinctions, life will flourish again in an eon or two.
Even if we develop some sort of planet buster and literally blast Earth to bits, the dust will reform into something else and there's an infinite number of other worlds out there anyway.
Maybe you'll just make a heuristic argument showing that something grows as n² or 2ⁿ? For example, the number of possible relationships among n people grows as n². The world's population of about 7,490,000,000 individuals would support about 28,050,049,996,255,000,000 potential relationships between individuals, or twice that many opinions that individuals have about one another.
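The arithmetic behind those figures is the handshake formula, n(n−1)/2 unordered pairs for n people; a quick sanity check in Python:

```python
n = 7_490_000_000             # approximate world population
pairs = n * (n - 1) // 2      # unordered pairs: n(n-1)/2
opinions = 2 * pairs          # ordered pairs: each person's view of the other

print(pairs)                  # -> 28050049996255000000
print(opinions)               # twice as many
```

Doubling n roughly quadruples the pair count, which is what "grows as n²" means in practice.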
That's a reasonable position, but sometimes no amount of preparation can give the little guy adequate protection against a big guy seeking to exploit him. That's why we have legal systems and regulatory frameworks in the first place, and providing a deterrent against big guys taking unfair advantage to reduce the number of times it happens and some sort of remedy for the little guy in the remaining cases is generally a good thing even if it's not a perfect solution.
Continuing this analogy - it actually turns out in practice that child abusers are almost never strangers, but someone known to the child, such as a teacher.
It's opinion, but there are definitely people out there who support this approach, such as Neil Postman. It's called crap detection.
The democratic nature of physical reality hasn't led to every publication turning into the National Enquirer.
Edit: also note that misinformation isn't necessarily defamatory and so even if online intermediaries were liable for content, there wouldn't necessarily be a legal remedy to suppress most "misinformation". For example, there's probably no aggrieved party who can sue to stop a hoax or urban legend about nonexistent persons.
This isn't about people being liable for spreading misinformation, this is about a business model that thrives on misinformation.
First, WWW is relatively young, so people haven't learned how to use it well.
Second, large platforms have been created that shift the democratic nature of the internet: pseudo-open spaces that are not truly open, like the various social networks, which give users content that is a) tailored to them, leading to less openness, less improvement and more mindless consumption, and b) delivered in an obfuscated way, via closed algorithms that prioritize keeping you on the website over improving your life.
Those might be somewhat naturally occurring, but the degree to which they occur doesn't have to stay the same.
In my opinion, what's needed are privacy and freedom. And education for independent thought, rather than censorship.
Democracy assumes 1 person has 1 vote. This is not the case here necessarily.
Whereas EME is more, trust this native code please, and give it all the access it asks for?
If this has to happen, let's make it work in a way that doesn't let the stupidest ideas survive. Using WA still requires JS to interact with the user. Which is far better than the monstrosities that JS is being forced to perform at the moment.
Some simple, relatively sane things that WA can do, but JS is a bad choice for:
* Client-side encryption/decryption
* Socket management
As a second and last point to the above, I can't afford donating all my free time to help progress the decentralized internet anymore. I am 37 and I have a very happy personal life but need to work on my health a lot, I am very tired and burned out and I am finding myself unable (even if I want) to work for free without any reward in sight (not even talking about money; I am sure I wouldn't even be thanked). I imagine many others are in a similar position -- in terms of finances, in the health department, or in their general mental stance.
I very much like the idea of creating a "home internet box" -- a self-contained fanless machine connected to a UPS containing a router, firewall, your own website, your own mailserver, your own private Dropbox, a universal P2P node (BitTorrent / IPFS), etc. But as others have pointed out, our current stack of network technologies is so bloated and so full of incomplete standards -- which in turn are likely full of exploits and dark corners -- that right now the only seemingly appropriate course of action is to get rid of it all, except the physical-layer protocols, and start over.
Try making an API app that works with anything other than HTTP and HTML/JSON, and tell me how that went for you. Try using ASN.1 as a data format, or a compressed, secured IP-layer protocol. Yes, it's possible, but it's much slower than it should be. It seems we humans always want to have one "universal truth".
It's extremely sad and I am afraid we'll live to see very oppressive times pretty soon.
That's a common assumption, but I wonder how true it really is. I've certainly talked with friends in their 20s and maybe early 30s -- people who have grown up with the Internet and ubiquitous mobile devices -- and had them express a sentiment that was more frustration than ambivalence. Sometimes they did find it creepy that they'd be tracked around with ads, or that their phone was doing things based on where they were or what they had planned to do later. However, they've never known technology to work any other way and assume there's nothing they can do about it, and they value the social aspects of sharing stuff online so they keep using these services.
> I very much like the idea of creating a "home internet box" which is a self-contained fanless machine connected to an UPS -- and it contains router, firewall, own website, own mailserver, own private Dropbox, a universal P2P node (BitTorrent / IPFS) etc., but as others have pointed out, our current stack of network technologies is way too bloated and full of incomplete standards
It used to be common that your ISP would provide you with an email address, web hosting, and so on as part of your package. Everyone could set up a basic web site by just FTPing an HTML file up to their ISP's server, and then yourname.yourisp.com would show it to everyone, or you could get your own domain name and use that instead. Likewise for sending and receiving mail. Many countries set up their legal/regulatory frameworks to foster competition between ISPs, and so in practice we had a relatively decentralised Internet. You obviously still had the equivalent of today's lock-in problem if you relied on the email or web address your ISP gave you rather than your own domain, but you didn't have to.
It doesn't really take having some magic box in everyone's home to provide this sort of flexibility, though such a box would be no bad thing IMHO. We just have to stop doing so much through a tiny number of centralised service providers and social networks, and develop standards for interoperability and federation. The whole Internet was built on those principles, so I'm pretty sure we could do it for sharing data like mail and photos, and there are many interesting possibilities in terms of searching for data as well.
One of the other provisions in the new EU rules that come into effect in 2018 is effectively a right to export data from one controller so it can be processed by another, so people could potentially migrate all the data they've given to sites like Facebook or Instagram or Twitter or GitHub to some other competing service (assuming such a service exists). It will be interesting to see how that one plays out and whether it is effective in breaking the lock-in effects that have allowed so few companies to become so dominant in recent years.
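For what it's worth, the technical half of that portability right is the easy part. A sketch of what an export might look like -- the format name and all field names here are invented for illustration, not any real service's schema:

```javascript
// Hypothetical portable-data export: a controller serialises a user's
// records in a documented, machine-readable format that a competing
// service could import. All names below are made up for illustration.
const exportedProfile = {
  format: "example-portable-profile/1.0",
  exportedAt: "2018-05-25T00:00:00Z",
  user: { handle: "alice", displayName: "Alice" },
  posts: [
    { createdAt: "2017-03-14T09:00:00Z", text: "Hello, world" },
  ],
};

// Serialise for transfer; any importer that knows the format can parse it.
const payload = JSON.stringify(exportedProfile);
console.log(JSON.parse(payload).user.handle); // "alice"
```

The hard part, as the thread notes, is getting dominant services to agree on (and keep supporting) a common format -- not the serialisation itself.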
I'd argue that how modern people ended up indifferent to the growing centralization and surveillance is largely irrelevant. The sad result is still there. We all have anecdotal evidence, and mine isn't more important than yours -- that's a fact. My point is that the result is still there, and it's not changing for the better with time.
> It used to be common that your ISP would provide you with an email address, web hosting, and so on as part of your package.
> We just have to stop doing so much through a tiny number of centralised service providers and social networks, and develop standards for interoperability and federation.
First, I want you to know that I am 100% on your side. But honestly, using the word "just" for these mega-challenges is slightly naive.
First of all, most people hate the thought of "scouring through the net" for their news or daily fix of meaningless updates. There's a very good reason why social networks are a successful format, and it's not only corporate interests: people like having only one source; it makes things simple for them, and they love it. You and I disagree, but we don't speak for humanity at large, and humanity at large seems to love having a narrower view.
Secondly, advertisement supports a large part of the internet. I don't believe for a second that a serious decentralization effort would not be SABOTAGED by ad providers (maybe even including Google). They'd most likely plant paid trolls and fake-news writers and then start shouting: "LOOK! DECENTRALIZATION IS BAD! Come back to us at Google, we have AI-backed fact checking!" OK, let me put my tinfoil hat away. Even if that never happens (a stretch, IMO), we still have thousands of ad companies who will do their damnedest to make their centralized website customers (namely Facebook et al.) even more appealing than before and to make the decentralized services look behind the times, non-trendy, slow, user-unfriendly or whatever -- so teenagers and young people would continue to flock to them. In short, there's a lot of economic inertia behind centralization, and it won't be easy to kill, because there's a lot of financial interest there, and people holding such amounts of capital have historically never given up their sources of wealth peacefully.
Thirdly, standards for interoperability and federation have been attempted for decades now. I am not an expert in the field -- not in these standards, and not in the ego wars of the OSS communities -- but in my opinion the pissing contests in the OSS communities are a huge impediment. Have you taken a look at the KDE / GNOME wars of years past? It's as shameful a piece of human history as any genocide; I'd even dare say it's more shameful, because there are no lives on the line, not even any money on the line, just some basement dweller's ego and nothing else.
If we're to resist centralization and surveillance, we -- the people who are against it -- absolutely must forgo ego and become very scientific. There's already a pretty good consensus about most of what a decentralized hosting service must do (reference: see IPFS; seriously, do it -- it will take you a long time, but IMO you'll emerge better informed than before), but when it gets to the details, people either start flaming each other, or the dictator of an OSS project decides they don't care what any random person thinks and just moves forward without any scrutiny or consideration of feedback.
This must stop. The agents who benefit from centralization and surveillance are without a doubt dying of laughter at how we "the opponents" are much busier fighting among ourselves than coming together as one and offering an open, ad-free alternative to their services.
Finally -- sorry to say it bluntly -- laws, EU or otherwise, don't amount to shit. History has proven that if a big player has deep enough pockets, they'll get things their way, laws or not. Let's not go there. I think deep down we all know the laws target the citizens and not the companies, 99.9% of the time.
That said, I really don't think developing standards for interoperability and federation is such a big deal in the grand scheme of things. After all, modern networking -- including the Internet -- is built on numerous such standards, carefully designed and documented, widely implemented and effective. If we can develop a stack of protocols for totally unrelated systems to talk to each other, from the lower levels of LANs up through things like TCP/IP and SSL to application level details like sending email between SMTP servers or requesting web pages using HTTP, surely we could standardise sharing content like messages and photos from friends without relying on some mysterious centralised service.
I don't know whether most people do prefer to have only one source for their information; I'd like to see more data before forming any strong view on that one. But let's assume you're right for the sake of argument. Is that a problem? We've had systems that collected and combined multiple streams of data for ease of reference for a long time, from the earliest days of e-mail lists with digests and Usenet newsgroups and RSS feeds up to modern Web-based aggregators like Reddit, the Facebook news feed, and indeed the site you're reading right now. Modern smartphones already combine even these feeds from multiple sources into a single stream of news and communications for ease of access. Is it really so far-fetched that we could cut out the middle-man in some of these cases and move back to a more peer-to-peer, decentralised system with neutral infrastructure?
I admit I have no scientific sources; this is just a gut feeling born of my many interactions with people over my whole life. I also happen to believe that not everything can be measured scientifically -- there are a lot of things "all people know but almost nobody will admit in an official survey", and "almost everyone prefers buying only one newspaper" is one of them -- but that would go wildly off-topic, so I'll stop right here.
> Is that a problem?
Of course it is a problem. If we assume there are malevolent people who want to suppress certain kinds of news (and with Trump, Kim Jong-un et al. in power, I don't think anyone can doubt the existence of such people anymore), it's much easier for them to bribe or coerce a single entity into censoring things. For true free speech, we need a fully anonymous but attack-resilient, immutable, decentralized network (sorry to keep repeating myself, but... like IPFS). Good luck DDoS-ing or bribing/coercing that. It's designed to resist planted fake data, man-in-the-middle attacks, and DDoS -- right from the get-go.
Admittedly IPFS still isn't there -- for example, it lacks automatic replication, and the fact that there are already organizations you must pay to "pin" (replicate) your content tells me IPFS might eventually lose its credibility as well :( Again, though, off-topic. Sorry, you caught me in a very chatty mood.
>Is it really so far-fetched that we could cut out the middle-man in some of these cases and move back to a more peer-to-peer, decentralised system with neutral infrastructure?
In technical terms, it is not far-fetched at all. We're actually very close to it. Man, I'd love to work on that, if only it were my main source of income. I would pour a lot of energy and heart into such work.
In economic and practical terms, however, it's almost impossible. As I mentioned, I'm convinced there will be a lot of resistance from agents who would be negatively impacted in their pockets or their data collection. But you know what, if I'm wrong, I'll be the happiest little panda.
Those were all invented long ago and were all the first of their kind. Creating a new standard when there are no widespread alternatives is easy; doing the same where everyone is already invested is hard. E.g. that's why payment systems universally suck.
Even with payment systems, we've seen multiple contactless payment technologies become established very rapidly in recent years, and developments like Chip-and-PIN cards a few years before that. Of course online payment processing is also a much more developed and competitive industry today than it was even five years ago, which again is partly because both the technical and the regulatory frameworks have opened up in recent years. SEPA in Europe is a good example here.
Also, I'd argue HTTP/2 is not such a huge improvement as many make it out to be, but I can't deny it's some improvement compared to 1.0/1.1 -- that's a fact.
GSSAPI and Kerberos, for example, both predate SSL (Kerberos by nearly a decade, if I have my dates right). SMTP was originally intended only to transit mail across networks, clearly evidencing that there were internal (incompatible) mail protocols before then. UUCP and FTP were commonly used to transfer messages before SMTP; it took over a decade after its invention for SMTP to finally see off UUCP, and a few more years for X.400 (invented more or less concurrently) to fade away as a potential competitor.
I think his general point was -- even if there were some ugly corners of the technology, people were like "you know what, this is the N-th try and we really REALLY need this tech, let's move forward with it and fix the problem later". I think we all know how that usually ends, don't we?
It ends with a lot of legacy baggage and huge economic pressure to not change anything. So we come to our present dilemmas (outlined in the original post).
Referenced by the W3C, but surprisingly without a direct hyperlink, only by title. A bit strange considering the organization:
It happens everywhere... a constant drive to improve by building on top of layers and layers of abstraction - to the point where even the experts give up and view it as, well, magic.
I see technologies being replaced by something totally new from time to time.
What would be an example of this?
If you want more software examples, how about operating systems? The good thing about software is there's a low barrier to entry, so you can have open source projects. That said, those open source experts often know their trade due to working as technicians for business or government, or relying on support materials.
The further down the road technology progresses, the less accessible it is for individuals to create.
No. It would be a shame if software were forced to only ever be simple enough for a single programmer to trivially reproduce. We would never have gotten past the terminal in that case.
> That means that if anyone wants to write a new web browser he won't be able to
Anyone can, they just need a lot of domain knowledge and time. They can also fork and edit existing open source browsers, or develop browsers which aren't quite so feature rich. But the complexity of the modern web is the result of generations of iteration on previous work, and of giving people a platform to express themselves in the way they want.
Most people want the feature-rich modern web, and that's impossible without browsers complex enough to deliver it.
Of course it is good to have browsers supporting more and more features. I'm totally for "the web" against "the native" thing. I hope all "native apps" die in a fire and everything starts being written to run in browsers, as everything gets faster and better.
I just commented about a serious drawback of all this. I don't know what would be a solution, however, or if there's need for a solution.
Of course anyone can if they have domain knowledge and time. Anyone could have built a web browser 50,000 years ago if they had knowledge and time.
Those who control the web browsers control the Internet. And web browsers have gotten so huge and hyper-complex that only wealthy corporations can afford to enter the market as new players.
Or do you mean to say that Google, Apple and Mozilla have been adding features to their browsers in recent years with the objective of controlling the Internet?
For the specific case of your example I prefer the current browser behavior. So what?
I've long desired for a formal semantics for CSS for example (but I believe flexbox is a step in the right direction in this regard).
If we're heading into a p2p future, my opinion is we're going to have to build it on web (HTML/CSS) technology for being able to leverage browser efforts, and for cross web/postweb publishing.
OTOH, more and more CSS ad-hoc syntax, and entirely new procedural web runtimes (cough WebAssembly cough) should be resisted.
Just take SASS as an example. Why does it exist?
Meet the new boss: same as the old boss, but now you have to learn to deal with them all over again. :-(
"We must push back against misinformation by encouraging gatekeepers such as Google and Facebook to continue their efforts to combat the problem, while avoiding the creation of any central bodies to decide what is “true” or not."
And who gets to decide who the gatekeepers are?
An oligopoly made up of elite companies is no better than a monopoly, especially when the members are demonstrably partisan.
This whole idea that we need some sort of authority to act as the Ministry of Truth is misguided.
You seem to keep coming back to this point, but the quote I gave before literally advocates the opposite of creating such an authority.
Kind of like the way to judge a person is how they treat those who are less fortunate, not how they treat their peers.
Moreover, in the world today, even cultures with broadly similar values such as the US and much of Europe often take very different views on this particular issue, which makes finding a reasonable consensus for a system that spans these different places... challenging.
What we need is a model where you pull information you request from distributed and diverse pools of public domain content.
Facebook is a telecommunication medium, which means it should conform to the telecommunications act, which says that there should be a level playing field, and an open network.
I guess TBL should best start thinking about the protocols in such open systems.
Like what Keybase is doing?
It's not really a global issue; it's a current-affairs issue particular to a specific geography. And it's not really an internet issue, I think, but a human one.
What I find interesting is that Trump is adopting the narrative that emerged to criticise him, and using it to criticise media bias in general. That's interesting because political bias and misinformation can be separated -- actual wrong reporting of facts vs. bias of interpretation -- but they can be argued to produce the same effect.
It's too easy for misinformation to spread everywhere.
Computer systems should be regulated for safety, which includes confidentiality and integrity, like everything else.
If the government had not intervened, those banks would have been bankrupted, and rightfully so, because they essentially made a giant bet on the housing market and lost.
The government intervention in the case of the banks prevented a valuable feedback mechanism from taking place, whereby the "bad players" (as in, bad at the game --- at gambling) would have learned from their mistakes. So instead of the negative feedback of bankruptcy, they got the positive feedback of bailouts, and we should expect to see another financial crisis in the not-too-distant future.
What remains to be seen is whether the government itself has a working feedback mechanism for this situation. Will they bail out the banks again? in other words.
What does "regulated" mean to you?
No matter what our worldviews, consumers won't obtain more capability and time. I'm a technical professional and educated person, and I certainly don't have the time or resources to answer those questions, even the very last one.
> What does "regulated" mean to you?
I don't understand the question. In the case of IT security, I can think of many ways to do it: liability for bad security, rules requiring good security, etc. I don't know enough about regulation to know what works in which situations, but some minimal rules and liability sound good.
The internet exists as an information resource that people need to be able to sift through themselves, not something that governments or other self selected groups decide to arbitrarily censor for whatever selfish reasons they have.
As if that wasn't a problem outside the web. Defenders of democracies like to dream about "transparency and understanding".
Chomsky has been warning about the dangers of mass media turned propaganda machine since the 1980s. Hell, George Orwell bitterly quipped that history ended in 1936 and everything since then has been propaganda.
Democracy is a lesser evil, which many people born into it fail to appreciate because their imagination does not render the greater evils realistically.
What gives you the impression that Corbyn is against democracy? In fact, he participates in the 'democratic' system of his own party and of the UK. I personally refuse to participate at all in the bourgeois democracy, but that doesn't mean I'm against democracy as a principle.
Because I have the impression that there's nothing I can say that will make you consider my arguments -- as I live in a democracy -- or even think about them -- as you also live in a democracy, and seem to have accepted as a principle that "democracy is a lesser evil".
It also so happens that I'm dumb enough to have wasted time listening to arguments from neoreactionaries living in democracies and thinking about them. But mainly I think the test should be empirical.
The ad network business is a big balloon that will soon explode, because the number of actual customers you get is so small. I guess most advertisers know this but still publish ads because they're cheap enough not to skip. When was the last time you willingly clicked an ad? I believe in the past ten years I've willingly clicked fewer than five ads.
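To put rough numbers on the "so few customers" intuition -- all rates below are made-up illustrative assumptions, not measured industry figures:

```javascript
// Back-of-the-envelope ad economics with invented illustrative numbers.
const impressions = 1_000_000;   // ads shown
const clickThroughRate = 0.002;  // 0.2% of impressions clicked (assumed)
const conversionRate = 0.02;     // 2% of clicks actually buy (assumed)

const customers = impressions * clickThroughRate * conversionRate;
console.log(Math.round(customers)); // 40 customers from a million impressions
```

With numbers anywhere in this ballpark, the whole model only works because each impression costs the advertiser a tiny fraction of a cent.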
And if the objection is to people being misinformed by clickbait headlines and sensationalised takes on events in articles, I'm struggling to imagine anything that could be more inclined to worsen that situation than increasing the proportion of respectable news organisations that tell people they can pay a monthly fee to access the articles behind the headlines or bugger off and read Breitbart's take on it instead...
I often hear many of the same people fighting "against government overreach in surveillance laws" (as Berners-Lee mentions) while at the same time advocating more legislation to govern information use/misuse on the web. I don't think it's realistic to expect government overreach to magically work where we want it and stop right where we don't.
Many of these problems aren't on the forefront of most people's minds (yet), but as the issues become more publicized and people begin to understand their importance, then we (as in "the people", not the government) will have a greater voice - and more importantly, power through informed choices - to make a difference.
Misinformation spreads everywhere, not just on the web. Who decides what is "misinformation"?
All speech and information is political, because man is a political creature. Who decides what is "political"?
His first point about losing control of our personal data is right on though.
Even so called "heroes of the web/freedom" are on the "fake news bandwagon".
What the hell have we come to when this is considered enlightening discourse.
We're all in deep shit and this is a taste of things to come this century.
We almost have the tools now:
* cheap or free blog hosting with easy markup and non-public posts
* self-hosted commenting with a URL field for commenters, enabling discoverability
* friendly RSS readers (I use NetNewsWire)
* password management built into most systems and/or browsers, to keep track of individual logins
There's work to be done to make it more user-friendly, but all the tools are there.
Is this anything but opportunistic scare-mongering?
"Spy agency own spy tools. Wouldn't it be scary if they used them on you?!?!?"
As far as I'm aware, the only documented case of inappropriate tool use is overly broad selection criteria on legitimate pipelines of information (and related abuse of access to that data) -- the Snowden leaks. That was a doozy with serious constitutional implications.
But that's substantially different from "they're hackin muh TV!" just because the CIA developed the ability to do so as part of their mission to spy -- and I don't believe we've seen evidence of indiscriminate use.