And linguistic style reflects how a person thinks (so the theory goes). So when you mimic someone's linguistic style, you may find yourself succumbing to their way of thinking.
The theory continues that different programming languages reflect different linguistic styles and thus your choice of programming language can affect the way you think.
As an example, I do a large amount of text processing work using sed. Interestingly (imo), sed is a language designed by someone who had a degree in Psychology.
As a result, to some extent my way of thinking would probably differ from someone who uses, e.g., Perl as their preferred language for processing text.
Or vice-versa, i.e., one's thinking style affects one's choice of programming language. I believe this is why some programming languages just never clicked with me. My brain definitely resisted Perl...at first....
No doubt. But would you agree that the more you use someone else's language, the more your thinking adapts to their way of thinking, as evidenced in how they designed the language?
A comparable alternative is never going to be free for everyone. Someone has to pay the Amazon fees for S3 use. Isn't it true that the few Dropbox users who do pay for it support all the ones who do not?
My understanding is that copyright is enforced by the copyright holder, not the state (unless of course they are the copyright holder).
As such, infringement is not a "crime". Hence it is not "illegal". There is no violation of copyright "law". Infringement is a violation of someone's copyright, a right to sue granted by the government. A private right of action is created.
Simply put, the copyright holder can sue for damages. They may or may not choose to do so.
Is this incorrect?
Is there a criminal statute covering such copyrighted works as discussed in the article?
What if users are not completely ignorant of the geographical locations of the IP addresses they choose to store and use, and if we allow them to make their own choices?
Determining which is the closest server or the most responsive is not rocket science.
A good HOSTS file coupled with a good local cache is, in my experience, faster than any DNS service. But it's relatively rare to see users setting themselves up this way. My guess is that this is not due to difficulty but to ignorance. Maybe even peer pressure.
The "experts" will tell you to use a shared cache, exposing you to all manner of security flaws.
Ask yourself how many DNS lookups the average user makes in a day.
How many of those lookups are for the same sites, day after day?
How many times do the IP addresses for these sites actually change?
Finally, ask yourself how many of those lookups are for IP addresses not attached to any website you will ever visit (i.e., they are for serving advertisements, behavioural tracking elements, etc.).
You can also restrict queries only to authoritative servers.
This is something I threw together as an experiment. For me, it works beautifully.
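A minimal sketch of that non-recursive, authoritative-only idea (not the exact setup described here; it assumes the third-party dnspython package, and the TLD server address in the comment is only illustrative):

    import dns.flags
    import dns.message
    import dns.query

    def authoritative_lookup(name, server_ip):
        # Build a plain A query and clear the "recursion desired" bit,
        # so the server answers only from its own authoritative data.
        query = dns.message.make_query(name, "A")
        query.flags &= ~dns.flags.RD
        response = dns.query.udp(query, server_ip, timeout=5)
        # An authoritative server either answers directly or hands back
        # a referral (NS records) that you can follow yourself.
        return response.answer or response.authority

    # e.g. ask a .com TLD server about a name (address is illustrative):
    # print(authoritative_lookup("example.com.", "192.5.6.30"))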
Then the "experts" will tell you we need DNSSEC, to counter the problem posed by using shared caches.
The impetus for its resurgence is the use of shared caches and "cache poisoning". Do we have to use shared caches? No.
DNSSEC has become like security theater - the simple fact is that no one is accountable for the information in the current DNS except the site owners themselves.
All the DNSSEC proponents can do is pray that more people will start using it. It's a cash cow for some consultants.
The other simple fact is that the most important TLD servers do not change IP addresses very often. They are more or less static.
Anyone can download a copy of those numbers and store it.
Does it matter if each individual record is signed? It only matters if you like to do recursion. Maybe what matters more is that the file you download is itself signed.
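As a rough illustration of how small that download is, here is a sketch that pulls the standard root hints file from its well-known InterNIC location and keeps the addresses locally; verifying any published signature on the file is a separate step left to you:

    import urllib.request

    ROOT_HINTS = "https://www.internic.net/domain/named.root"

    def fetch_root_servers():
        # The root hints file is a small, rarely changing, zone-file-style
        # list of the root servers and their addresses.
        text = urllib.request.urlopen(ROOT_HINTS).read().decode("ascii", "replace")
        servers = {}
        for line in text.splitlines():
            if not line.strip() or line.startswith(";"):
                continue                    # skip blanks and comments
            fields = line.split()
            # e.g. "A.ROOT-SERVERS.NET.  3600000  A  198.41.0.4"
            if len(fields) == 4 and fields[2] in ("A", "AAAA"):
                servers.setdefault(fields[0].rstrip("."), []).append(fields[3])
        return servers

    # for name, addrs in sorted(fetch_root_servers().items()):
    #     print(name, *addrs)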
For the DNSSEC system to work, and to restore some confidence in shared caches (which may be censored by SOPA-like legislation), the people who most need to deploy DNSSEC are the operators of the authoritative servers for the websites themselves.
Will they undertake this? Weighed against the triviality, and the increased security to the user, of a local cache and HOSTS file that avoid cache poisoning altogether, is anyone going to bother learning DNSSEC?
DNSSEC is a huge burden. Unless of course you offload responsibility to someone else. Cha ching. But no one is going to be more secure using something they delegate to someone else and cannot themselves understand.
To someone who wants to learn, I can explain a HOSTS file and how to do non-recursive lookups much easier than I can explain DNSSEC.
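For comparison, a HOSTS file entry is nothing more than a number and a name, along these lines (the address is a documentation placeholder, not any site's real address):

    # /etc/hosts on Unix-like systems,
    # %SystemRoot%\System32\drivers\etc\hosts on Windows
    203.0.113.7     example.com www.example.com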
We have wider acceptance of EDNS. And there are people advocating TCP. Obviously some people really want DNSSEC to take off. Why?
If the Internet can handle the load of EDNS and TCP, all for a simple number lookup that otherwise fits in 512 bytes and requires no connection setup/teardown, then querying authoritative servers instead of doing the inherently insecure recursion routine through other people's caches is not going to bring the Internet to its knees.
algoshift, you are absolutely correct. Decentralisation is the way to go and, imo, is in the true spirit of the Internet.
How much of the $87M went toward legal fees? Lots of negotiation here. The technology is trivial by comparison.
Thinking out loud...
Has any publisher managed to control their content in the digital age?
Academic publishers still manage to keep their content under control. How?
The cost of a subscription is exorbitant. Only large entities can afford it.
The large entities, e.g. universities or large firms, pass on the cost to their customers, e.g. students or clients/customers.
There's also the small fact that the content is not heavily marketed to, nor in high demand among, the general population. Unlike music.
Perhaps music should only be marketed to customers who can afford it: large entities.
It wouldn't stop piracy by individuals but it would ensure the existence of some customers who were willing and able to pay, and to refrain from piracy.
Imagine a situation where working for a large firm or attending a university gives you a temporary subscription to a vast catalog of not only academic journals but also major label music. It would be a huge perk.
Yes there would be piracy, but the large firms would have an incentive to try to stop it. They know who their employees and students are and could no doubt do a better job preventing piracy than the RIAA lawyers have done. Whatever might happen, the labels would still make money from exorbitantly-priced subscriptions.
gvb is correct. Local cache is the way to go. Some have known this for a long time. SOPA could be a blessing in disguise if the meme spreads. Cache poisoning becomes irrelevant.
If SOPA neuters search engines, maybe users will resort to their own scans of port 80 (or whatever we designate as the "public" port) to find websites. The legality of scanning, and what is and is not public, may become a hot issue. We may get some legal clarity.
There's a decent comment on the blog post. This is not rocket science to understand. This could (unforeseeably) spell the end for ICANN and the vast domain squatting business.
A user can attach any name he wants to an IP number.
If an IP hosts multiple sites, you'd need to figure out the correct HTTP Host header to send it, unless you use a static IP for every site. Most web hosts are using name-based virtual hosts: https://httpd.apache.org/docs/2.2/vhosts/
We implement both name- and ip-based vhosts, but only for sites with dedicated IPs (read: sites that need a dedicated IP for SSL.) You can do SSL on a shared IP, but it gets more complicated. Cloudflare does it by using a cert with multiple "certificate subject alt names", but security / site spoofing would still be an issue if you're ditching SSL at the same time (which would be a good idea).
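For anyone who hasn't read that Apache doc, a name-based setup looks roughly like the following (hostnames and paths are placeholders); the server picks the site by matching the request's Host header against each ServerName:

    NameVirtualHost *:80

    <VirtualHost *:80>
        ServerName   site-one.example
        DocumentRoot /var/www/site-one
    </VirtualHost>

    <VirtualHost *:80>
        ServerName   site-two.example
        DocumentRoot /var/www/site-two
    </VirtualHost>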
This is the /real/ web 2.0...maybe even web 3.0: peer-to-peer hosting without ICANN or Verisign and co. We can do it, and people are working on building it right now. Reddit has Meshnet and NameCoin seems like a good idea. Now we just need a similar solution for SSL and a good way to host/update things. The future looks bleak for ISPs, CAs, registrars, and non-free countries.
There is a solution for SSL. Of course it's not SSL, it's more secure and it's faster.
You can set several "domain names" (hostnames) for a server.
There is a working prototype.
Seek with open mind and ye shall find.
I agree what you allude to is the real web 3.0 but, imo, it's not accurate to call it "web 3.0" because it's more than just "the web". Lots more than just web servers will run on a properly constructed peer-to-peer platform.
The public "web" is for Google and Facebook, their marketers and massive surveillance.
Err, perhaps I'm missing something, but what does this comment mean? Anything?
You're new to Hacker News, so I'm not going to downvote this, but a piece of advice: this sort of vague, useless-without-context comment adds nothing to the dialogue and will be driven down before you can blink.
Yes, I agree that local cache is the way it will go, but I do not think that SOPA will be a blessing in disguise. In fact, I think the Internet will begin to more closely resemble the television industry, with a large technical underground. Perhaps some P2P DNS service will become popular; however, if I recall correctly, the bill outlaws blacklist-evasion software.
SOPA does not have to pass to achieve the effect I'm suggesting. It will simply open people's eyes to the centralisation that is ongoing. The "single point of failure" will become evident and people will start to think.
No software is needed. Nor is a DNS. All modern PC OS's have the necessary capabilities built-in.
To connect to a website all that is needed is the knowledge of an IP number, a port (almost always 80) and, optionally, a hostname.
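As a rough sketch, using nothing beyond a standard Python install (the address and hostname are placeholders):

    import socket

    def fetch_by_ip(ip, host, port=80, path="/"):
        # Speak plain HTTP/1.1 to the IP directly; the Host header tells a
        # name-based virtual host which site we want. No DNS involved.
        request = ("GET {} HTTP/1.1\r\n"
                   "Host: {}\r\n"
                   "Connection: close\r\n\r\n").format(path, host)
        with socket.create_connection((ip, port), timeout=10) as sock:
            sock.sendall(request.encode("ascii"))
            chunks = []
            while True:
                data = sock.recv(4096)
                if not data:
                    break
                chunks.append(data)
        return b"".join(chunks)

    # print(fetch_by_ip("203.0.113.10", "example.com")[:300])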
Is it really possible to prohibit this knowledge?
Imagine a world where there are "forbidden" phone numbers. However no one is forbidden from dialling them. The sole prohibition is against telling anyone what they are.
This is what SOPA blacklisting purports to achieve.
General purpose PC's will only disappear if the market for them disappears.
Ultimately it is
1. the nerds who educate (or fail to educate) the consumer market about what is possible using a general purpose PC and
2. the consumers who buy electronics, including general purpose PC's,
who are in control of the future production and general availability of them.
Common sense says that factories will only produce what can be sold. They respond to demand.
Touchscreen, keyboardless tablets and "smartphones" that cannot be connected to and controlled by a more flexible general purpose computer are limiting, not enabling.
This is however not the message being sent to consumers.
> General purpose PC's will only disappear if the market for them disappears.
I think this is the key point.
In some sense, the market for them has been disappearing. The rise of web and mobile apps has shown beyond any doubt that a lot of traditional software is far too big and complicated for a typical home user's needs, and that Joe Public can in fact be quite happy sharing his life story on a cloud-hosted blog, keeping up with his friends via Twitter, playing simple puzzle games on Facebook, and watching streaming movies on his iPad. Notice that all of these are just variations on consumption/communication using on-line services. Heavyweight tools like e-mail and word processors just aren't necessary for what most people care about in their daily lives at home. In fact, I suspect that for the average home consumer, open access to data (both their own personal information and multimedia content they have paid for) and the whole walled garden debate are going to be far bigger issues than open access to general purpose hardware and software.
On the flip side, there will always be enthusiasts who do want more flexible hardware and software for their own enjoyment. All of those activities I mentioned above use software written by geeks, running on infrastructure built by geeks and funded in large part by businesses. And businesses themselves have widely diverse requirements and build many software tools both in-house and to sell to the public and/or other businesses. In short, the entire global IT economy is built around geeks and business, and geeks and businesses need general purpose computing. No amount of lobbying by special interest groups is going to beat the combined might of a global economy that now depends fundamentally on progress in IT for its recovery and future success. If a few sites on the scale of Google and Facebook do go dark for a day in protest against SOPA, I think a few politicians, a lot of Big Media executives, and the entire Web-surfing population of the world are going to learn that lesson very quickly.
"In some sense, the market for them has been disappearing."
Agreed! That's what I thought the keynote would be about before I began watching. Users cared about general purpose computers before, because the only way you could accomplish a task was to run it on your own computer. Now, the thin clients are essentially here, and the ability to run an arbitrary program is not very important to many people.
Or so they think, anyway. I'm worried about the point where devices at home are not Turing-complete, and they just connect to authorized services. Only certain companies would really have the ability to program anything at all (because they'd be the only ones with real computers), and it would be easy for the government to step in and control them.
VJ talks about something called "Evolutionary Pressures" at the end of his Google Talk a few years ago. Watch the video if you haven't already.
SOPA, if it passes, may create such pressure. Its main target is DNS.
CDN's like Akamai rely on universal use of one DNS, the one that SOPA aims to regulate, to accomplish their kludge.
Food for thought.
I prefer "Chicago" to any of the other alternatives.
End-to-end can be realised. Overlays work for small groups. Small groups can share with each other. Networks connecting to other networks. It's been done. ;)
I stupidly downvoted you (SOPA! GAH!) but you made a good point, so, sorry.
Generally one of the reasons I don't freak out about laws regulating the Internet is that the Internet as we know it today is rickety anyways.
In the last 10 years we've seen the monumental shift (predicted in the late '90s but widely scoffed at) from ad hoc network protocols and native client/server implementations to a common web platform. Nerds still recoil a little at this, thinking about the native/ad-hoc stuff they still use (like Skype), but if you look at the industry as a whole, particularly in the line-of-business software development that actually dominates the field, it's all web now. TCP ports are mostly irrelevant. If you were going to start a new app today, would it be a native implementation of a custom protocol? Probably not!
One of the things that got us to this state was in fact regulatory: default firewall configurations that don't allow anything but web out, and disallow arbitrary inbound connections.
Over the next 10-15 years, I'm hoping we'll get similar nudges away from first-class addressing only for machines (which are less and less useful as an abstraction for completing tasks using technology) and towards first-class addressing for subjects, interests, content, &c. This is an insight lots of people have had, from David Cheriton & TIBCO in the '90s through the RON work at MIT through VJ's work at PARC & so on.
I wrote off Lessig for a bunch of years after reading _Code_, but I think he fundamentally has it right in the sense that implementors have as powerful a say in how things are going to be regulated as legislators do. America had the Constitutional Convention after the Articles stopped scaling; the Internet will have little blips of architectural reconsideration when the impedance between the technology and what people want to legitimately do with the technology gets too high.
(I'm trying to make a technical point here, not a political one; I support copyright, and am ambivalent about SOPA.)
With the widespread use of anycast, does "first class addressing for machines" even matter any more?
In situations where anycast is used, how do you even know what machine a given address is connecting you to?
RON was a step in the right direction, imo. With small overlays, MAC addressing comes into play and it becomes a little easier to know what machines (not simply what "addresses") you are really connected to.
Yes, because conceptually you're still talking to a computer (it just happens to be a globally load balanced computer). It's still the unicast service model, and it's still fundamentally about building a channel between two operating systems.
Imagine if you dialled a full telephone number, including applicable country code and local regional code, and depending on where you were dialling from, you reached a different telephone in a different geographical location.
As long as it's an answering machine and the message is the same at every telephone reached by this number, it does not matter.
But as soon as you want to reach a live person, and not simply "content", then what?
Is end-to-end about accessing "content" or is it about communicating with someone operating another computer?
I wouldn't want to noodle on this too much. I take your point. Anycast abstracts away some of the notion that you're talking to a specific computer. But the unicast service model is inherently about computers talking to each other. Many, maybe most, of the most important Internet applications aren't about 1-1 conversations, or if they are, they're 1-1 conversations in special cases of systems that also work 1-many.
One opinion is that a very important Internet application will inevitably be 1-1.
Who did the FCC just hire as their new CTO? What is happening to POTS?
1-to-many systems, hacked to give an illusion of 1-1 conversations, e.g. SMTP middlemen, social networking's HTTP servers or Twitter-like broadcast SMS, are what we settle for today, but, imo, this is a limitation, not a desired goal.
Please keep in mind that it's not solely IP law that is the problem. It is how the law is being used. And who is (and is not) using it.
By and large, those who hold the lion's share of the world's enforceable IPR, and who do the lion's share of enforcing it, are not individuals of average income.
If we talk about "civil liberties", maybe we should talk about every individual's right to register IP and to enforce their IPR. Under the current systems, ownership and enforcement of IP is concentrated in the hands of the few, not the many.
As you might guess, most of the world's IP is under the control of large corporations.
Why can't the average individual own and enforce IP with the same effectiveness? Examine the systems for registering and enforcing IP and see why.