

DNS got you down? Share ip addresses with freedom.txt - mvanveen
https://github.com/mvanveen/freedom.txt

======
VMG
_An open internet is more important than corporations. They have failed us
because of their natural greed. They must be changed._

To be fair, those evil, greedy corporations also helped defeat SOPA.

 _An open internet is more important than security, copyright infringement,
terrorism or child pornography. We will not be fooled by the strategies of
fear employed by those who wish to censor us._

This is the kind of false dichotomy the enemies of an open internet want to
create. The point is that we can fight all these things and _still_ have a
free internet.

Finally, the URI-to-IP table is confusing. Example:

    http://hoop-la.ca/freedom.txt: 70.66.72.121

But <http://70.66.72.121/freedom.txt> fails because of the missing hostname.
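
The failure comes from name-based virtual hosting: many sites can share one IP, and the server picks the right one from the request's Host header, so fetching by bare IP loses that information. A minimal sketch of the workaround (the hostname and IP are taken from the example above; whether that server still answers is not assumed):

```python
def build_get_request(host, path):
    """Build a raw HTTP/1.1 GET request.

    With name-based virtual hosting, the Host header is what lets a
    server behind a single IP address select the right site, so a
    request sent to the bare IP must still carry the hostname here.
    """
    return (
        "GET {path} HTTP/1.1\r\n"
        "Host: {host}\r\n"
        "Connection: close\r\n"
        "\r\n"
    ).format(path=path, host=host)

# To fetch the example above by IP, you would open a socket to
# 70.66.72.121:80 and send this request, which still names the site:
request = build_get_request("hoop-la.ca", "/freedom.txt")
```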

This is slacktivism at its worst.

------
iradik
I like the concept, but why does this map http addresses (<http://server.com>)
to ip addresses? Why not instead have a table of ips and domains (like
/etc/hosts) so you can then just sync the freedom.txt file into your hosts
file? That's what the hosts file is designed for after all.

This has the added benefit of working with virtual hosting configurations
where the browser has to pass the Host header to the server to get a web page.
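
Syncing the published format into a hosts file would mean extracting the hostname from each URI line. A rough sketch of that conversion, assuming the one-entry-per-line `uri: ip` format shown elsewhere in the thread:

```python
from urllib.parse import urlparse

def freedom_to_hosts(text):
    """Convert 'http://host/freedom.txt: ip' lines to /etc/hosts lines."""
    entries = []
    for line in text.splitlines():
        line = line.strip()
        if not line or ":" not in line:
            continue
        # Split on the LAST colon, since the URI itself contains one.
        uri, _, ip = line.rpartition(":")
        host = urlparse(uri.strip()).hostname
        if host:
            entries.append("{} {}".format(ip.strip(), host))
    return "\n".join(entries)

print(freedom_to_hosts("http://hoop-la.ca/freedom.txt: 70.66.72.121"))
# → 70.66.72.121 hoop-la.ca
```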

~~~
mvanveen
The script grabs the domain of the URI in question, so it's effectively a
domain-to-IP mapping.

To answer your question more specifically, the freedom.txt format started in
this thread: <http://news.ycombinator.com/item?id=3498505> , so it links to
the uri of a freedom.txt for historical reasons.

/etc/hosts integration is in the works; you're the 2nd person to recommend it
so far (the first being my roommate ;-) ).

What's up now is just a quick hack to get momentum for the project in the
meantime. If this takes off, the innovation will be social rather than
technical, so I thought I'd spend 5 minutes to see if it's worthwhile.

Thanks for the input!

 _Edit_ : I'm working on /etc/hosts integration right now.

~~~
Natsu
I know the web moves fast, but you got a good laugh out of me for calling
something someone came up with in a thread yesterday "historical reasons" :)

~~~
mvanveen
Fair. Probably not the best wording on my part.

For context, I've been in contact with the original poster and did not want to
separate our efforts. This idea was suggested in the last thread, and I
thought it was worth a try.

Leveraging what's already out there seemed like a good idea.

------
gioele
Two nitpicks:

1\. `/robots.txt` is here to stay, but everything else should use the `/.well-
known/` prefix so that it does not squat on the root URI space:
<http://tools.ietf.org/html/rfc5785>.

From the RFC:

> To address this, this memo defines a path prefix in HTTP(S) URIs for these
> "well-known locations", "/.well-known/". Future specifications that need to
> define a resource for such site-wide metadata can register their use to
> avoid collisions and minimise impingement upon sites' URI space.

2\. Alternative DNS root systems have been tried in the past [1], and the IETF
has consistently argued against them on technical and political grounds. The
ORSN was probably the most authoritative alternative DNS root, backed by Vixie
and others, and it stopped operating a few years ago.

From the Wikipedia article on ORSN [2]:

> The founders of ORSN are concerned that ICANN is ultimately controlled by
> the government of the United States. Their aim was to limit the control over
> the Internet that this gives, while ensuring that domain names remain
> unambiguous. They also expected their network to make name resolutions
> faster for everyone.

[1] <http://en.wikipedia.org/wiki/Alternative_DNS_root>

[2] <http://en.wikipedia.org/wiki/Open_Root_Server_Network>

~~~
zeppelin_7
Much like /robots.txt and /humans.txt, I think this belongs at the root. I
understand that we do not want to trash the root URI space. Perhaps this can
be merged into humans.txt instead, as a separate section?

I also don't think this is necessarily an alternative DNS. Much like the hosts
files that circulated the internet, this is just a way to track some major
IP addresses that can be queried when the world ends.

------
mprovost
Just start some alternative root servers. There's nothing that makes the
current roots authoritative except for convention and a hardcoded list of IPs.
It's not even hard to start up a server that can act as the authoritative
source for blocked domains and pass through requests to the real roots for
everything else.
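
The pass-through idea can be sketched even at the stub-resolver level: answer from a local table of overrides first, and fall back to ordinary resolution for everything else. A minimal sketch (the override table here is hypothetical data, not part of the project):

```python
import socket

def resolve(name, overrides):
    """Answer from a local override table, else pass through to normal DNS.

    This mirrors what an alternative root does at larger scale: be
    authoritative for a chosen set of names and forward the rest
    untouched to the existing resolution chain.
    """
    if name in overrides:
        return overrides[name]
    return socket.gethostbyname(name)

# A blocked domain gets a locally pinned address; any other name
# resolves through the system's configured resolvers as usual.
overrides = {"blocked.example": "70.66.72.121"}
```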

------
samarudge
Off topic, but can anyone tell me the history of

[SOME DATA FORMAT]

\---

[SOME OTHER DATA FORMAT]

?

I'm sure there must be a better way to have multiple data types in a single
file than just searching for '\n---\n' and splitting, no? What's the general
principle when one data block has a '\n---\n' in its data?

I first saw it in Jekyll, I believe they call it 'Front Matter' (Though their
implementation seems to be using a '---\n' to start the block) but I can't
find any more information on it. Is it some sort of
standard/specification/thingy I've completely missed?
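
One way front-matter parsers sidestep the embedded-delimiter problem is to treat only the *first* delimiter pair specially, so a literal `---` later in the body never gets touched. A rough sketch, assuming the Jekyll-style convention of a leading `---` line:

```python
def split_front_matter(text):
    """Split a Jekyll-style document into (front_matter, body).

    Only the first '---' pair is treated as a delimiter; splitting
    once means a literal '---' inside the body is left alone, which
    avoids the ambiguity of searching for every occurrence.
    """
    if not text.startswith("---\n"):
        return ("", text)
    rest = text[len("---\n"):]
    front, sep, body = rest.partition("\n---\n")
    if not sep:
        return ("", text)
    return (front, body)

doc = "---\ntitle: Hello\n---\nBody with a --- inside stays intact."
front, body = split_front_matter(doc)
```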

~~~
Terretta
Some email clients also discard anything after a line of just two or three
dashes. I recall seeing that in clients as old as Eudora and as recent as
Entourage.

Used to be that you'd put \n-- \n before your sig, and then replies would omit
the sig.

ADDED:

See also <http://en.wikipedia.org/wiki/Signature_block>

"The formatting of the sig block is prescribed somewhat more firmly: it should
be displayed as plain text in a fixed-width font (no HTML, images, or other
rich text), and must be delimited from the body of the message by a single
line consisting of exactly two hyphens, followed by a space, followed by the
end of line (i.e., "-- \n").[1] This latter prescription, which goes by many
names, including "sig dashes", "signature cut line", "sig-marker", and "sig
separator", allows software to automatically mark or remove the sig block as
the receiver desires."

1\. The links in Wikipedia reference RFCs dating back to 1994.
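
The sig-dashes convention quoted above makes stripping mechanical: cut at the first line that is exactly two hyphens followed by a space. A small sketch:

```python
def strip_sig(message):
    """Remove an RFC-style signature block from a message.

    The delimiter is a line consisting of exactly '-- ' (two hyphens,
    one trailing space); plain '--' or '---' lines are not sig markers
    and are kept, which is why the trailing space matters.
    """
    lines = message.split("\n")
    if "-- " in lines:
        lines = lines[:lines.index("-- ")]
    return "\n".join(lines)

msg = "Thanks!\n-- \nAlice\nalice@example.com"
```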

------
rwmj
I can tell you from direct experience that this doesn't scale, and that was in
1991, when the internet was a much smaller place.

What's with mapping URLs to IP addresses anyway? That's completely wrong ...

