Accessing the Internet by Email (2002) (faqs.org)
27 points by ecliptik on Oct 1, 2021 | 9 comments



In ‘95, Universal Access* offered a web fax service: you could send a fax requesting a URL and get the result back by fax. Each link in the result was indexed, so you could fax back a request to follow it. Some people actually used it!

RMS used to read the web by mail — perhaps he still does.

* co-founded by Brian Fox of Bash fame.


> RMS used to read the web by mail — perhaps he still does.

He still does [1]:

> "I generally do not connect to web sites from my own machine, aside from a few sites I have some special relationship with. I usually fetch web pages from other sites by sending mail to a program (see https://git.savannah.gnu.org/git/womb/hacks.git) that fetches them, much like wget, and then mails them back to me. Then I look at them using a web browser, unless it is easy to see the text in the HTML page directly. I usually try lynx first, then a graphical browser if the page needs it (using konqueror, which won't fetch from other sites in such a situation)."

[1]: https://stallman.org/stallman-computing.html
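
For the curious, here is a minimal sketch of that kind of gateway, assuming a Unix box with wget and a working mail(1) command. It is a hypothetical illustration of the idea, not the actual script in the womb/hacks repo linked above:

    #!/bin/sh
    # Hypothetical fetch-by-mail gateway (illustration only).
    # Reads one incoming message on stdin, extracts the first URL
    # and the sender, fetches the page with wget, and mails it back.
    msg=$(cat)
    url=$(printf '%s\n' "$msg" | grep -m1 -o 'https\?://[^[:space:]]*' | head -n1)
    sender=$(printf '%s\n' "$msg" | sed -n 's/^From:.*<\(.*\)>.*/\1/p' | head -n1)
    [ -n "$url" ] && [ -n "$sender" ] || exit 1
    wget -q -O - "$url" | mail -s "page: $url" "$sender"

Hook it up as a mail alias or a procmail recipe and it behaves as described: mail in, page back out.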


My first encounters with the early internet were through a mail server. You’d dial up, send whatever was in your outgoing folder, download any new mail waiting for you on the server, then hang up. Very quick for the time. It was actually fun sending commands to listservs by email and receiving the responses at the next dial-in. I was a teenager, and at some point I got in trouble over the phone bill. Good times, though.
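
For anyone who never used one: a LISTSERV was driven entirely by commands in the message body, with the response arriving at your next dial-in. A typical command message looked something like this (the address and list name are placeholders):

    To: listserv@listserv.example.edu
    Subject: (ignored)

    SUBSCRIBE SOME-LIST Jane Doe
    INDEX SOME-LIST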


This was my primary way of accessing the web in the early 2000s. Juno was a free dial-up email service, and that’s all I had access to.


Sadly enough, this URL is not accessible if you set your user-agent to the empty string.


I hate User-Agent strings, but given the current state of the Web, your request sticks out more if you omit the header entirely than if you copy a common one.

That's still no excuse for a webserver to deny requests based on the UA string (or the lack of one), but I doubt omitting it is doing you any good.


First I tried downloading it with wget, but it did not work. OK, maybe they block wget for some particular reason. Then I removed the wget user agent, and it did not work either, which is strange. Are you supposed to copy one of those behemoth UA strings that pretend to be every browser of the last 20 years at once?


They probably block commonly abused User-Agent strings such as wget's default, along with the empty string. A custom string may work, or just spoofing a browser.
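
If anyone wants to experiment, wget can override its default "Wget/<version>" string with -U / --user-agent. Whether this particular server accepts a browser string is a guess, and $URL here stands in for the page:

    # Replace wget's default User-Agent with a common browser string
    wget -U "Mozilla/5.0 (X11; Linux x86_64; rv:115.0) Gecko/20100101 Firefox/115.0" \
         -O page.html "$URL"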


curl works.
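
Which fits the name-based-blocklist theory: curl announces itself as "curl/<version>" by default, and it can spoof as well with -A if that ever stops working (again, $URL stands in for the page):

    curl -s -o page.html "$URL"                   # default UA: curl/<version>
    curl -s -A "Mozilla/5.0" -o page.html "$URL"  # spoof a browser instead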




