Hacker News
68k.news: A Netscape 1.1 makeover of Google News (68k.news)
404 points by gripfx 21 days ago | hide | past | favorite | 140 comments



I think my favorite part is just how short some of these articles really are once you remove all the nonsense and extra crap in the web pages.

Some articles are actually... 8 sentences. That is it. How on earth does it then take 10 seconds to scroll and parse all the fake inserts to finally realize that this is a poorly researched snippet masquerading as news...


A few years ago I realized that a huge chunk of news websites are mostly tweets coated in menus and [social share] buttons.

This part of the web makes me so jaded about 'progress' that I'm into woodworking now.


> I think my favorite part is just how short some of these articles are...

In an attempt to enjoy this effect more broadly, I have Reader Mode set to enabled by default on Safari mobile.

On Firefox desktop I often use the Reader View button on news stories. There is an extension to enable this by default, but I have a hard time trusting browser add-ons.


What do you think about add-ons recommended by Firefox, for example Tranquility Reader https://addons.mozilla.org/en-US/firefox/addon/tranquility-1... ? Their code is supposed to be checked by the Firefox team, so it's probably safe to use.


Oh wow, I did not even know you could enable it by default on Safari mobile. I really like this!


This is also a great way to work around many paywalls, as when reader mode is on by default, it kicks in before the JavaScript that throws up the paywall.


Chrome also has it under chrome://flags/#enable-reader-mode

It's pretty great.


> I have Reader Mode set to enabled by default on Safari mobile

I didn't know you could do that, thank you!


It’s reasonable to have short articles in a newspaper, where you want to fit in a bunch of short factual stories, or on a newswire where you just want to quickly send out some facts before competitors. In the former case you just put lots of stories on one page. In the latter case, it’s often expected to be short.


I think short articles are great! But it is crazy how much crap gets added to nice short articles on most sites.


I guess a lot of the BS filler text is added just so they can fit more ads around the text, huh? The goal for online publications isn't to inform you and save you time... It's to make you click on ads. They want you to see as many ads as possible, and to make sure that you stay on the page as long as possible. If the news article were just a one-paragraph snippet, you couldn't fit 8 ads on the page; it would look absolutely laughable.


A newspaper or news website would source most of their journalism from Reuters or Associated Press, and then fluff it up to fit their editorial stance.

The same articles would seem longer in print because they're formatted in such narrow columns, wrapping around images. There's some thought that goes into the layout.

Of course, the internet breaks that particular illusion. And I'm sure that if a marketing department could do to their printed paper what they do to their website, design and readability be damned, they would jump at the chance.


Where are you seeing the short version of the article? Each link takes me to the original.


The links try to parse out the text and show that, with an additional link to the original article (it seems like most links fail to parse, however).

A successful example:

http://68k.news/article.php?loc=US&a=https://news.google.com...


Nostalgia really kicked in here: seeing things like this for the first time, and feeling the unfurling of the future and a thousand new ideas in front of you, something so new and beyond all of your sci-fi expectations, and yet so real.

I feel so incredibly fortunate to have been old enough to see and understand the start of all of this, and later, to be a part of all of it.


I appreciate hard-hitting, important news like this: https://imgur.com/a/Yh0rZ12


I’m with you. As soon as that page loaded I had warm flashbacks to my first days exploring the early web with Netscape. Incredibly magical time.


I don’t think I had any feelings of nostalgia related to this at all, even though it must have been about when I started using the web. I guess I was still too young to get really attached to it.


When I looked at it before, it had a Netscape-gray background; now it's white (updated? different browser? experiment?), and so it doesn't capture that same feeling at all.


Great! For even better results, please set the background-color to `#C0C0C0` (the Netscape default; I'm not sure if this was also the default on Windows).

Compare this bookmarklet: https://www.masswerk.at/bookmarklets/netscapify/


Agreed. Blue text on white background is jarring. And I was wondering what the original Netscape Grey was!

Argh. I was hoping the "Godzilla vs Kong" reviews were going to be better. When will Hollywood learn the secret to a good kaiju film: fewer humans, more monsters ;)


Only true kaiju on the web: http://home.mcom.com/MCOM/images/tiles.gif ;-)


Back in 1995, this was it!


You can experience this on an (emulated) 68k Mac in your browser using Oldweb.Today: https://oldweb.today/?browser=ns3-mac#http://68k.news/


"Sorry, OldWeb.today can not run in this emulator in your current browser. Please try the latest version of Chrome or Firefox to use this emulator."

Funny that an emulated Mac hates Safari.


First impression: "Wow, this warms my heart."

Second impression: instinctively tries scrolling with trackpad _help why isn't it working_

This really made my day, thank you for sharing it.


Oh wow -- this takes me back to when I first experienced the graphical Internet on a Macintosh IIsi! Thanks!


My favorite part by far is when you click on a link you see just the plain text of the article sans distractions.

Edit: also it shows a few key news articles with related articles. This means I'm not infinitely scrolling which is nice.


>My favorite part by far is when you click on a link you see just the plain text of the article sans distractions.

You may also like http://lite.cnn.com/en

It's so refreshing to have pages load instantly. Websites get so bogged down with loading resources from 12 different places. It'd be nice if a static webpage was the default and every change that slowed down loading was explicitly laid out to stakeholders in terms of marginal load time and resources required.


What's sad about that page and the articles it links to is that it could be soooo much better designed and still be clean and fast.

The main list could have dates and times and categories, so it's not just a dump of text links.

Each article could have a reasonably sized image or two without compromising the load speed.

Finally, a single "sponsored by" link could be included in the page to provide revenue via advertising.

It's insane that media companies feel that their sites need to be a bloated mess or barely there, and nothing in between.

Regardless, the fact that the URL isn't served via HTTPS is an indicator to me that this is a forgotten service and will eventually disappear the next time CNN does an overhaul of its web servers.


I guess the moment you start adding those "features", the whole process starts again, and in six months you might end up where we are today if you let your feet (fingers) run (type) away with you.


Chrome has a "reader mode" semi-hidden feature which does this to any article on any site, and in my experience works perfectly 99% of the time. On mobile it's a huge saver, especially since Chrome by default doesn't have ad-block, and articles on mobile nowadays have become utterly unreadable with all the crap they throw at you.

This is another reason I like AMP in general, despite all of its issues, it generally is much cleaner [1] than the non-amp alternatives [2]

[1] https://i.imgur.com/qYd1mCX.png

[2] https://i.imgur.com/SwK6unL.png


How do I turn on reader mode for Chrome on Android? (Edit: Ah, seems to be the "simplified view" under accessibility settings.)


Thanks for the edit! I remember doing it way back when it was a flag, using chrome://flags, but nice to see it's now available through the options.


I had no idea this existed. I appreciate the link.


I work on something called https://txtify.it that you can prefix onto almost any article URL to get a plain text-only version, e.g.

https://txtify.it/https://www.nytimes.com/2021/03/29/nyregio...


Love it! It even produces content that supports reader mode (in Firefox, at least.)

Now how do I automate this? Maybe a simple bookmarklet?

If you paste:

    window.location.href = "https://txtify.it/" + document.URL;

into: https://caiorss.github.io/bookmarklet-maker/

Then drag the bookmarklet to the bookmarks bar, you've got a one-click textify option :)


Very nice! I should add a bookmarklet to the home page.


Cool project! My only complaint is that it defaults to Courier (monospaced), which is harder to read and takes more space. You might use Georgia, which is web-safe and quite legible. Or, of course, Times New Roman (just like the site in this post).

For reference:

https://practicaltypography.com/monospaced-fonts.html

https://practicaltypography.com/body-text.html


Since it is literally plain text (as opposed to HTML) it will default to the monospaced font set in your browser.


You're right, but I think this depends on the browser too. I actually pass a stylesheet in the HTTP response header to make text appear white on a black background. Firefox respects this, but Chrome doesn't (at least for plain text files):

  link: <https://txtify.it/dark.css>; rel="stylesheet"
Will have to test again when I get some time to see what the options are. Might be a bit of a hack as there really aren't any HTML elements to target, so it might be that Firefox applies the CSS after inserting the text into an HTML template.
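For anyone curious, that Link-header trick can be reproduced with just the Python standard library. A minimal sketch (the handler name, body text, and stylesheet URL usage are mine, not txtify.it's actual code):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class PlainTextHandler(BaseHTTPRequestHandler):
    """Serve text/plain with a stylesheet hint in the HTTP Link header."""

    def do_GET(self):
        body = b"Article text only, no markup.\n"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; charset=utf-8")
        # The Link header from the comment above; per that comment,
        # Firefox applies it to text/plain responses, Chrome does not.
        self.send_header("Link", '<https://txtify.it/dark.css>; rel="stylesheet"')
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the sketch quiet

# To run: HTTPServer(("127.0.0.1", 8000), PlainTextHandler).serve_forever()
```

Whether a given browser honors the header for text/plain is exactly the open question in the comment above.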


I'm curious where the data gets fetched from. The author mentions that Mozilla Readability and SimplePie are used.

Readability to parse the content, SimplePie to fetch the data (I assume). Data from RSS feeds?

In case you want to make something similar, I recently wrote a blog on where you could get news data for free [1]

(self-promo) I'd recommend taking a look at my Python package to mine news data from Google News [2]. Also, in 3 days we're releasing an absolutely free News API [3] that will support ~50-100k top stories per day.

[1] https://blog.newscatcherapi.com/an-ultimate-list-of-open-sou...

[2] https://github.com/kotartemiy/pygooglenews

[3] https://newscatcherapi.com/free-news-api


Reminds me of the light CNN version: http://lite.cnn.com/en


or NPR: https://text.npr.org/ which I like more b/c the targets of the links have a line width that I find easier to read.


Interview with the founder on the Register[0]

[0] https://www.theregister.com/2021/03/29/google_news_netscape_...


Thanks for the link. Technical part from that interview:

> On a technical level, the site obtains stories through the existing Google News RSS feed, which are then processed with some PHP trickery. "Google News has a very nice RSS feed, for each topic, language and country. So I thought I could connect to that feed, and write some code to simplify the result way down to extremely basic HTML, targeting only tags available in the HTML 2.0 specification from 1995," said Malseed.

> "So I used a PHP library called SimplePie to import the feed, and wrote some PHP code to simplify the results into a nice front page, using Netscape 2.0.2 on my 1989 Mac SE/30 to make sure it loaded fast and looked nice. The articles were a little more difficult, because they are on all sorts of different news sites with different formatting.

> "So I found that Mozilla has an open-source library called Readability, which is what powers Firefox's reader mode. I used the PHP port of this, and then wrote a proxy that renders articles through Readability, and then I added some code to strip the results down even further to extremely basic HTML."
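The pipeline Malseed describes (Google News RSS in, bare headlines out) is small enough to sketch in Python instead of PHP/SimplePie. The feed URL is the public Google News RSS endpoint mentioned elsewhere in the thread; the parsing half is split out so it can be exercised on any RSS 2.0 document:

```python
import urllib.request
import xml.etree.ElementTree as ET

# Public Google News RSS feed quoted elsewhere in this thread.
GOOGLE_NEWS_RSS = "https://news.google.com/rss?hl=en-US&gl=US&ceid=US:en"

def parse_titles(rss_xml, limit=10):
    # RSS 2.0 layout: <rss><channel><item><title>...</title></item>...
    root = ET.fromstring(rss_xml)
    return [item.findtext("title") for item in root.iterfind(".//item")][:limit]

def fetch_titles(url=GOOGLE_NEWS_RSS, limit=10):
    with urllib.request.urlopen(url) as resp:
        return parse_titles(resp.read(), limit)
```

The real site then renders these through Readability and strips the result to HTML 2.0 tags; this sketch stops at the headline list.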


This is similar to a site that I built! http://feather.news

Best viewed on mobile and you can optionally use a version without images by clicking the link at the top right of the page.


I like the layout of yours better. But I still like 68k's feature of giving you the readable version of the stories too.


I like it. I'd love to chat about newsapi.org

Let me know if you have some time


What a breath of fresh air. I forgot how human-friendly the internet was before ads invaded.


I know I'm the only one who's reading news on the Kindle Voyage, but I'm definitely adding this to my bookmarks list on the e-reader. Super cool!


This is beautiful. Everything should have a text mode like this.

I should make it an option for my own site, and I will! Thank you for the inspiration.


This is great. I have been looking for a "world news in the style of techmeme" that isn't the drudge report.


And this is how the web is supposed to be.


And it looks fine and is perfectly functional. Maybe just tweak the background.


Looks fine? It is a big blob of undistinguished text. If you squint your eyes, everything looks the same. The lack of column widths means your eyes have to do a lot of scanning, especially on a bigger/wider screen. The lack of color/font/size differentiation means there's almost no information hierarchy to it. While we all might complain about advertisements and loading times, I for one am very glad we have moved on so significantly.


It is indeed!


I didn't know the <small> tag was that old! I also thought a page from back then would be ASCII instead of UTF-8.

(Also, I thought every page from that era was required to have at least one <blink> tag, and possibly an "Under Construction" image.)


Where is the feed this is based upon from?

Google News RSS seems to be different and is full of AMP links:

https://news.google.com/rss?hl=en-US&gl=US&ceid=US:en


Regarding the AMP links, they could parse the results and try to replace them.
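A heuristic sketch of that de-AMPing idea; real AMP URLs vary a lot, so this only covers the common "amp." subdomain and "/amp" path-suffix patterns:

```python
from urllib.parse import urlsplit, urlunsplit

def de_amp(url):
    """Best-effort rewrite of an AMP URL back to a canonical-looking one."""
    parts = urlsplit(url)
    host = parts.netloc
    if host.startswith("amp."):
        host = host[len("amp."):]
    path = parts.path
    # Strip a trailing /amp or /amp/ path segment if present.
    for suffix in ("/amp", "/amp/"):
        if path.endswith(suffix):
            path = path[: -len(suffix)] or "/"
            break
    return urlunsplit((parts.scheme, host, path, parts.query, parts.fragment))
```

A robust version would instead fetch the AMP page and read its `<link rel="canonical">`, but that costs a request per article.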


I assembled a similar decruftifier for the Washington Post specifically, using html-xml-utils (https://www.w3.org/Tools/HTML-XML-utils) and some sed/awk to strip down to core article content & metadata (head, byline, dateline). The result was typically <5% of the original HTML.

I've come to realise that most online commercial publishing does not even use bold within body text, giving another filter trigger for stripping cruft.
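A rough Python stand-in for that html-xml-utils/sed/awk pipeline, keeping only the text inside <p> elements (a crude "core content" filter, not the commenter's actual script):

```python
from html.parser import HTMLParser

class ParagraphText(HTMLParser):
    """Collect only the character data that appears inside <p> elements."""

    def __init__(self):
        super().__init__()
        self.depth = 0       # how many open <p> elements we are inside
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag == "p":
            self.depth += 1

    def handle_endtag(self, tag):
        if tag == "p" and self.depth:
            self.depth -= 1

    def handle_data(self, data):
        if self.depth:
            self.chunks.append(data)

def paragraphs_only(html_text):
    parser = ParagraphText()
    parser.feed(html_text)
    return "".join(parser.chunks)
```

As the comment notes, since commercial body text rarely uses bold or other inline markup, even this naive filter recovers most of an article.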


I know this is supposed to be retro, but I always use NPR's text mode. No pics, just text; it's glorious.

https://text.npr.org/

Far easier to read since the length of the line is absolutely perfect. Pro tip: https://practicaltypography.com/line-length.html

That said, something is wrong with NPR: a bunch of Lorem Ipsum links :)


I would love it if this kicked off a "slow food" web movement where we all built a ton of services usable by retro hardware.


Comparing this to the normal Google News, we have a load time of 450ms vs 6500ms on a fairly beefy workstation. I have a new bookmark..


Reminds me of text only CNN: http://lite.cnn.com/en


Viewing the source of that webpage really takes you back. Plain old HTML. It's nostalgic beauty.

I recently fixed up an old 486 I purchased off eBay, but it was bittersweet when I managed to get it connected to the 'net. Most websites were inaccessible due to the lack of support for today's encryption protocols, and those that were accessible had numerous JavaScript issues.


I do appreciate some of it, but I don't miss this type of thing:

  <font size="5" color="#9400d3">
Though I understand why they're doing it in this case.


Definitely was a lot of work without CSS.

The <table> layouts are definitely not missed by me.


Were you just using the browser/OS it came with, I guess like Windows 3.1 or 95?

Given their well-publicized insistence on building for a ton of obscure arches, I'd expect you could run modern Debian on such a machine no problem, with a modern web browser. Might be a little slow, especially if you stick with the original disk, but should be perfectly usable.


> I'd expect you could run modern Debian on such a machine no problem...

Nope. Current builds of i386 Debian require a Pentium Pro or later -- I believe it's because they're compiled with the CMOV instruction, which wasn't present in the Pentium or earlier.


Ah, good point. Looks like Debian Jessie might have worked for it? In any case, distrowatch suggests a few others like Alpine and TinyCore that might have the proper support.


Yeah I had installed Windows 3.1 and tried IE and Netscape Navigator Gold.

Linux might be a whole other battle to get working but might be a fun project to attempt.


Try NetBSD with Lynx. OpenSSL will take several seconds, but it will work.


Except this bullshit:

    <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 2.0//EN">
    <meta http-equiv="Content-Type" content="text/html; charset=utf-8">

    <html>
Back in the day, you only needed:

    <html>


Nitpick 1: That DTD is for HTML 2.0, which was published in 1995. If that does not qualify as "back in the day", I don't know what does.

Nitpick 2: <meta> goes inside <html> (inside <head>, really).

Nitpick 3: The <meta> tag is only a band-aid for shitty webhosting where you cannot access the webserver config to make it send the correct Content-Type in the actual HTTP response headers. The modern <!DOCTYPE html> instead implies a default of UTF-8 which works well for most.


Nitpick nitpick: the html doctype doesn't imply UTF-8. Valid modern HTML documents must be encoded using UTF-8, but the standard also requires that the encoding be specified somehow.

> The Encoding standard requires use of the UTF-8 character encoding and requires use of the "utf-8" encoding label to identify it... If an HTML document does not start with a BOM, and its encoding is not explicitly given by Content-Type metadata, and the document is not an iframe srcdoc document, then the encoding must be specified using a meta element with a charset attribute or a meta element with an http-equiv attribute in the Encoding declaration state.

<https://html.spec.whatwg.org/multipage/semantics.html#charac...>


Don't you only need

    <!DOCTYPE html>
    <html>
these days? That's slightly worse but not terribly so IMO.


Oh I actually quoted the website's source. They have that DTD meta crap in there.

But I think you can just do <html> nowadays and it empirically just works. Seriously, screw the anti-DRY people that want me to put some !DOCTYPE or xmlns tags with some W3C links or some DTD nonsense inside ... I should only have to specify "html" exactly once, no more.

If I had designed the spec I would have just made it

    <html version="4.0">
    <html version="5.0">
    <html version="5.1">
Incredibly more readable, and memorizable. A markup language (literally), by virtue of being a markup language, should not be impossible to memorize. Making scary strings like "-//W3C///DTD" part of the spec is counterproductive.


You don't even need <html>. <https://html.spec.whatwg.org/multipage/semantics.html#the-ht...>

This is a valid HTML5 document:

    <!doctype html>
    <title>This is valid!</title>
    <p>Really, it's valid!
Paste it into the validator yourself if you don't believe me: <https://validator.w3.org/nu/#textarea>


That's SGML tag inference at work (theoretically at least, since browsers have HTML parsing rules hardcoded). SGML knows, by the DOCTYPE declaration, that the document must start with an "html" element, so it infers it if it isn't there. Next, by the content model declared for the "html" element (normally obtained via the ugly public identifier that sibling comments complain about), a "head" element is expected, so SGML infers it as well if it's declared omissible, and so on.


In the "old days", web pages were often just the bare content (no html, head, body containers, no DOCTYPE declaration). A few sites also featured just the body tag (and respective content) for setting the background attribute for the page background color.

E.g., this is the entire code of Netscape's first home page:

    <TITLE>Welcome to Mosaic Communications Corporation!</TITLE>
    <CENTER>
    <A HREF="MCOM/index2.html"><IMG SRC="MCOM/images/mcomwelcome1.gif" BORDER=1></A>
    <H3>
    <A HREF="MCOM/index2.html">Click on the Image or here to advance</A>
    </H3>
    </CENTER>
http://home.mcom.com/


I LOVE <center> and still use it sometimes because it's too goddamn complicated to center anything these days.

"margin-left:auto;margin-right:auto;" what the F???

"width:500px;margin-left:-250px;left:50%;" what the #$#$@?

Are they kidding?


Wow, I didn't actually know that. That feels...so dirty!


Also utf-8 (or any Unicode) wasn't supported by most (all?) Netscape versions. Falling back to default Latin1 works with English text though.



This is great! I so don't miss the err, personalization and targeted content. If you can track down RSS news sources I'd recommend having a look at Newsbeuter https://github.com/akrennmair/newsbeuter


Very retro with all those <font size="5"> tags, but that's the job of CSS :)

Edit: there was no CSS support in 1.1 :)


At the same time, it's interesting to see those tags used for what otherwise looks like a pretty un-styled page.

Like, part of the premise of CSS was progressive enhancement, where just the semantic structure alone would provide an adequate experience with however the browser might choose to render those elements by default. Basically my question is, if the font size tags were taken out and just bare h3/h4/p used instead, would that still render a usable page on Netscape 1.1? Could you then supply font overrides via a <style> tag in the header which could be applied by later browsers?

Obviously it would be a different kind of experiment as the result would no longer be identical across all the "supported" browsers, but might be an interesting comparison point.


Yep, CSS wasn't really supported anywhere until 1997. Most places were still using font tags and table-based layouts for quite a while after.


And in the 90s CSS was unreliable and often difficult to use. Internet Explorer made it difficult to go full CSS for a long time.


I still think CSS is difficult to use. Out of HTML/CSS/JS, I despise CSS the most.


It's what drove me out of wanting to work in Web authoring. I looked at CSS and literally decided "I am not going to learn this."


It has gotten a bit better. The early CSS era (floats, etc) was worse than tables, I thought.


Early CSS had a huge problem with this.

"Don't use tables, use CSS!" was a big message. But CSS's tools for tabular layout were extremely poor and difficult to use, leading to much frustration. It was a joke how hard it was to create a simple responsive three column layout in CSS, a thing easily accomplished with tables and very common on the web. Getting that three column layout right seemed like black magic in CSS1.


> "Don't use tables, use CSS!" was a big message.

It was, but in hindsight maybe the message needed to be elaborated more. Perhaps it should've been "Don't use tables for layout, use CSS!"

Besides that, CSS IMO accelerated more complex and visually pleasing websites, and arguably spurred on the Web 2.0 look. Unfortunately, because the message didn't specify the "for layout" bit, it took a while for many devs to unlearn that tables are bad – tables aren't bad, they should just be used for tabular data.


But this was exactly the problem. They said "don't use tables", but then didn't have anything that could do what you could do easily with tables. Instead you got many hours of arcane and confusing combinations of float: and clear: rules, especially if you were trying to avoid absolute positioning and wanted the page to be responsive.


Yep, it was absurd. I remember spending hours fighting with CSS to do something that would've taken 5 minutes with tables. Maybe I was just bad at it.


I feel quite the opposite now, a table feels like so much typing compared to a quick flexbox layout. Even when I legitimately need a table, I dread it.


If Flexbox had been in CSS from the start I think people would have a far more favorable view of CSS.


Also HTTP, since HTTPS was not supported at the time. What a way to commit to an idea!


It's committed to the idea so that it actually works on period machines. See /r/retrobattlestations announce thread: https://old.reddit.com/r/retrobattlestations/comments/meqlql...

It's surprisingly tractable to plug 90s machines into the internet via ethernet adapters or little serial gadgets that can do SLIP or pretend to be hayes modems, but the modern web full of crypto and execution environments that can bring modern computers to their knees is not kind to them.


Neat project. I use NetNewsWire for rss. It’s interesting that the format people prefer to consume news in versus the format that news is delivered in are now so different.

Newspapers went through the same thing. The older papers are all stories and were funded through the price of the paper, then ads invaded the margins.


> The older papers are all stories and were funded through the price of the paper, then ads invaded the margins.

I assure you 19th Century newspapers had advertisements.

For example:

https://chroniclingamerica.loc.gov/lccn/sn86072192/1897-08-0...

And an 18th Century newspaper with advertisements:

https://chroniclingamerica.loc.gov/lccn/sn83021183/1777-03-0...

How far back do you go to find newspapers with no advertisements?


You must now explain what you used to get such eye-blinkingly fast speed. Surely it's a new technology.


Based on the network request, it's a single 24.88KB HTML file hosted on nginx, with a cache HIT status and a current age of 725 seconds. There's no CSS, JS, etc. The HTML file is pure HTML from what I can tell, with no inline CSS/JS.

I achieved something similar for my site by putting the HTML file generated every few minutes on a single S3 bucket and putting CloudFlare in front with caching page rule on the HTML too (by default HTML is not cached by CF).


I did not know that Google News is not available in Spain: https://support.google.com/news/publisher-center/answer/9609...


Looks/works nice in lynx (text browser) as well, where google news itself does not. Bookmarked.


Works nicely in links2 as well...

https://en.wikipedia.org/wiki/Links_(web_browser)


This is perfect! I have news.google.com blocked on my PC because I'm trying to block a bad habit of idly typing it in and getting sucked into the void. Came to HN and still found a way to get the news drip without as much distraction. :)


This is the website we need for that 32" e-ink newspaper we're all planning


Needs a very light gray background.

To play devil's advocate, I feel like those serif fonts were easier to read on a low-resolution monitor because they were sharper, due to the pixels being very apparent.

Here, even on a 4K display, I find it difficult to read the headlines.


This is great! And retro, your channel seems cool. I think I have a 3com Audrey Ergo in the closet if my wife didn't toss it. If you're interested, I'll send it to you.


It would be great to get this working on each of our personalized Google News feeds. But I suspect it would require either a userscript stylesheet or Google sign-in (if that even gives you a personalized feed).


Subjective, but I wonder: if browsers had defaulted to a sans-serif font instead of serif, would people complain about "how bad this webpage looks"? I get the document origins, of course.


This is wonderful! As pages get more bloated and newer crypto is used for HTTPS, my old computers lose access to more and more of the web. Bookmarking this to browse from Mac OS 9 later today.


Now, here's a web page that NetSurf has no trouble with!


I like that it's much easier to not click on these.

Maybe it's just because the colour scheme somehow makes it less interesting.


I remember doing a news site back in 1994. This is exactly how it looked. Well before Netscape and MCC, though ;)


Any chance of providing an RSS feed? The plaintext nature of the actual articles is delightful.


This is the perfect way to deliver factual news. To the point, no fluff. No wasted time.


This looks almost like an RSS feed.


I was just the other day wishing for a no-nonsense headlines site like this. Thank you!


There are a few html mistakes - stray tags, etc. View source in Firefox colors them red.


Super cool. SE/30s are amazing machines. Does HN render on Netscape <= v4?



Wish the width was a little narrower, or fluid. Reading on a phone is kind of small.


God I love this. Pops up instantly, content is the only thing on the page.


Is the <meta> tag outside of <html> intentional? HTML 2.0 was way before my webdev days.


I'm going to hazard a guess that, because ancient browsers sometimes displayed HTML tags they didn't understand, the author has deployed a hack to ensure the correct character encoding is used on newer browsers without soiling the rendering on older systems.

Also, I was hoping that 68k.news supported HTTP/1.0, but it doesn't: it's a virtual host on the IP, so it needs the Host: header set, which is HTTP/1.1. That's a bit of a shame, as it means the original browsers such as Mosaic can't access it.
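For illustration, the difference comes down to a single header; a sketch of the two raw request forms (using 68k.news as the target host):

```python
# Why a name-based virtual host needs HTTP/1.1: the 1.0 request never
# names the site, so a server hosting several sites on one IP cannot
# tell which one you want.
request_http10 = b"GET / HTTP/1.0\r\n\r\n"

request_http11 = (
    b"GET / HTTP/1.1\r\n"
    b"Host: 68k.news\r\n"       # the part HTTP/1.0 browsers don't send
    b"Connection: close\r\n"
    b"\r\n"
)
```

Many later HTTP/1.0 clients did send Host as an extension, but truly period browsers like Mosaic predate it, which matches the observation above.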


I prefer this to the full-fat version anyway, even on modern CPUs. Bookmarked


This is great. Google News has been unavailable in my area for years.


It's so Netscape 1.1 it doesn't even use HTTPS. Love it.


Wow. Load times are incredible and no stupid node.js present


Love this. Loads much much faster than Google PWA page.


I like this so much more!


Not having HTTPS certificate is a nice touch :-)


looks good in lynx


eeew


I don't know if you can fix google news after google broke it themselves back in 2011/2012, but this comes close.


Considering the content, maybe it should be renamed "Google Trivia". It's like a bad version of Reddit with no comments for context/additional information.



