How and why I attempt to use Links as main browser (dataswamp.org)
289 points by lich-tex on July 21, 2020 | hide | past | favorite | 194 comments

I'm a computer programmer, lisper, emacser, vi-er, etc., and I desire every program that I use daily to be compiled from source so that I can dig in whenever I need to fix bugs, add features, or am just curious how something is done.

I also highly desire the ability to change the keybindings of any program I use to be what I want, generally following the VI model.

I also do e-waste collection and pride myself on being as fast or faster than others using 10+ year old computers (writing this on my main laptop - a Thinkpad T60).

So with that lead-in, a few months ago I switched to this laptop and had a glitch getting X to start so I decided to push the envelope as far as I can running on the framebuffer, hence "links -driver fb".
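For anyone wanting to try the same thing, a tiny launcher along these lines picks the framebuffer driver when no X server is running. This is only a sketch: the driver names ("fb", "x") follow links 2.x and should be verified against `links -help` on your build.

```shell
# Choose a links graphics-mode command line for the current environment.
# Driver names ("fb", "x") are the links 2.x ones; verify with `links -help`.
pick_links_cmd() {
    fb_dev="$1"   # framebuffer device, e.g. /dev/fb0 (empty if none)
    display="$2"  # value of $DISPLAY (empty when no X server is running)
    if [ -n "$fb_dev" ] && [ -z "$display" ]; then
        echo "links -g -driver fb"
    else
        echo "links -g -driver x"
    fi
}

pick_links_cmd "/dev/fb0" ""    # on the console: links -g -driver fb
pick_links_cmd "" ":0"          # under X:       links -g -driver x
```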

The web has gotten slower and slower over the years, and while there are some new kids on the block, such as the Next browser [1], that should give me what I need on X, links has been a win over and over, so far.

No code to share yet, but I finally got (for 95% of websites) the browser of my dreams.

Lightning fast. It will fetch and render almost any page in less than a second, but one thing it was missing was some customizability and expandability, hence the natural move to embed Guile. So I now have a lisp that is my browser, and I am in the process of exploring what that means. Full keyboard control, for everything. VI bindings. A cache from heaven that remembers everywhere I've been and never reloads unless I tell it. I can fly around history like you've never seen.

Anyhow, happy hacking!

[1] https://nyxt.atlas.engineer

Please, _please_ share that code.

The democratic web is dead, and navigating the siloed web is dreadful without one of the three blessed browsers. It sounds like you've created a ray of hope.

> A cache from heaven that remembers everywhere I've been and never reloads unless I tell it. I can fly around history like you've never seen.

This would be a killer feature for me, especially if the cache was fully searchable.

Back in 2001 there was a startup called IonKey which used early Lucene to search over all desktop documents - emails, Word docs, PDFs, browser cache... Didn't survive harsh post-dotcom crash.

I just installed https://www.lesbonscomptes.com/recoll/ today and it's exactly that. Super extensible and super fast once you've built an index.

I only had to add a global file exclusion rule .* which prevents the indexer from scanning dotfiles. (The 'interesting' dotfiles are in a repo anyway)
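For reference, that exclusion rule goes into recoll.conf. A sketch, assuming Recoll's documented `skippedNames` parameter; the `+` form appends to the built-in skip list rather than replacing it, but check the manual for your installed version:

```shell
# Append a dotfile exclusion to Recoll's config, then reindex.
# 'skippedNames+' appends to the default skip list rather than replacing it
# (syntax per the Recoll manual; verify against your installed version).
conf_dir="${RECOLL_CONFDIR:-$HOME/.recoll}"
mkdir -p "$conf_dir"
printf 'skippedNames+ = .*\n' >> "$conf_dir/recoll.conf"
# recollindex        # then rebuild the index
```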

> No code to share yet, but I finally got (for 95% of websites) the browser of my dreams.

> Lightning fast. It will fetch and render almost any page in less than a second, but one thing it was missing was some customizability and expandability, hence the natural move to embed Guile. So I now have a lisp that is my browser, and I am in the process of exploring what that means. Full keyboard control, for everything. VI bindings. A cache from heaven that remembers everywhere I've been and never reloads unless I tell it. I can fly around history like you've never seen.

Why do you tease us?

More seriously, I'll be interested when/if you do have code to share - this sounds great.

Off topic here but this line:

> I desire every program that I use daily to be compiled from source so that I can dig in whenever I need to fix bugs, add features

For those who alter software they didn't write, how do you maintain those changes? I assume pull requests for bug fixes to the maintainer's repo, but what about mods? Do you run a diff during every update?

Links is relatively small and does not change that often. I just save patches I make for different versions.

I am not really keen on software that is 1. large and 2. changing frequently. For example, there are no "updates" when using links. I read the links changelog and do a diff before I decide to upgrade to the latest version. Sometimes I will continue to use older versions. Depends on the changes.

In general, if the software is relatively small and more or less "finished", then, for me, it is more amenable to making changes. When I look at the software I use, it appears I consciously try to choose software that meet those criteria; the most favoured programs all fit that description.
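The save-a-patch workflow described above, demonstrated on a toy source tree; the directory and file names here are placeholders, not the real links release layout:

```shell
# Toy demo of keeping local modifications as a patch across upgrades.
work=$(mktemp -d) && cd "$work"

# A "pristine" tree plus a locally modified copy.
mkdir links-old
echo 'default keybindings' > links-old/config.h
cp -r links-old links-mine
echo 'vi keybindings' > links-mine/config.h

# Save the local changes as a patch...
# (diff exits 1 when files differ, so tolerate that status)
diff -ruN links-old links-mine > my-changes.patch || true

# ...and re-apply them to the next release (here just a fresh copy).
cp -r links-old links-next
patch -d links-next -p1 < my-changes.patch
```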

Patches. That approach works for any software stable enough to be used daily.

Nyxt actually looks very nice. Very nice, indeed.

I really like the navigational features, because they seem to be both memorable and predictable based on the layout that you're seeing (whereas others like vimb are totally unpredictable in their behaviour).

When it comes to making the web usable again, with less cache busting being enforced on the client's browser, I guess we share similar goals.

I'm currently trying to solve it via peer-to-peer services and the idea of trusted peers - that also function as caches in the local network and e.g. share download streams with each other once a huge file is being downloaded by a trusted peer. [1]

Personally, though I've been using VIM for as long as I can code, I would not agree that we need another browser that's keyboard-navigated.

For me, the understanding of websites and the semantic automation aspect is far more important so my UI/UX ideas are different. I'm trying to automate by recording user interactions rather than automate by keyboard shortcuts. The underlying idea is that any non-programming user can automate their own actions on the web.

[1] https://github.com/cookiengineer/stealth (currently a PWA with a local node.js process that does all the heavy work)

I weirdly love staying in console in framebuffer. So much more pure and direct than running X and then running a terminal app inside X. I always found the framebuffer resolution pleasing to my eyes.

There's apparently an old version of Firefox that was able to run in the framebuffer at one point.

The links browser with the graphics option is apparently a more current and robust way of getting a browser in the framebuffer.

Super interested in hearing more about what you're using for a framebuffer browser!

Some minimal/framebuffer/cli browser links from my bookmarks:

- https://www.brow.sh is a cli (not framebuffer) browser based on a Firefox plugin

- https://www.uzbl.org is a set of python scripts implementing a browser

- https://github.com/balenalabs/balena-wpe is WPEWebkit with framebuffer output

- https://github.com/TOLDOTECHNIK/buildroot-webkit is another WPEWebkit, but I'm not sure if it's got framebuffer support or not

linux console user here too :-).

w3m works really great with the framebuffer. imho it renders webpages in the cleanest way, compared to all other text browsers. plus it has default vim-like keybindings.

I’m several MacBooks deep at this point but my last non-Mac was a Thinkpad T61. I had a T60 before that. I’m not sure what was better, my T60 or my 2014 RMBP. Not sure what I will buy next.

I hear good things about the X1 Carbon. They seem to be plentiful.

This is impressive but damn it makes me feel stupid. I never felt the need for any of these features, and I'm completely happy with almost vanilla firefox with tree style tabs. I feel like I'm missing something.

It's about limitations breeding necessity. I continually limit my computer's capabilities primarily to enable innovation. So imagine yourself required to use an older computer on a spotty internet connection (off for days), and you'll see the value in some of these things. That vanilla firefox doesn't seem so snappy any longer and is almost completely useless without a direct and fast internet connection. Links will start to look like a game-changer, and once you get a searchable offline cache of all your web explorations you'll never want to go back to the old world of complete dependence on others for your daily bread.

A quick trip down the rabbit hole (and my package manager) after reading this article revealed that the author of links has also produced links2 and xlinks2, which support very limited formatting (like showing text in a few different sizes). It also has image support which can be toggled via a hotkey. I found the browsing experience with xlinks2 to be more pleasant than with regular links, and like with links, every page loads almost instantly.

Never mind the scripting and frugality, I'm quite interested in the cache setup. Do you get any notifications that the current state isn't what's displayed, similar to what you get in text editor buffers?

The web was originally hyperlinked documents glued together with URLs. The browser has turned into a platform for apps that use a document paradigm for the UI... complete with built-in database, graphics, and deep OS integration. It's been a huge leap in reducing the cost to build, distribute, learn, and use apps. The original use case of just documents is probably approaching being just an edge case.

I don't think so; I see most people reading documents. Now those documents might have lots of spyware, adware, scroll hijacks, etc. embedded in them using JS, but at heart they are documents. For example, a Facebook, Twitter, or Instagram page is more like a doc than an app for the most part. The internet is people consuming content. Even web "apps" served using Rails can happily work like this for the most part, with the occasional form post, for example Hacker News, banking, email. I am finding old school web apps much nicer to use, honestly. They work fast on modern connections and hardware.

FWIW, Facebook works great in Links. https://m.facebook.com

There's also a lite version of Facebook for those who want Messenger without an app.


Also https://www.messenger.com/ for Facebook Messenger without the app or the Facebook.

If you have 'mobile' or 'tablet' in your useragent this URL only offers a link to download the app.

However, for desktop it's great - I use it (and web.whatsapp.com) with Chromes --app= parameter to emulate the app experience.
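That setup can also be turned into a clickable launcher on Linux desktops. A sketch using the freedesktop .desktop format; the browser binary name (`google-chrome`) is an assumption and may need adjusting (e.g. to `chromium`):

```shell
# Create a .desktop launcher that opens Messenger as a standalone window
# via Chrome's --app flag. The binary name is an assumption; adjust as needed.
app_dir="$HOME/.local/share/applications"
mkdir -p "$app_dir"
cat > "$app_dir/messenger.desktop" <<'EOF'
[Desktop Entry]
Type=Application
Name=Messenger
Exec=google-chrome --app=https://www.messenger.com/
EOF
```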

I don't know, I'm still vaguely optimistic that people will realise that breaking the HTML/HTTP document model is causing more issues than it solves. Although we are going to need to address the issue of rendering dynamic content, since as of now HTML allows no real separation between content and representation. This frequently means that even if the content can be read from an HTTP endpoint then it still can't make it into the page without large blobs of JavaScript, just because the way the data is represented is different from the way it's read.

> I don't know, I'm still vaguely optimistic that people will realise that breaking the HTML/HTTP document model is causing more issues than it solves

What issues does it cause?

You can still view standards-compliant circa-1993 html files in any modern browser, without any issues.

What makes you so sure it is causing more issues than it solves? We can do a LOT of things in the browser that we can't do with the pure html document model.

More isn't necessarily better. I'd rather have a simpler and more restrictive model with fewer performance, accessibility, maintainability, and security issues.

This is a feature request relegated to HN. No one else wants this.

We have marketing to thank for that. Consumers don't care about performance because they're trained to buy $800 devices every year or two. They also don't care about security because being hacked is seen as both inevitable and no big deal. And why should they care about accessibility? Most people don't need that at all.

It's not that people don't care, it's that in their eyes these are reasonable trade-offs for the rich capabilities offered by things like google maps, youtube, gmail etc.

I'm all for getting rid of some of the cruft that has developed over the years, but let's not forget the literally world-changing utility that is enabled by the modern web.

Well, the fact that people are gradually moving towards REST APIs (sometimes even on the public-facing side) at least suggests that some people clearly think that breaking the HTTP model was a bad idea. And I'm seeing more and more people who get annoyed with bloated HTML pages that don't work without huge amounts of javascript and which break normal things like history or sometimes even scrolling.

Unlike the post I was replying to I'm not so much complaining about webpages that mimic (or are) standalone applications, I'm mostly complaining about webpages that have effectively built their own browser just to display some HTML documents.

Sure some stuff can't be done with a pure HTML document model (at least not without some kind of black-box html element), but plenty of stuff is just displaying some representation of content and (optionally) allowing people to manipulate it which really shouldn't require anywhere near the amount of effort people sometimes throw at it.

> It's been huge leap in reducing cost to build, distribute, learn and use apps.

Citation needed.

20 years ago you could make a commercial app with a GUI in no time with things like Delphi or Visual Basic.

Now to do the same we have to learn several standards and languages and libs/frameworks and build tools and server admin tools and...

I think the key differentiator is in distribution. If you build a web app it's immediately available to a sizable proportion of the human population (not that they'll necessarily use it). Getting similar reach with something offline is much harder.

As you say it's still not trivial (legacy browsers, funny standards, technical challenges of serving loads of users, etc.), but I'd argue it is substantially easier than the challenges involved in making a program written in Visual Basic available to billions of people.

People back then downloaded the program and ran it. Online distribution, web browsing, etc. was already a very common thing 20 years ago, especially in certain countries and businesses.

And publishers were readily available too, like for any other physical item.

If something was wanted, it did not matter, it would sell and distribute itself. Take a look at things like Lotus, Office, Windows, Doom...

I strongly prefer the native, compiled software of the old days, and dislike webapps. There's only a handful of webapps that I find useful and usable, while the majority feels unresponsive, resource hungry, and generally a step back from late-90s software.

At the same time, I absolutely think it's the online distribution of webapps that is the killer feature. People only need a URL. There's no download and install process (and the less tech-literate users often had difficulties with that), and perhaps even more importantly, there's no patching. As a developer, you know that all your users are running the latest version you got deployed. You don't have version 1.0, 1.01 and 1.12 floating around the Web as downloads, you don't have to beg people to download and install the patches, you just deploy to the production server and that's an automatic update for everyone.

That is IMO why browsers won as an application platform. In my very limited experience coding for the web, it's a remarkably bad application platform. You don't have many useful primitives. You don't really have a choice of language. You're beating protocols meant for documents into submission so they become useful for apps. It's more difficult to create a UI than it was in Visual Basic, Delphi or Borland C++, and so on. But for all of that, you get an application that someone can start using after clicking one link, and that will transparently auto-update.

With Stores those points are solved. Prominent ones like Valve, Apple, Google, Microsoft etc offer automatic updates, URL discovery, one-click install and stuff like that. They even have extra benefits, like reviews from actual users vs marketing landing pages of web software.

Browsers have "won" for other reasons: licensing control, unavoidable analytics, subscription mechanics... All that combined is a dream for a publisher.

I think you're getting ahead of your point, here.

> With Stores those points are solved. Prominent ones like Valve, Apple, Google, Microsoft etc...

didn't exist 20 years ago...

> Browsers have "won" for other reasons: licensing control, unavoidable analytics, subscription mechanics...

And those stores (and the native app platforms they reinforce) are what allow all these things in native apps, and make them quite a bit harder to block than in web apps.

> didn't exist 20 years ago...

So what? I haven't said anything about that.

As a data point, Steam (as a store) was already there 15 years ago.

> And those stores (..) is what allows all these things in native apps

Stores don't change what a native application can or cannot do.

>> didn't exist 20 years ago...

> So what? I haven't said anything about that.

If the old native apps were the gold standard, modern distribution wasn't part of that. If we include modern distribution, then all the supposed advantages you cite for browsers "winning" apply just as well to native apps. And yet.

> Stores don't change what a native application can or cannot do.

and yet all those stores provide licensing control (and DRM), analytics, and subscription mechanics, often unavoidably (Apple) or unavoidably in practice (Google and until very recently, Steam).

I mean, websites (and apps) are cross platform. That fact alone reduces a ton of development time and mental overhead.

You could do cross-platform UI with Java. It was slow, ugly, and poorly integrated with the rest of the OS unless you put in extra effort for each platform... Just like applications running in the browser today.

I struggle to conceive an alternative. I don't want to download a native app every time I want to access my bank account or browse Twitter, for example. Web applications, despite their glaringly obvious flaws, are undeniably convenient.

I disagree it could ever be just an edge case because the use case hasn't disappeared and will likely never disappear. Blog posts come to mind as an obvious example, but also encyclopedias, research... Anything that simply needs to transfer nearly pure information instead of providing interaction.

Of course the app use case has grown tremendously and is probably the more important one for casual users which are growing in number in relative terms, so I understand where the sentiment comes from.

One could say that even in many of those cases the original concept of a document does not really apply. Blogs have comment sections, wikipedia allows you to modify the document itself, and so on.

Not to say that these are less of a document, just to say that most use cases are still moving toward using the web as an application delivery platform, even if that application's purpose is to show documents.

Yet many of these use cases can be delivered to the end user as a document. Consider Hacker News. I am able to browse the site and submit comments in web browsers that do not support JS. (This comment is being submitted by one such web browser.)

This approach does not work for everything, nor does it work for everyone. Someone creating a blog post is probably going to prefer the use of JS to make their browser work more like a word processor. It can be done without JS, but that typically involves some form of markup that only a handful will enjoy. You can offer a map service without making it behave like an app, but most people will find a more interactive user interface more efficient. You certainly aren't going to be able to create emulators of vintage computers without JS.

That being said, we can do an awful lot in a useful way while serving up static documents.

My separation of document/non-document was not about javascript, it was about the type of functionality the site offers.

I would call a site like https://danluu.com/ document focused, while hacker news is focused on being a platform for conversation and a ranked news feed.

If you were to download danluu.com and browse it locally you wouldn't miss much, while if you were to do the same for hacker news you would only get one part of the offered experience.

Javascript enables this difference to grow to a much greater extent, but to me it is not what defines it.

I don't consider having the ability to edit a document collaboratively contradictory to the document use case. Sure, the editing plumbing is an app, but the core of it is still a document. Contrast this to something like a game or a web shop, which gravitate much more distinctly towards the app use case. Someone else mentioned how the web was envisioned to be editable from the start.

Thinking about this - at the start, the web was intended to be editable as well as browsable. The very first browser, WorldWideWeb, was also a web editor.

Do you have an example of how a web app has produced a product competitive in quality and functionality with a native offering? Office 365 is the only example I can think of. Everything else is niche SaaS (or ad-paid) without clearly established desktop patterns to compete with in the first place.

Meanwhile, it’s an absolute joy to use as a document browser, especially without JavaScript to mess up basic expected native behavior. I don’t get why there isn’t an “immediate mode” for application-style websites, tbh, that bypasses the DOM to provide more scalable widget layout, rendering, and interaction. I’ve written about a half-dozen “scrollable infinite list” implementations and I’ve never lost more sleep than doing this with the DOM. Probably a major reason why software is slower than ever. Hell, I just filed a bug with Patreon today complaining about how their messaging feature is so slow that, in combination with a double post bug, I ended up harassing a creator with four copies of the same message.

I’d buy this thesis that the browser-as-an-app-platform is a win if people could develop their software with a comparable degree of quality, responsiveness, and resource usage. Java managed this better like 20 years ago even if it did look and behave like ass by default. Instead you now need JavaScript to read basic newspaper articles, completely obscuring the internet’s content to everyone but google crawlers and WebKit/gecko users.

What is deep OS integration in browsers?

First few things that come to mind:

  - Access to Camera & Microphone
  - Access to USB devices
  - Push Notifications
  - Geolocation
  - Bluetooth
"deep" is probably the subjective term here.

Printer dialogs in browser [_we hates it_].

Access to raw USB devices is a good example I suppose.

Oh sweet. What could possibly go wrong?

Not much, really. Unless the device has been created to work with WebUSB, you first need to manually add a udev rule (on Linux) or replace the driver with WinUSB (on Windows) before you can access it through a browser. Even then, the user needs to select the device from a list, like you would when giving a site access to your camera.

Right, why would we want applications to be able to use hardware anyway?

Does Chrome OS qualify?

A little offtopic, but interesting nonetheless.

Surfraw (Shell Users Revolutionary Front Rage Against the Web) is a free public domain POSIX-compliant (i.e. meant for Linux, FreeBSD etc.) command-line shell program for interfacing with a number of web-based search engines. It was created in July 2000 by Julian Assange.


Your link is a fascinating bit of history and perspective.

Thanks for including this here.

No mention of Browsh yet. It is a neat hack bringing the capabilities of modern browsers to command line browsers. https://www.brow.sh/docs/introduction/

> Browsh is a purely text-based browser that can run in most TTY terminal environments and in any browser. The terminal client is currently more advanced than the browser client.

> The browser client, somewhat confusingly, renders simple HTML or plain text that itself was parsed by Browsh running inside another browser. The point being that the HTML or text that Browsh outputs is extremely lightweight. As of writing in 2018, the average website requires downloading around 3MB and making over 100 individual HTTP requests. Browsh will turn this into around 15kb and 2 HTTP requests - 1 for the HTML/text and the other for the favicon.

I checked it out. It is amazing. However, am I correct that it's still mostly driven by mouse input? I couldn't find any other keybindings to navigate a website.

Yes. It's a Firefox plugin.

+1 - Browsh is one of the most amazing pieces of software I've come across in the past few years.

I've found that using Browsh together with mosh and tmux, you can get a surprisingly functional remote desktop experience. I've found it especially handy in cases when "normal" remote desktop is too slow, e.g. when tethered to mobile data connection or using an underpowered client device.

Someone else achieved a similar effect with readability-cli, which uses Firefox's Reader Mode library to pull the content text out of an article and output it into your terminal:


A different but related approach is to use Safari’s Reader by default on all sites. You can then disable Reader site by site if needed.

I remember when Arc90 first unveiled Readability (now "Reader Mode" in several browsers), folks were ostensibly right on board. One Mozillian praised it enthusiastically on her blog. I pointed out that the problem Readability solved was directly attributable to the lack of empathy by web site operators for their visitors—choosing instead to prioritize the operator's "expression" over the visitors' needs and best interests. It should go without saying that the template for her own blog hardly let it stand as an example of a minimalist jewel of legibility.

I'm also surprised at how poorly Reader Mode fares with "pages" served as text/plain. Is there even a case to be made against text/plain documents being shown with Reader Mode enabled by default if the heuristics can give it a high enough confidence score (with an opt-out escape hatch back to tiny monospace black-on-white for whomever wants it)? Eventually, we could do the same for very simple HTML pages like those found on cr.yp.to or danluu.com. I'd wager we could eliminate a huge part of the "Website Obesity Crisis" if unstyled pages were attractive by default.

True plain text -- as opposed to something like Markdown or another "plain text markup" format -- is hard for a reader mode, because you're going to have to put some effort into deducing the structure of whatever document you're looking at and that could be highly idiosyncratic from author to author. I don't think that's necessarily a reason not to do it, of course, but I can imagine it's why it's not high priority -- reader mode is already having to deal with all the ways modern web sites have found to make HTML highly idiosyncratic.

Does it use JavaScript etc?

> CSS First, I want to focus on ‘destruction’ of CSS. As Links does not support modern CSS it renders most of the internet as-is, and will only contain images (on which I will write later). CSS causes the internet to become a baroque set of arbitrary design decisions, and does not contribute positively to the general experience. Links (after 2.19) allows me to pick my own font, my own background/foreground/url color. Thus, I have a uniform experience. In that I already visibly save time/energy/brain processing power, etc.

This is kind of like the philosophy behind Soylent, but for browsing the web, rather than sustenance.

And most people aren't keen on eating Soylent for every meal...

The 95% as good version of this, for most people, is just to disable JavaScript, then set the same colors & font for every website.

Links is unfortunately not enough for the modern web. I have installed links, dillo, w3m, and netsurf on my device and I occasionally use them, but the modern web is moving away from these browsers.

When I was doing web dev I'd always test with Links as one of my accessibility checks. Presumably that's common?

I can't tell if you're serious (about it being common - not that you use it), but I've never heard of anyone doing it and I'd bet less than 0.1% of web devs do this.

And as a good friend of a blind person, this is a tragic shame.

Does your friend use Links (or lynx/others)? I thought most visually impaired people use a normal browser combined with a screen reader (NVDA or Voiceover).

Of course, designing with accessibility in mind is important, but I would argue supporting Links isn't.

But if your site supports Links then it’s almost definitely going to work with screen readers and vice versa.

That’s not at all true. Links doesn’t care if a text field is properly labeled and a screen reader user with a modern browser can happily use a site built using a front-end framework if the developer has taken proper steps (e.g. focus management on “page” transitions, used ARIA live regions to announce dynamic changes).

I did it.

Many sites are not even tested on firefox

Many sites are seemingly not tested at all. I recently tried to open a concert site and couldn't get any details on a concert, because their PDF viewer was buggy; I tried on both Firefox and Chrome using three different computers, as well as my phone. I wish you could just use the browser's built-in PDF viewer in an iframe (or better yet, not use PDFs in the browser).

You can just embed the browser's pdf viewer... It doesn't work on mobile browsers generally, but with some hacking you can get it to fall back to a download link.

I get the feeling on some sites that the end goal was to have a site, not necessarily a working one.

It was always hard for me to gauge how much testing other professionals do; presumably all professionals test on Chrome, Edge, IE, Safari, and Firefox as a minimum? It's so easy nowadays with web services to at least check the first view.

Most apps are interactive now and are hard to test on multiple platforms (i.e. it takes time). Even youtube has some performance issues on different browsers.

YouTube is made by a browser maker, i.e. it has a motive to make non-Chrome browsers look worse. E.g. when chromium-based Edge came out, YT served it an old version of itself for no apparent reason.

Within the past week, I wanted to check out how YouTube handles graceful degradation when browsed with something like NetSurf. Spoiler alert: it just doesn't. It wasn't that long ago that such a cavalier attitude would have been unacceptable, if not unthinkable. See https://blog.chriszacharias.com/a-conspiracy-to-kill-ie6.

I can't understand this.

I use firefox as my daily driver, but I still test in chrome and links.

I also need to test on IE (since our biggest client uses it) so I have to spin up a Windows 10 VM and test it that way. Absolutely worth the time, time spent increases exponentially if a bug is found later in the process.

> I also need to test on IE

How unpleasant. Have you found that the W10 VM takes way more disk space than seems reasonable? I had to fire one up the other day, and between the VM image and the VM it took 50ish gigs of space iirc.

I check some sites I build in `links2` for accessibility from both a latency and capability perspective. It's rather interesting to see things from a blind person's perspective, though for many companies I'd imagine the cost not worth the revenue.

It'd be interesting to do a market research study on what the impaired population for a given vertical is, and put some data-driven arguments behind adding support for accessibility.

Honestly, doing that would probably improve the UX for people with normal eyesight as well, because it'd force you to think about document layout and component interaction carefully.

I think of it as almost a feature, because quality of website correlates with quality of content.

Yeah but what does the "modern web" really get you anyway? I use the web to get information, you don't need most of the features of the "modern web" to get that same information.

In a lot of cases probably: a job

Moving away like a meteor escaping the solar system in a hyperbolic trajectory.

I use eww in emacs for a lot of browsing. It copes extremely well with the modern web.

Same here. If Emacs is your operating system, then it's such a natural extension. And if a site doesn't work in Eww, then I consider it shoddy craftsmanship and refuse to open it in Firefox - so it's an excellent tool to stop procrastination! Pity HN works fine.

That mentality is so funny to me. I felt the same regret with Steam Proton suddenly allowing me to procrastinate with games I hadn't been able to play before.

Well then! I just fired up eww for the very first time, and I'm apparently having a little bit of trouble with posting this comment. The text field seems to be struggling a little bit, but otherwise I'm quite impressed with how well it seems to handle HN.

I find eww to be slow on my netbook (20 or 30 sec to return ddg.gg search results vs. 1 second for links).

Compile your own emacs for your particular processor using all the available optimizations and you might be running much faster than your OS's emacs. You can also opt for other speedups and enhancements which you won't find in the normal generic emacs.

It's even possible that if you're on an old netbook, your emacs was built assuming optimizations which don't exist on your machine, causing all kinds of extra processing. Who knows. I also don't use GTK at all; that alone is a massive speed-up.
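If you want to try this, a build along those lines might look like the sketch below. The version number is a placeholder, and the flag choices (native CPU tuning, the Lucid X toolkit instead of GTK) are the usual suspects; check `./configure --help` for your Emacs version before copying any of it.

```shell
# Illustrative Emacs source build tuned for the local CPU, without GTK.
# The tarball name is a placeholder; adjust for whatever version you download.
tar xf emacs-27.1.tar.xz
cd emacs-27.1
./configure CFLAGS="-O2 -march=native" \
            --with-x-toolkit=lucid \
            --without-gconf --without-gsettings
make -j"$(nproc)"
sudo make install
```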

I see people have proposed either Safari's or Firefox' Reader modes as a stop-gap.

I know at some point Firefox offered an ability to define custom CSS to use for all the pages, but I guess that's hidden underneath some about:config options today — I can only see the option to disallow use of custom fonts by a web page. I would like to see someone implement a bare-bones CSS for the modern web that's easy to customize using these browser features.

It seems what I am thinking about is userContent.css: http://kb.mozillazine.org/index.php?title=UserContent.css&pr...

Yes, being able to specify a user stylesheet was common in early browsers. Now there are extensions to provide a GUI and to manage different styles for different domains.

It was always somewhat difficult to maintain custom styles beyond things browsers still let you set like fonts but it’s harder now because it’s so common for developers to not use appropriate semantic HTML.

It's funny you call that "early browsers": from the perspective of today, you are absolutely right! I just remember CSS getting introduced (was it late 90s?) and think of things like Netscape 4.0 as "middle age browsers" (with 3.0 ending the early age — I am sure others would call that a middle age browser too, but I did not get to experience Mosaic and such), but I think I need to recalibrate my sense of history here :)

I wish someone would write the "missing preferences panel" for Firefox.

It'd be a nightmare to maintain, as Firefox devs inexplicably and randomly rename config names, sometimes modify their purpose, or just plain remove them.

Safari still has this in its preferences, surprisingly.

I really wish there was a text-only browser that would render the web similarly to Firefox's reading mode. All of lynx, links, and elinks are not very user-friendly and a bit ugly, alas. I hear some of them have a Vim mode for navigation, but I did not manage to use it reliably either.

Yes, they are ugly because they don't understand the modern web.

Firefox reading mode is pretty because it understands the modern web, and is making opinionated choices about what to display to you.

They are different design goals.

It's not read-mode-level, but I've always liked Dillo (https://www.dillo.org/) for its sheer speed with acceptable design. It's only rendering HTML with CSS, and that is enough for a vast majority of cases.

You may also want to try out NetSurf; it is another lightweight browser, but I think it has better support for HTML+CSS than Dillo.

I really like Dillo, it's my main browser on an old computer I still use. It feels very fast. However, it has some issues with SSL. Can you click through DuckDuckGo results, for instance?

I also miss a slightly more useable interface for bookmarks. And touch compatibility (to use it on my PinePhone).

I really like the js-free experience. I've also used elinks in the past, zimbra works quite well with it, and pressing F4 to edit my emails in vim is a breeze :)

Compile dillo from mercurial:

        hg clone hg.dillo.org/dillo dillo
        cd dillo
        ./configure --enable-ipv6 --enable-ssl
        make
        sudo make install

and then edit ~/.dillo/dillorc so it looks like:

        .domain.foo ACCEPT
        .domain.foo ACCEPT_SESSION

A dozen or so years ago, w3m used to be cream of the crop (as far as JS support went, at least). What happened to it?

It's still doing good: https://github.com/tats/w3m

I am very happy using w3m every day. Just occasionally there is a need to use a graphical browser.

There is no JS support though. Don't think there ever was.

w3m supported tables and frames, but I'm pretty sure it never supported JS out of the box. There was an experimental w3m-js extension, but I don't think it ever saw much development: the most recent snapshot of that page [1] is from ~2010, and it links to patches from 2003.

Elinks supposedly had it, but you have to compile it in yourself and I'm not sure how it fares these days. Edbrowse has decent support for it, but its rendering is rudimentary and not meant for sighted people (e.g., no color, and it doesn't bother aligning table cells).

[1] https://web.archive.org/web/20100504081232/http://abe.nwr.jp...

Yeah I remember w3m being really good. Looking at the sourceforge page, it seems like development stopped in 2012.

Me too. I thought it could render all the elements, since it knows their size and layout, but just not retrieve the actual data inside unless clicked or enabled. A few rules would probably suffice to keep text content and necessary interactive items visible.

I was puzzled throughout why "a uniform experience" is something desirable. Totally puzzled.

It reminded me of what McDonald's "restaurants" try to offer. I put "restaurants" in quotes because no one thinks of them as proper restaurants. Something about the uniform experience, maybe? I guess before that, every site had its own unique menu and style that took much longer to serve.

Does the author also prefer talking to people who wear face-masks? Do they shun syntax highlighting? Why take all the fun out of life? Why live like a Unix tool, taking in a plain text stream?

> I was puzzled throughout why "a uniform experience" is something desirable.

Because using a very consistent UI (e.g. everything on terminals) takes less cognitive load.

It is known that having to continuously switch your vision between different fonts, font sizes, colors, and other visual patterns across different applications is more tiring.

It's one of the reasons for having an extremely consistent style on aircraft dashboards and similar.

I've noticed the difference myself many times when spending a long day on a bunch of uncluttered terminals vs. a heterogeneous mix.

Furthermore, using a mouse requires a continuous feedback loop between hand and eye to aim at buttons. You don't need that on a terminal.

When doing "change management" ops in Amazon the first step was always to unclutter the desktop.

I don't go to most websites to be amazed by their looks and usability. If one is better in these regards, then I'd prefer every website to enjoy these improvements.

A proper restaurant analogy would be if each and every restaurant reinvented the way to put food into your mouth. Sure, it might be funny once in a while, but currently, when pushing the front door, you have no idea where they'll put your fork, if you will have a fork, if you'll be fed Modern Times-style, https://xkcd.com/1293/ style, if you'll have to inhale your soup through the nose, or hunt for your food.

You might see it as "the fun of life" if that was common practice. But a standard interface (UX) allows one to focus on the important stuff (namely, enjoying the food: most Asian restaurants I know offer forks as well as chopsticks). Important stuff here would be the piece of info you came for. Be it an article, a picture, some data, etc.

For instance, do you enjoy the "creativity" with which websites design cookie banners instead of having a standard form, or better, obeying the DNT bit?

Having a uniform experience that you can customize is very nice. It matches your expectations directly, which is core to good design. No weird scrolling behavior that you didn't expect, no laggy AJAX page loading, no Ctrl+F hijacking.

I'd rather something be more usable than arbitrarily "fun." You don't go around making zigzag roads, circle sliding doors, etc., which are arguably more "fun."

Not the author, but...

> Does the author also prefer talking to people who wear face-masks?

Right now, yes, yes I do.

> Do they shun syntax highlighting?

Syntax highlighting performs an actual service, as opposed to being cruft.

> Why take all the fun out of life?

This isn't taking the fun out of life, it's making a tool more useful.

> Why live like a Unix tool, taking in a plain text stream?

See above.

>> Does the author also prefer talking to people who wear face-masks?

> Right now, yes, yes I do.

Sorry, it seems I didn't make my point clear enough. Also not sure you're not joking. I just meant that if "a uniform experience" is good, having everyone wear a mask (at any moment in history, nothing to do with the virus) will make talking to people more uniform and thus better.

> Also not sure you're not joking.

I will admit that line was a bit tongue in cheek.

It's not that a "uniform experience" is better in every instance. It's more that a tool whose purpose is to convey information is more efficient when you don't have to deal with formatting that may or may not interfere with comprehension.

I'm also puzzled by this and I don't understand the argument of "cognitive load". Our brains are extremely adaptive and powerful, I don't understand how some different text formatting or background colors can be difficult to parse.

If all you do on the web is read news articles, then having a uniform experience is kind of desirable, but other than that, you're doing more harm to the UX by ditching modern browser support. How is draw.io rendering for you? How does the Bitbucket/GitHub/GitLab diff look for you? How's the YouTube watching going?

I guess those are different groups, but remember when everyone was complaining about all the sites using Bootstrap because they all looked too similar?

I agree in that I like how sites use different looks as part of their overall aesthetic. For people who don't like that, I'm not sure what's so compelling about using a perpetually probably-always-somewhat-broken tool that requires complex installation and maintenance rather than just use Firefox Reader Mode + maybe some extensions, but power to them if that's how they want to spend their time.

HackerNews, where "configure+make" is "complex installation and maintenance mode"?

...yes? Since when is configure+make ever just configure+make? Even today I had to do a configure+make that involved another 20 min of debugging to figure out that I had to make a symlink to hack gcc's broken platform naming conventions. Like I get that some people enjoy that, but I personally don't. The reason I write software is to let myself and other people avoid having to jump through annoying, arcane hoops. Software should be accessible and user-friendly even to the most naive user. It is not a badge of honor to be comfortable with a highly finicky, complex system that requires extra time that could be spent doing other things that you'd prefer to do instead.

> when is configure+make ever just configure+make

Most times, for me, it just is. Or maybe a little 'configure -help' to set some options.

That said, I prefer to create Arch PKGBUILDs to encapsulate that work. Maybe that is more to your liking?

Perhaps the "non-uniformity" of the modern web is not the source of fun in the author's life.

Isn't that more likely than the author wanting an unfun life?

If the web did offer a uniform experience, then the user would have more control over how pages look. Text only, Material Design, native controls: your choice.

I'm a weird computer nerd. When I'm working in my frugal environment (with links, i3 and whatnot) and I switch to something mainstream (e.g. Windows 10), I get a feeling of relief.

However, when I switch back from the mainstream to the frugal environment, I also get the same feeling of relief.

In the frugal environment, I feel very productive and creative. In the mainstream environment, I feel that I have more cognitive bandwidth to just get stuff done.

Anyone out there feeling the same?

Perhaps you can analyze this a bit more, I have no idea if you also tile in windows or if you have a bunch of random window sizes floating around the desktop or if you full screen every window etc. Or is it a difference in font quality, scaling, resolution? Or is it a difference in things just working, less time spent tweaking config and dealing with issues?

Tiling in i3 must feel somewhat constraining after a point, maybe experiment with switching between two different modes/WMs?

I don't do anything to my Windows. It's mostly vanilla.

I think it has something to do with the loneliness of having a very customized environment. It's good, but when you work in a mainstream environment, it feels like a lot of energy is being put by others into the same environment, which (sometimes) gets you high-quality defaults and saves you some time.

We like ascetics going into the wilderness to live in caves. So, without kidding, good on you! But some screenshots would have helped the article.

I wonder if the Stallman setup of getting pages by email wouldn't be about the same, given the number of proxies used.

I actually had an intern implement something like this long ago. We had a web crawler and what better way to test it than to hook it up to email then rewrite links so it emails them too. It was OK for a few days.

Do we like them, or do we use them as helpful reminders to not take the luxuries of our modern times for granted? (with a heap of respect for forgoing them thrown on top)

I also find this comment deeply ironic given that the post argues against images on the web, but screenshots would have helped.

The post argues against images on the web which are "advertisements of content" and "made to take over your attention again and again". A screenshot of how a popular page is experienced (e.g. HN: I wouldn't expect that to be bad at all) here would be content, I think.

The post argues for turning images off. Period. Because they are not useful and break the author's ideal "uniform" web page. He then mentions he might turn them on for sites that are useful (like wikipedia). He even explicitly states that he thinks that his view will be a giant controversy.

How would you possibly know if the images on this page would be informative content, or advertising attention grabbers, when they're all turned off?

Perhaps it's about prejudices we have when we interpret the content we read.

I get a dislike for "decorational" images like a photo of a random bridge over a river (c) AStockPhotoSite when talking about building a new bridge in the town: that ain't content. An illustration of the bridge to be, an architectural depiction, would be considered content.

Wikipedia seldom has images of the former type: they are all there to expand the content being represented.

So, I've read the article according to my biases, and I do not see such a strong opinion against images: to me it reads not as if they might, but rather that they do enable them on Wikipedia.

The Links site they link to has screenshots under Features [1]. It looks like something from the mid to late 90s.

[1] http://links.twibright.com/features.php

For anyone else confused, this is not talking about Lynx[0] which is a text mode browser, but Links[1] which is a GUI browser (same pronunciation, different spellings).

0. https://lynx.invisible-island.net/lynx.html

1. http://links.twibright.com/

Links has a text mode as well, but it sounds like OP is using the GUI mode.

[0] reminds me that I need to update https://lynx.browser.org/ for the new version number.

I am still looking for a decent CLI experience to replace my browser. The only reason I have X11 is Firefox, and of course, everything that comes with it: Slack, MS Teams and JIRA.

I dream of not having to deal with X and Gnome ever again.

I think you can use Slack and MS Teams thru Bitlbee and your fav irc client.

I have used links-x11 as my main mobile browser back around 2009-2012 when I used it on Openmoko Neo Freerunner :) It was surprisingly usable back then!

Great. But I wish somebody would get this python based web browser back to work. I miss it.


A friend used to work for a porn hosting company doing account maintenance and general sysadmin. He said that's how he learned how to use Lynx.

Nice story, but Lynx != Links :)

Unless of course he's got Lynx confused with ELinks, which I feel like is bundled with most RHEL/CentOS deployments. I became acquainted with it on a previous job that included the nightmare of trying to help remote customers setup printers via the CUPS interface when I only had SSH access to their box.

You could do port forwarding...

Or X forwarding to use a nice(?) GUI printer config tool (or run Firefox or Surf or whatever) if those are already on the remote machine.

Depends a lot on the connection speed though - sometimes, I've found it's faster not to go down this route.
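For the CUPS case specifically, plain TCP forwarding is usually lighter than X forwarding, since only HTTP crosses the wire. A sketch, with the hostname as a placeholder:

```shell
# Forward the remote CUPS admin interface (port 631) to local port 6310
# (6310 avoids clashing with a cupsd already running locally), then
# browse http://localhost:6310 in any local graphical browser.
ssh -L 6310:localhost:631 user@customer-box
```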

Ah, good old links2 -g

Why wouldn't you use lynx instead of Links?

I've used lynx for almost 30 years, but it is simply no longer capable of rendering the modern web in a usable fashion. It is, however, highly useful as a file manager and for stripping and importing web text via -dump. Also the best available gopher browser...
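The -dump workflow looks something like this; -nolist is the companion flag that suppresses the numbered link footnotes lynx appends by default (the URL is just an example):

```shell
# Render a page to plain text on stdout and page through it.
lynx -dump -nolist https://example.com/ | less
```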

>rendering the modern web

Maybe the modern web needs to change...not your browser.

Links has a GUI mode, so it can show images, colours, etc. I don't really get the point, though; Firefox with uBlock Origin seems much more practical.

Links had a much better UI, last I checked. Or at least more-discoverable. IIRC it rendered pages a little better, too.

Why don't people use elinks?

I did before switching to eww. Elinks is pretty nice.

Never heard of that one. Will check it out.

links also supports mouse pointer.

I don't just "attempt". Links, preferably no graphics (VGA textmode, no X11), has been my main browser for over 20 years.

Recent versions of links should remove the DNS prefetch code.

More menu items should have single key shortcuts. For example, Save formatted document, Flush all caches, Kill all connections and Submit form.

There should be a single key for toggling to html-numbered-links like there is for toggling to displaying images ([IMG] if no graphics).

I'd love to use a browser like this if I could download an installer. This download page brings back a lot of memories: http://links.twibright.com/download.php

I swear I spent most of my foray into undergrad CS trying to get third-party software to compile on my machine (i.e. wasting a lot of time).

I mean, what's wrong with downloading it with your distros package manager?

Well, I would, but I would be nervous about wasting "a lot of time" given the large-font warning. Have you ever run into the issue of installing something from an out-of-date package manager repo?

That web page probably hasn't been updated in decades. I just downloaded and ran the latest version of links and it took 5.3 seconds.

Maybe they're using Windows? That's the only case I can think of.

"If you want to install Links immediately, proceed step-by-step according to the following instructions. Otherwise you will waste a lot of time."

First I just tried "brew install links". It worked but built it without graphics support.

So I bit the bullet and tried to build it myself. Then I got to needing an X server installed and gave up.

I've grown too impatient and spoiled for this sort of quest.

In a similar vein, Lynx is probably my most used browser. I use Newsboat[1] as an RSS reader, set to use Lynx custom keybindings (to make it more VI-like than the VI setting) when opening links. It works surprisingly well.

[1] https://newsboat.org
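As a sketch, the relevant part of ~/.newsboat/config for such a setup might look like the fragment below; the browser line hands article URLs to lynx, and the bind-key lines are my own vi-flavored choices:

```
# ~/.newsboat/config -- open links in lynx instead of $BROWSER
browser "lynx %u"
# vi-style movement inside newsboat itself
bind-key j down
bind-key k up
bind-key J next-feed
bind-key K prev-feed
```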

links gui is my default browser for opening urls on desktop, as much as one can have that on fedora lxde with wine apps mixed in. (i actually have at least three "default browsers".)

links gives me a preview of the page in usually under a second, opening a fresh process and all, on a 5yo budget thinkpad.

as a bonus, twitter refuses to work with links, so even if i am tempted to open a twitter link, it just gives a 403, and i don't have to read whatever mainstream crap is on tv this week.

I'd like to have a group / subreddit whatever talking about this topic. And low fat computing too (I think it fits the view of this article).

Personally I used dillo a lot; it's so goddamn lean and instantaneous, it's crazy. I wanted to add SQLite and Lua as a scripting language to make it open to extensions, but I got stuck :)

In theory you can build[1] a text renderer on top of the Servo web engine, something like elinks 2.0. It would allow browsing the modern web while staying lean.

[1] https://github.com/servo/servo/issues/24162

Switching to using exclusively reader mode on sites made surfing the web a much more consistent experience for me.

Links seems like an odd stopping point. It used to support JavaScript, it supports images, but only HTML 4.

Serious question, what’s new in HTML 5 that would be useful in text mode? Even with image support <picture> doesn’t seem like it would be useful since that’s mainly about handling art direction and media queries and stuff that seems much less applicable there. Gotta say the idea of <video> implemented with libcaca does sound really funny though.

Links isn't a text mode browser - I think it will display videos and pictures.

Form controls would be an obvious example of something relevant I think.

I guess I was also confusing it with elinks, whoops. Maybe we should go back to the days when things were named like “Joe’s Editor”

Links v2 has both text mode and X mode.

Links <= v1 and its forks (elinks) have only text mode.

I rarely give recommendations on anything software-related, but if you are an AMP-hater who also appreciates text-only websites, try using links to view some AMP sites. In almost every case I have seen, the result is quite good.

Now if only we could have that result without AMP.

"CSS causes the internet to become a baroque set of arbitrary design decisions, and does not contribute positively to the general experience." <- Well said.

Noscript and pi-hole do a good job of providing me with a distraction free browsing experience.

All the more focus for posting to the most distracting site of them all, HN :)

That page loaded instantly on my PC, even though it was served from the other side of the planet. And it actually looked pretty good. So much for the "modern" web…

Whole page is around 5KB with three requests, one of which is the favicon. Browsers are fast when you just give them (mostly) regular ol’ HTML. The network, rendering nutty CSS with animations and gradients everywhere, rendering SVG, “hero” videos, giant PNGs, putting JavaScript between the browser and rendering HTML—those are slow.

>"Many browsers today are gigantic resource hogs, which are basically VMs for various web applications..."

Other code can run inside a browser VM as well, including but not limited to: malware, spyware, surveillance-capitalism apps, tracking, and other privacy-violating code. And with unpatched flaws and zero-days, privilege-escalation code can bypass the browser to run directly on your OS, becoming an inception point for worms, viruses, and all other manner of unwanted software...

>"On the other hand, Links is a HTML browser."

Thank God someone understands the dangers of modern-day browser VM's!

>"Links is a graphics and text mode web browser, released under GPL. Links is a free software.

• Links runs on Linux, BSD, UNIX in general, OS/2, Cygwin under Windows, AtheOS, BeOS, FreeMint.

• Links runs in graphics mode (mouse required) on X Window System (UNX, Cygwin), SVGAlib, Linux Framebuffer, OS/2 PMShell, AtheOS GUI

• Links runs in text mode (mouse optional) on UNX console, ssh/telnet virtual terminal, vt100 terminal, xterm, and virtually any other text terminal. Mouse is supported for GPM, xterm, and OS/2. Links supports colors on terminal.

• Easy and quick user control via pull-down menu in both text and graphics mode, in 25 languages.

• HTML 4.0 support (without CSS)

• HTTP 1.1 support

• Tables, frames in both graphics and text mode, builtin image display in graphics mode

• Builtin image display for GIF, JPEG, PNG, XBM, TIFF in graphics mode

• Anti-advertisement animation filter in animated GIFs

• Bookmarks

• Background file downloads

• Automatic reconnection in case of TCP connection breakdown

• Keepalive connections

• Background (asynchronous) DNS lookup

• Possibility to hook up external programs for all MIME types, possibility to choose one of more programs at every opening.

• 48-bit high-quality image gamma correction, resampling and Floyd-Steinberg dithering in all color depths.

• Font resampling (antialiasing) for virtually unlimited pitch range, LCD optimization of fonts and images.

• Builtin fonts in the executable without reliance on any fonts installed in the system

• User-adjustable menu, HTML font size and image zoom factor.

• User-adjustable display gammas (red, green, blue), viewing-condition correction gamma and precise calibration of both monitor and Links on a calibration pattern

• Automatic aspect ratio correction for modes like 640x200, 640x400, 320x200 with user-adjustable manual aspect ratio correction.

• Support for one-wheel mice (vertical scroll), two-wheel mice (vertical and horizontal scroll) and smooth scrolling by grabbing the plane with a mouse (no wheel needed).

• Easy installation, the browser is just one executable and no more files."

My comments: Thank you for writing Links!!!

Obligatory mention: Lynxlet is Links for Mac OS. Terminal-based, a pretty cool little packaging that does just its job.

[0]: https://habilis.net/lynxlet/

Edit: ah, my comment and my appreciation for the simple/straightforward design is very much in line with Lynxlet's mantainers, see their webpage for more [1].

[1]: https://habilis.net/

> Lynxlet is Links for Mac OS.

It seems to be Lynx, not Links (an entirely different browser.)

Oh my, you seem to be right. My mistake.
