An excruciatingly detailed guide to SSH (but only the things I find useful) (grahamhelton.com)
449 points by weeha on Aug 23, 2023 | 112 comments



There is an amazingly simple directive missing here:

    # in sshd_config:

    AuthorizedKeysCommand /usr/bin/php /etc/ssh/auth.php %u

    # in /etc/ssh/auth.php

    <?php
    $user = $argv[1] ?? '';
    $user = rawurlencode($user);
    echo file_get_contents("https://gihub.com/{$user}.keys");
This is obviously not production quality code, but just demonstrates the gist of the configuration. Basically, you can do a number of things, like verify the user is part of your org and in a certain group on Github. Then, if the user exists (and is rewritten via nss-ato or something), they can login to the server.

This saves a lot of trouble when off/on-boarding folks, since you can simply add/remove them from a github group to revoke or grant access to your machines.


Amazon Linux does something sort of like this, which I guess is 'production quality', meaning much more complex. It annoys me on older versions of Amazon Linux (2 and earlier) because it involves (among other things) an invocation of the openssl CLI to verify the format of individual keys in the authorized keys file that is hardcoded to use RSA, so you can't authenticate to Amazon Linux 2 hosts using ed25519 even though the version of OpenSSH on them supports it.

In theory it's kinda nice because it can let you do fancy things¹, but my actual experience with it breaking basic functionality even for people who don't use those fancy things has ultimately made me trust Amazon Linux less.

It was especially frustrating because when I first encountered this, I was trying to SSH into a box owned by one of our cloud-first DevOps guys. I couldn't diagnose the box because I didn't have hands on it. He couldn't diagnose the issue because he knows AWS better than he knows Linux and didn't know where to look. He'd chosen Amazon Linux because it's by the owner of the cloud platform, so it must be 'more compatible', right? But here, 'more compatible' actually meant 'more full of stupid surprises'.

Bleh.

--

1: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/connect-...


Use AWS SSM with something like Leapp.


Thanks for the tip! I will probably learn and play with SSM at some point.

As it is, I'm much the opposite of our cloud-first DevOps guys: I know how to operate Linux a lot better than I know how to operate AWS. That makes yours a more intimidating proposition to me.

At the same time, this particular box was a shortlived VM that was already behind a VPN, my organization already has tentative plans to implement a different network access system than SSM, it won't be my project to execute, and I don't get to be in the room for architecture decisions related to it.


Things like this make me wish I wasn't on the network side (where we miss out on awesome shit like this because "the network isn't working right" is part of our job).


The real life script runs a cron job and caches the keys for all users in a group. It always does a sanity check on the user, but if that check fails it still allows the user to log in (because we all know GitHub goes down) as long as they have cached keys.
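
A minimal sketch of that idea (not the real script; the group name and paths are made up):

    #!/bin/sh
    # Refresh a local cache of GitHub keys for everyone in a group; if GitHub
    # is unreachable, the previously cached keys are left in place.
    CACHE=/var/cache/ssh-keys
    mkdir -p "$CACHE"
    for user in $(getent group developers | cut -d: -f4 | tr ',' ' '); do
        if curl -fsS "https://github.com/${user}.keys" -o "$CACHE/$user.tmp"; then
            mv "$CACHE/$user.tmp" "$CACHE/$user"
        fi
    done
The AuthorizedKeysCommand then just reads the cached file for the requested user, whether GitHub is up or not.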


Can the real life script be shared?


little typo there: "gihub"


brb buying gihub.com


Registered 2010, I think you’re late


Nice catch! Too late to edit it though.



Really handy for NixOS users!


You really should use ssh certificates for this instead.
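
For anyone who hasn't used them, the rough shape is something like this (names and the validity period are just examples):

    # Sign a user's public key with your CA key:
    ssh-keygen -s user_ca -I alice -n alice -V +52w id_ed25519.pub

    # On the servers, trust keys signed by that CA (in sshd_config):
    TrustedUserCAKeys /etc/ssh/user_ca.pub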


Yeah, if you have PKI infra, that’s the way to go.


why php?


It’s the only language I could think of that could express a whole working example in only a couple of lines of code in a highly readable way.

Other languages I know would require imports/boilerplate (Go/C#/C/Python/Scala/JS) that would distract from the example. Bash might be more familiar to devops folks but less familiar to regular programmers — part of the illustration I wanted to make was that it didn't need to be a devops thing. Also PHP (along with JS) is a language most devs know a little bit about, whether they want to or not.


Worked out beautifully imo. That PHP script is super concise and readable, so it's a great little illustration.


Can curl do the url encoding? Isn't there some new "variables" thing in the newest release?

Can valid Unix usernames have special characters that need escaping? I should know that by now.


Here's something I bet few people know: the OpenSSH configuration parser ignores duplicate directives; only the first such directive has any effect. This is more than a little counter intuitive as IME the more common semantic in configuration parsers and rules engines is for subsequent directives to take precedence over previous ones.

This may seem inconsequential, but IME when changing defaults in, e.g., /etc/ssh/sshd_config, people and software tend to append their changes to the end of a file or directive block, not the beginning, expecting those changes to be effective. Even security companies and organizations get this wrong, including various SSH bastion products I've seen. CIS Benchmarks recommendations (IIRC) and most (all?) third-party CIS audit suites don't consider precedence at all or get it wrong--e.g. by recommending appending a value, or by providing a compliance test that accepts a broken configuration. FWIW, the proper way to check whether an OpenSSH configuration directive is defined as expected is to use `sshd -T` or `ssh -G` to dump the derived, internal configuration, not by directly inspecting the configuration file(s).
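
For example, roughly:

    # Effective server configuration (run as root):
    sshd -T | grep -i permitrootlogin

    # Effective client configuration for a given destination:
    ssh -G host.example.com | grep -i identityfile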


> Here's something I bet few people know: the OpenSSH configuration parser ignores duplicate directives; only the first such directive has any effect. This is more than a little counter intuitive

This is how the sudoers file works as well. I think this is desirable in software that authenticates or authorizes users, and maybe more broadly wherever security concerns are essential. That's because this logic makes it easy to create settings that can't be overridden by adding a new file in a whatever.conf.d directory: you define those settings in the main config file before you source whatever.conf.d/* and you put some kind of special protections on that file.

Even where you're not worried about somebody evading your controls per se, it can be nice from a configuration management perspective in giving you a soft 'guarantee' that if some new hire who doesn't have the whole picture adds a new file in there, or some package tries to install a stupid default for its own service, your baseline settings can retain priority.
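
A sketch of that layout (hypothetical file; assumes an OpenSSH new enough to support Include):

    # /etc/ssh/sshd_config -- first occurrence wins, so the baseline goes first
    PermitRootLogin no
    PasswordAuthentication no

    # Nothing dropped in here later can override the lines above
    Include /etc/ssh/sshd_config.d/*.conf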

In other contexts you probably see the opposite behavior because what you really want is not a 'baseline configuration' but a collection of defaults in the strict sense: fallback settings to be used in case nothing is explicitly configured by the user, developer, or administrator (as the case may be).


> This is how the sudoers file works as well. I think this is desirable in software that authenticates or authorizes users, and maybe more broadly wherever security concerns are essential.

I strongly disagree - the most important trait for security-relevant configuration is that it works like you'd expect.

> That's because this logic makes it easy to create settings that can't be overridden by adding a new file in a whatever.conf.d directory: you define those settings in the main config file before you source whatever.conf.d/* and you put some kind of special protections on that file.

What is gained by this? The administrator controls both of these locations.

> Even where you're not worried about somebody evading your controls per se, it can be nice from a configuration management perspective in giving you a soft 'guarantee' that if some new hire who doesn't have the whole picture adds a new file in there, or some package tries to install a stupid default for its own service, your baseline settings can retain priority.

Instead the new hire or a package adds important settings that now have no effect. In either case you need to review the whole configuration - reversing the order of precedence doesn't do anything except confuse users.

If you want to have a configuration file that has the final say then make that explicit - and that absolutely doesn't require the order inside that file to be reversed. If the subdirectory is explicitly included in the main config then you can add your overrides after that include anyway.


I gave my best effort at a rationalization for what we see in OpenSSH, but in light of some of my misremembered peripheral details and your arguments here, I've changed my mind.

Sometimes you do want to set a baseline, and other times you do want to set a default. But using an unconventional ordering for overrides in your config file format isn't the right way to do anything.

Thanks for laying this out.


I suspect OpenSSH's precedence order might be a consequence of the semantics and syntax of the Host and Match directives. But I haven't looked into the history.


I'm an idiot about sudoers and got this completely backwards


I think some directives can be duplicated -- like AllowUsers.

But I got bit by something related yesterday when NixOS suddenly changed the merge order and put the AllowUsers from my own file below a Match from another file, locking me out :(


> The best way I found to remember this is local forwarding with -L means local is on the left-hand side of the address. Remote forwarding with -R means the local port is on the right-hand side of the address.

This is the most important, succinct statement made in this piece. -L and -R confused me from the get-go. Having the side that counts as "local" change between -L and -R is, in some ways, annoying. I "get" that -L and -R change the direction of intentionality, where initiator and responder are, but I think it might have been sensible to make a port:address:port phrase ALWAYS refer to local:binding:remote and have -L and -R define which was listen and which was send.
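
For reference, the two forms (hostnames made up):

    # -L: listen on local port 8080, reach db.internal:5432 from the server's side
    ssh -L 8080:db.internal:5432 user@server.example.com

    # -R: listen on the server's port 8080, forward back through the tunnel to my laptop's 3000
    ssh -R 8080:localhost:3000 user@server.example.com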


I think it's easiest to remember that -L listens on a local port whose number comes right after, and -R listens on a remote port whose number comes right after.

Then the rest (the host:port) is just the normal way to tell where to connect.

Since we're doing port forwarding over an SSH tunnel, it's obvious that the host is contacted from the other side of the tunnel than where the listening port is.


A lesser known but quite useful bit of ssh is connection multiplexing. Rather than establish a new tcp connection, doing the auth dance, etc, you can tell ssh to reuse an existing connection. (The protocol itself has a notion of channels, a bit of metadata with every data frame to distinguish different streams, and this functionality uses that).

The big thing with it is that you don't have to do a full auth for subsequent sessions - nice if you don't have tmux (etc) on the remote, and do multiple panes via multiple terminal windows. Particularly when auth involves a passphrase and hsm touch or similar that can take several seconds.

It also has a "connection persistence" setting so when you're bouncing around between a handful of servers you don't have to auth each and every time you switch between servers.
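
For anyone curious, the client-side knobs look roughly like this (the socket path is just an example, and the directory has to exist):

    # ~/.ssh/config
    Host *
        ControlMaster auto
        ControlPath ~/.ssh/sockets/%r@%h-%p
        ControlPersist 10m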

Overall I think of it as one of those features that's nice to have, but not really life changing or anything - some servers I connect to have it turned off and I notice its absence more than I notice when it's working.

More info: https://en.wikibooks.org/wiki/OpenSSH/Cookbook/Multiplexing


It makes a huge difference if there’s significant latency between the client and server: ssh is a very chatty protocol, which has not been optimized to reduce round trips like TLS has been - apart from this multiplexing option.


This can make Ansible much more usable when going through a (ProxyJump) bastion host.
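
E.g. something like this in ansible.cfg (values are only illustrative), with the ProxyJump itself configured in ~/.ssh/config for those hosts:

    [ssh_connection]
    # Reuse one ssh connection per host instead of re-authing for every task:
    ssh_args = -o ControlMaster=auto -o ControlPersist=10m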


If you have a lot of hosts listed in your ~/.ssh/config file, you can keep the file from getting too cluttered by using the Include directive, which supports wildcards...

    # in ~/.ssh/config
    Include config.d/*.conf

    # in ~/.ssh/config.d/work.conf
    host work
        hostname myoffice.example.com
        user myuser

    # in ~/.ssh/config.d/client1.conf
    host client1.dev
        hostname dev.client.example.net
        user someuser
    
    host client1.prod
        hostname prod.client.example.net
        user someuser


The host directive also supports wildcards.

For example, add `host *_work` and then some stuff that is the same for all work hosts, like host1_work.
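
E.g. (names made up):

    host *_work
        user myuser
        proxyjump bastion.example.com
        identityfile ~/.ssh/id_work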


There's an additional trick - you can put an Include inside a Host/Match directive.

  # in ~/.ssh/config
  Host proj1.*.corp
    Include ~/.ssh/proj1.conf

  # in ~/.ssh/proj1.conf
  ...
This way, I can put project-specific matches at or near the top, while being sure I don't have to wade through numerous individual files during review.


For forwarding I almost never do -f. It can be a footgun in making it hard to tell which forwards are still open or operational.

-t is a cool trick, didn't know about that one.

An important note that's easy to overlook in the ~ escape command list is you can nest the escape when in nested sessions (i.e. if you're not using -J for whatever reason).

Cool list, it definitely lines up with what I've found useful and has a few more.


-t is great, I use

    $ ssh -t my-dev-vps 'tmux new-session -A -s main'

So each time I run it I'm right back where I left off.


Sometimes you have to use it twice if you're doing some (can't remember) bonkers stuff. I used to run commands on thousands of machines using a silly script I wrote.


> For forwarding I almost never do -f. It can be a footgun in making it hard to tell which forwards are still open or operational.

That kinda still is a problem when you have multiple shells open to the target server. I wish SSH exported it in any reasonable way aside from trying to get it myself from the process list...


If you avoid -f you can also chain commands knowing the second will only be run when ssh is finished running. Something I do often:

  ssh host.example -N -D 1080 && wall "Socks proxy to host.example closed."
The second command can be as complex as you want but a message in all terminals usually does the job for me.


There's a current pull request for adding AF_UNIX support, which should make all kinds of exciting forwarding possible, since it will make it easy to proxy ssh connections through an arbitrary local process which can do anything to forward the data to the remote end.

https://github.com/openssh/openssh-portable/pull/431


The one I am interested in is -D using AF_UNIX, but it's good to see everything possible working over AF_UNIX. It looks like curl as of about a year ago can use AF_UNIX SOCKS via the ALL_PROXY syntax socks5://localhost/path (or socks5h). It looks like this was added due to Tor using an AF_UNIX SOCKS proxy. I want this so that network access can be configured via standard unix permissions (and ideally, IMO, so TCP/IP can be kicked out of the kernel entirely).


The SSH console blew my mind when I first saw it. A coworker showed me ~# and it felt like discovering some kind of secret cheat menu you'd see in a SEGA Genesis game.


Why tilde? Because rlogin, rsh used it.

Why did rlogin, rsh use tilde? because cu used it.

Why cu? Because if you had a modem or serial line, cu was the way you talked to it to send Hayes codes, and you can't use the Hayes escape sequence as the breakout because it would break out to the modem itself, so you needed a separate signal to break back to cu.

Why not ^]? Because that's telnet. So if you had telnetted to a host to connect to the modem over cu, you needed a distinct break-back for cu, to not break back to telnet.

It's breakout syntax all the way down.

Also, it's not actually just tilde, it's <cr>tilde.


I thought it was CR, tilde, dot? Have I been doing it wrong?


Tilde dot is the disconnect sequence for SSH. Other characters after tilde have other uses as seen in the post.


And if you're N connections deep, use multiple tildes, depending on how many levels you want to disconnect. I think.

It's amazing how switching jobs to one where you just write terraform and yaml all day gets your terminal skills all rusty.


There's a song about it, it's called "Ctrl up-arrow Q".


The dot command terminates the ssh session. There are a lot more uses for the escape sequence.
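
A few of them, from memory (type ~? inside a session for the authoritative list on your version):

    ~.   terminate the connection
    ~?   list the available escape commands
    ~#   list forwarded connections
    ~C   open a command line (add or cancel port forwards on the fly)
    ~~   send a literal tilde (to reach the escape of an inner, nested session)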


> ssh-copy-id

This section starts out talking about how the command uploads your public key then seamlessly switches to saying it uploads your private key (which I am guessing are typos).

Also that command does not just upload the key, it appends it to ~/.ssh/authorized_keys which is considerably more useful.

Finally, in the ssh-keygen section: from everything I've read, ed25519 is preferred these days over ecdsa.
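
For reference, the flow is roughly (host name is made up):

    # Generate an ed25519 key pair:
    ssh-keygen -t ed25519

    # Append the *public* key to ~/.ssh/authorized_keys on the remote host:
    ssh-copy-id -i ~/.ssh/id_ed25519.pub user@host.example.com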


Some years ago, I read a post on HN where someone made a text-mode game (? or something similar?) available through SSH. People could play the game by opening an SSH session and playing from their terminals. This was non-trivial, and they explained all the ways they configured sshd to prevent players from running binaries other than the game.

I didn't bookmark that post, and I haven't been able to find it again to my great dismay. If anyone remembers this post and still has it, I'd love to read it again.



It's not that, but still very cool!


> they configured sshd to prevent players from running binaries other than the game

You might be interested in how to limit users to a nologin shell coupled with subsystem access:

https://news.ycombinator.com/item?id=3527754


None of the links here seem to be the one post that I found years ago, but this one in particular is pretty interesting.

I found this[1] one while looking around, the thread has some additional interesting remarks, including an LD_PRELOAD attack vector.

[1]: https://news.ycombinator.com/item?id=3550944



I'd add to this list that in 2023 you should be securely storing your key in a HSM.

On Mac, that's easy to do via the Secure Enclave: https://github.com/maxgoedjen/secretive


For people using 1Password you can get it to act as an agent to lookup keys directly in your vaults. Again, great integration on a Mac where you can use your fingerprint each time the key is required.


I've been hacking on `ssh-tpm-agent`, which allows you to create or import TPM-sealed keys. This is practical as it prevents key extraction, and it has dictionary-attack protection, which allows you to use 4-digit PINs instead of passphrases to protect your private keys.

https://github.com/Foxboron/ssh-tpm-agent

Currently hacking up better support for `HostKeyAgent` and `HostKey` for `sshd`.


It's never enough for you, is it?

I used to use an easily memorizable password, but you said that was wrong, and set me straight. Now my password is so complex that I have to rely upon a 3rd party service that keeps getting hacked.

Then you insisted I use keys. Afterward, you became irate if I left the keys in my work dir.

Now you want me to lug around a 2U HSM appliance?!

For shame!


Physical access only. The server is air-gapped.

And there's a big dog chained to the desk beside it. Biometric security, you see: if you don't smell right to Brutus, you don't get to log on.

Look at our security rituals from an outside view: we sure do seem to spend a lot of time propitiating our idols of one kind or another.


> Now you want me to lug around a 2U HSM appliance?!

If you don't need a certified HSM that generates keys on device (and you don't, right? You can generate keys on a ramdisk from live media with no persistence and no/encrypted swap), you can use basically any PGP smartcard, including nice little USB ones like Yubikey and NitroKey. And even if you do you can get a little USB HSM.


Can't / shouldn't lug 'em around, some of these boxens have shock, temperature, and movement failsafes. Anti-tampering, you see.



One more useful trick I'm surprised wasn't mentioned since -D and -R both were: if you do "ssh -R 8080 somehost", that does dynamic port forwarding just like -D, but on the remote end instead of the local end.


The remote port forwarding example seems wrong. It's specifying the loopback address which would be pointing to vuln-server (where we are connecting via SSH) and not internal-web, right? How is vuln-server accessing the site hosted on the loopback of internal-web?

Edit: Okay now I see that command is supposed to be run from internal-web and not campfire. I guess you would also have to ProxyJump through vuln-server to internal-web to even run that command!


This is really cool stuff.

I just wanted to mention that not only does it have subsystems like sftp, but you can make up your own subsystem (I use paramiko!) to do just about whatever you want. Cool stuff like exposing the remote host's sound or random devices over ssh, cool BBS displays/apps, or, for the sneaky: a command and control protocol between your malware/implant that proxies normal ssh for normal ssh clients as usual.


Tangentially related, I love that VSCode has an extension that lets you ssh into a remote host folder and treat it like a workspace. Most useful thing ever.


In emacs land "tramp" is the thing that does this. It is pervasive enough that if the files you are editing on some remote host are in a git repository, then magit just works, same as on the local machine (mod latency). Tramp is the modern-ish version of this facility, I think a little more than 20 years old now? There was a predecessor that I used back in the 1990s whose name I forget, which allowed remote editing of files and browsing remote filesystems with dired, but not remote commands as needed for features like compilation-mode.


I can't read this unless I hack up the CSS to not be dark text using a thin typeface on a black background. I could edit the CSS so that it's normal fonts with higher contrast colors, but instead I think I'll go "someone made something on the internet and I'm not the audience, good for them, but I'm closing this tab again."


Doesn’t your browser of choice have some kind of reader mode?


Unfortunately, reader mode doesn't magically change the colours in all those code fragment images.


`-g` was new to me. I believe I have done something similar by providing an explicit bind address to -L, like this:

    ssh -L 0.0.0.0:2222:10.0.0.1:22 host

I think this will bind to 0.0.0.0:2222 (allowing remote hosts to connect) and forward all traffic to that port to 10.0.0.1:22 (from the server's perspective).

The biggest gap in this collection of tricks (IMO) is SSH certificate support.


Maybe (hopefully :)) also interesting for those who enjoyed this article: https://johannes.truschnigg.info/writing/2022-07_advanced_ss...


One thing I would find interesting would be how to read someone's private key or agent from RAM. For example, when ssh agent forwarding to a machine, root there could probably extract that agent.


I don't believe a remote host that has access to your forwarded agent can extract the keys (*).

But they can tell the agent to authenticate with any key loaded in your agent, not just the one you used to ssh into the machine you forwarded your agent to.

So e.g. if you have a distinct ssh key for GitHub and a different one for all other uses and you ssh to a compromised server with agent forwarding, the attacker can then ssh to GitHub as you.

(*) There was a vulnerability not too long ago involving getting the remote agent to load arbitrary shared objects for remote code execution, which obviously changes things.


So if I have access to the agent and can authenticate with it at another remote location, then if I control that remote location I should be able to grab the key at _some point_ during this process. I may have to implement my own ssh server to do so but it should theoretically be possible, no?


Not as far as I'm aware, at least not without some really clever attack. The design is such that all you're doing is asking the remote agent to use its private key to sign things; it never actually exposes the private key. Your agent doesn't even always have access to your private key - it could be on a TPM or Yubikey where the key never leaves the device. You are just passing around a connection to your agent that is able to authenticate with a private key.

I would not be surprised if there is some way to attack this into getting a private key, but it would either be a direct attack on the agent code like sending it malformed messages to somehow get remote code execution to then read the key. Or some more complicated attack on the cryptography where you repeatedly force it to auth and can somehow use the results to reduce the key space needed to brute force the key - along the lines of a known plaintext attack.

But I'm also just a hobbyist here who has looked a little into the security model; I am by no means a cryptography expert, so take all of this with a grain of salt.


Hard to do with secure enclaves. You should protect your agent on your local machine so it doesn't allow requests willy-nilly if the machine you ssh through is part of the threat model. You may need to rethink whether to use agent forwarding at all if that is something you need to worry about.

There are a lot of details in this that can go wrong. I seldom use agent forwarding in unknown/undesigned environments because of this.


I'm not sure about extracting keys from remote agents, but using a remotely forwarded agent is just like using a local agent. Just point the client at the right socket.
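
i.e. on the remote side, roughly (the socket path is just what it tends to look like):

    # The forwarded agent shows up as a unix socket on the remote host:
    echo "$SSH_AUTH_SOCK"    # e.g. /tmp/ssh-XXXXXXXXXX/agent.12345
    ssh-add -l               # lists the keys your local agent is offering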


Random tidbit, the -g option isn’t global port but rather gateway port. Also don’t forget to enable gateway ports in sshd_config
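
i.e. something like this (the exact policy is up to you):

    # in sshd_config, so remote (-R) forwards are reachable from other machines:
    GatewayPorts yes    # or 'clientspecified' to let the client choose the bind address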


Another member of the intergovernmental agency domain squatters club.


For those wondering what this is referring to: it's the use of ".int" for the TLD of the examples. ".test" and ".example" are the only really good reserved TLDs for this (".local" isn't quite what people think it is).


Example.com

People should just use example.com, it’s reserved and the most obvious to the end user that it’s for example purposes.

https://en.wikipedia.org/wiki/Example.com


Example.com is certainly acceptable as well, but it does have a somewhat negative side effect of being an actual running, pinging, sometimes-service-responding host, which means if people paste the documentation verbatim it may either actually execute or take longer than normal to time out (ssh would fall into the latter). ".example" is reserved for the same purpose but does not have any registered domains, on top of being a few characters shorter for when you need more than one name in your documentation.


Doesn't work if you need to refer to more than one domain.

.example is also reserved for this exact purpose.


It does work when you use subdomains. And there’s also example.org and example.net.


> It does work when you use subdomains.

That's true but most people don't understand what subdomains are, so we're back to being (more) difficult for laypersons.

> And there’s also example.org and example.net

Well, that's a good point. There are options.


I think in the context of SSH, use of subdomains is very common and to be expected.

As far as laypersons are concerned, I would worry that they wouldn’t recognize foo.example to be a domain name, as opposed to www.example.com or blog.example.com.


I forgot we were talking about SSH :)

But in your example, you're comparing different structures. "www.foo.example" is somewhat more clear.


Well, it was you who didn’t want to use subdomains. And with subdomains, we can just stick with sub.example.com/net/org.


> ".local" isn't quite what people think it is

Whereas ".intranet", ".lan", and some others are what (some) people think ".local" is[1].

[1] https://www.rfc-editor.org/rfc/rfc6762#appendix-G


I am not sure what you are hinting at with your side comment about .local, but I came here to say that I pretty much love .local. I switched all my home machines to the systemd-resolved stub with mDNS enabled. Feels so much nicer than maintaining your own DNS or hosts files... Yes, there is some initial lookup delay, but that's totally fine for private use, methinks.


>I am not sure what you are hinting at with your side comment about .local

They are probably referring to its special status and special handling.

https://en.wikipedia.org/wiki/.local

>The Internet Engineering Task Force (IETF) reserves the use of the domain name label .local as a special-use domain name for hostnames in local area networks that can be resolved via the Multicast DNS name resolution protocol.[2] Any DNS query for a name ending with the label local must be sent to the mDNS IPv4 link-local multicast address 224.0.0.251, or its IPv6 equivalent ff02::fb. A domain name ending in .local may be resolved concurrently via other mechanisms, for example, unicast DNS


Sadly it’s more complicated than that.

Back in the 1990s, Microsoft distributed a Netware-killer version of NT called Windows Small Business Server. It was aimed at companies that might at best have a dial-up internet service. SBS, being based on Active Directory and Exchange (etc usw) required a domain name, but back then you needed considerable arcane knowledge to register an Internet domain name, which most SBS users lacked.

So Microsoft recommended that their SBS users should pick a name under .local for their AD domain name. I will not relive the many hilarious fuckups this caused, especially when Exchange was trying to use POP3 for incoming email. [tedu’s comment upthread reminds me of an incident when I spotted a company that was clever enough not to use .local for their AD, but not clever enough to understand that corp.int is not an internal subdomain of corporation.com.]

(What MS should have done, instead of squatting on a name that might get created as a real TLD in the future, was tell their customers to make up a subdomain of a properly registered domain of MS’s own; if MS’s customers wanted to turn their fake domain name into a real internet presence, MS would have had a ready-made lever to turn their customers into subscribers. But that was 15ish years before MS realised Azure might be a good idea.)

OK, so part two of this story is the early years of Mac OS X when Apple needed a replacement for AppleTalk that worked with IP over Ethernet. The main gap that needed filling was zero-configuration service discovery, which AppleTalk had enjoyed forever and IP lacked. The solution was called Rendezvous or Bonjour (I forget which name replaced which) and multicast DNS was a foundational part of it. Apple did an incredibly effective job of getting other vendors (especially printer manufacturers) to adopt the new protocol.

HOWEVER, Apple needed to choose a domain name for mDNS so that names of devices on the LAN could be distinguished from names out on the internet. They chose .local because a LAN is a local area network.

Hence, hilarity ensued. So much confusion and failure to interoperate because two large corporations failed to appreciate the importance of a global shared namespace, and foolishly chose the same cute name to mean completely different things.

Possibly the saddest episode was when Apple were in the process of turning mDNS from a de facto standard (with multiple implementations across multiple vendors) into an IETF standard. MS tried to derail the effort by persuading the IETF to spin up a working group to develop LLMNR, link-local multicast name resolution, TOTALLY NOT mDNS HONESTLY. Surprising no one, there was zero interest in replacing a successful working deployed protocol with slightly differently shaped vapourware. Rough consensus and running code wins again.

The upshot of this is that the IETF has a lot of institutional trauma and scar tissue around the question of non-DNS domain names. (see also .onion and others)


I know at least one case of a Microsoft Consultant who suggested to use company.local for a new Exchange setup for roughly 5k employees. Unfortunately, his suggestion was actually implemented.


One thing I haven't been able to figure out with .local is that with multiple connections (wire to the LAN, wifi to the internet, and VPN to the office network), the computer seemingly picks one of them at random for its .local address. I don't know how to fix that without setting each of them manually.


I switched to using .home.arpa[0] on my local network last year. Looked funny to my eyes at first, but it seems normal to me now.

[0] https://datatracker.ietf.org/doc/html/rfc8375


mDNS is the right way; the comment was directed towards the common mistake of thinking it's like .lan or .home.arpa.


Great article that collected a lot of info that I usually wind up looking for separately.

But I have to ask: do people really find color schemes like this easier to read? I'm squinting at it throughout.


> I'm squinting at it throughout

Same here. I have anecdotally noted that many people, like myself, who were around in the days of actual green screen terminals and later green monochrome monitors are far less likely to prefer dark mode than younger people.

For me, the advent of color screens that made black-on-white text possible was a huge improvement in terms of readability and eye strain reduction, and I cannot imagine going back to the old ways.


As a younger person who strongly prefers black text on white backgrounds, I talked to some of my friends who use dark mode and discovered that they almost exclusively use their screens in dark rooms. Scrolling through their phone in bed with the lights off, for example. And younger PC gamers typically leave their room light off when they're on their computer.

The constant "light mode hurts my eyes!" never made sense to me until I tried using my computer in a completely dark room. So I think the trend over the past decade isn't really dark mode itself, it's more people using screens in dark rooms.


I am a data point towards this. I use dark color themes in my living room, and light-ish color themes on the balcony.


Was around in the tail end of the glory days of monochrome green/amber terminal. Still love me some dark mode. I'll admit though I've never understood light gray on black when other options stand out better.


Amber was better than green.

And as someone else mentioned, dark rooms may be part of it too. I find light backgrounds less obnoxious in brighter light, but I avoid brighter lights by preference.


I grew up on green CGA, goldenrod Hercules monitors and grayscale mono VGA ones.

Indeed, for the past decade I also prefer the light themes, though I find these dark themes pretty usable too: https://protesilaos.com/emacs/modus-themes-pictures

As I've noticed, the main problem with dark themes is the low contrast usually. These modus themes were designed scientifically, to meet the contrast ratios recommended by the Web Content Accessibility Guidelines.

Also, back in those mono-monitor days, people just kept the brightness at the same level they used during the day and complained about how tiring it was for their eyes to "use the computer for so many hours".

My eyes never got tired, because I was constantly adjusting the brightness to match the ambient lighting, and I drastically lowered it when I was coding in the dark.

I've noticed that dark-theme users (eg during live streaming) crank up their brightness/contrast to compensate for the low-contrast dark themes, then they are surprised to be blinded when they open a webpage, which is extremely likely to be light themed, and they keep complaining about light themes...


> I've noticed that dark-theme users (eg during live streaming) crank up their brightness/contrast to compensate for the low-contrast dark themes

On the whole, this actually allows you to achieve the same contrast level with lower total light emission. If you are in fact clinically light sensitive and you also have difficulty with low contrast, you just can't achieve the balance of contrast you need as effectively with light themes.

> then they are surprised to be blinded when they open a webpage, which is extremely likely to be light themed

For me it's well worth it to sacrifice the 'artistic integrity' of web designers' intentions and use something like Dark Reader or Midnight Lizard to force a dark theme across the whole web.


Those CRT screens also bled horribly (or amazingly) and it is not at all the same as pencil-thin text on a retina display.


> But I have to ask: do people really find color schemes like this easier to read?

No, and also not the monospace font for prose.


> do people really find color schemes like this easier to read

I simply turned off style sheets in Firefox (Alt+V Y M [View menu->Page Style->No Style]) to get rid of the website color scheme and have it render using the colors I have set as default in Firefox.


Do you mean the color scheme of the web page itself or the color scheme of the terminal in the screen captures? They're pretty close so maybe the distinction doesn't matter. The former seems more legible than most pages I see posted here since most of them have poor contrast. The latter is approximately what I use in my own terminal, except my text color is closer to that of the text editor screen capture than any of the shell ones for the same reason of contrast. I've pretty poor eyesight in general though so perhaps there's something to that.


> do people really find color schemes like this easier to read

That scheme works great for me. Not just "dark mode" - if it were my site I'd have made the colors more neon-ish.


I find the color and the font hard to read... Guess we are all different.



