Keystroke timing has been a concern for terminal I/O since the 1980s, when folks were using primitive encryption with stelnet and Kerberos.
Most terminal applications use buffered I/O for password entry, which is still an important security feature. In that mode, nothing is sent to the other end until the user presses return. A MiTM only "sees" one packet no matter what, and with padding they can't even infer the password length.
For a time, there were rich pickings in applications that accepted passwords in unbuffered mode. Many of them did it so that they could echo "*" symbols, character by character, as the user typed. That simple feature looks cool, and does give the user feedback ... but it leaks the keystroke rate, which is the last thing you want on password entry.
I hope we preserve buffered I/O for password entry, because it's still better than what ssh can do with any amount of obfuscation. But it's great to see ssh add this; it will help protect content that can't be buffered, like shell and editor input.
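For the curious, a minimal sketch of the two modes (Unix, Python stdlib only); the raw-mode half is the "*"-echo pattern described above, written out to make the per-keystroke reads visible:

    # buffered (canonical) entry: the tty driver holds the line until
    # return, so a remote peer sees at most one network write
    import getpass, sys, termios, tty

    pw = getpass.getpass("password: ")

    # unbuffered entry with '*' echo: raw mode, one read() per keystroke,
    # so every key becomes an observable event on a remote session
    fd = sys.stdin.fileno()
    saved = termios.tcgetattr(fd)
    try:
        tty.setraw(fd)
        chars = []
        while (c := sys.stdin.read(1)) not in ("\r", "\n"):
            chars.append(c)
            sys.stdout.write("*")
            sys.stdout.flush()
    finally:
        termios.tcsetattr(fd, termios.TCSADRAIN, saved)
    print("\nread", len(chars), "characters")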
Aside: I've noticed that the current technique of rendering a fixed number of asterisks independent of the password length is quite confusing to users -- "that's wrong, it's the wrong length" -- resulting in attempts to type in the "correct" password and thus obviating the benefit of the stored password.
Not sure how to fix that. I recall a visible hash of some form being used in the past (e.g. take a 2-digit hash and pair it with a smiley; I must have entered it right, it's showing me the ROFL smiley), but that would aid shoulder-surfed password entry, at least.
I've seen a GUI password input field that mutated an abstract line drawing on every keypress. Think random cross-hatching over the whole input field where the lines are nudged a little on every press.
(Not that that's necessarily a good idea, it still gives away timing/length information to e.g. cameras.)
At IBM someone made something called fetchnotes that worked like fetchmail but handled mail and calendar; it let me get away with minimal usage of Notes.
I think it was to provide an indication that the password was correct at a glance. (IIRC the number of dots in the password field was also generated, so it didn't necessarily match the number of chars)
The image was essentially a simple checksum. Each user would eventually memorise which icon was "theirs".
I'm honestly seeing little value in asterisks with WFH and the move to passphrases. Feedback is important when you're typing a long phrase with complete precision. Plus shoulder surfing is simply not a thing when my physical security profile now involves a locked front door and a call to the police.
WFH also means Working From my backyard, the coffee shop around the corner, the library, a friend's house, a hotel room, etc.
Even for people who only work at home while working remotely, private homes can see a lot of traffic. I wouldn't assume all screens are kept and used in totally secure environments so we should probably still stick with masked passwords and telling users not to keep passwords written on a post-it note stuck to their monitor.
And now employees simply leave their laptop open with the SSH window up while getting their coffee because it's now so annoying to close the lid and correctly type the password.
> I would hope people in high leverage job roles would just avoid such behavior.
I used to hope that as well. Then I met people and lost that hope. It's truly impressive how much stupid shit gets pulled by people that "should know better."
Unlocking the password manager means I need to type a master password while in a public place. That feels like higher risk: the site itself may be unimportant, but the master password potentially gives access to all of them. Still better than the passwords being accessible on disk, though having individual passwords would reduce the impact of any single leak.
I have this InputStick USB [1] doohickey that I keep with my keys. It shows up as a generic USB keyboard when plugged in but is also an encrypted Bluetooth dongle (pairing lets you configure a shared encryption key, so that only devices that know the key can use the stick, and only sticks with the key are recognized by the client apps). There's a plugin for Keepass2Android that I use to type passwords from my phone, and I use that to unlock my password manager (using a giant untypable passphrase). So entering monstrous passphrases is very easy... but only if you can unlock my phone and use biometrics to open Keepass2Android.
It really is dumb that phones can't just generically act as a USB HID device (without running custom kernels)
It's every two weeks. If your threat model involves being spied on over the shoulder for your master password while in a cafe you "just" need to ensure you enter your password in a safe location every two weeks.
What if you're demonstrating a problem with a login screen? And yes, I've had to do exactly that more than once.
I wouldn't do it with a particularly sensitive password (online banking etc) but there are enough passwords I use regularly for work purposes where it wouldn't be a significant risk for others to watch me type it in, certainly if the characters aren't revealed at all while typing. Though it would be nice if password fields could automatically detect that the screen is being shared and obscure the relayed pixels.
They're typically passwords that are only for testing accounts anyway, and that are known to the team members I'm sharing with.
But... it's easy to slip up now and then and forget that you're actually typing in a password while screen sharing, one that it's probably best not to have your co-workers know! Obviously the worst is your actual O/S password, as knowing that could potentially allow a co-worker access to other, quite sensitive passwords. But I'm not sure it's even possible to screen share your O/S login screen -- probably shouldn't be! It is a good argument for not re-using that password for any browser-based logins, though SSO policies tend to make that impossible, unfortunately. Mind you, I use a PIN for my O/S login screen, which isn't an option for browser-based logins.
Sadly I think security systems will have to accommodate the possibility that someone else can see your screen. And hope that they can't see your keyboard.
We used Notes at work until a few years ago and it still had it IIRC. I never stopped to think about why the pictures changed, that's interesting. Another annoying decision is that they prevented pasting passwords, which is very inconvenient when using a password manager. I ended up having to use one that simulated keystrokes.
The browser could use a different rendering convention for autopopulated passwords. For instance, it could render a solid black bar (no characters for the user to count) or maybe the phrase "autofilled", perhaps with a strange background color / rendering convention.
> Keystroke timing has been a concern for terminal I/O since the 1980s, when folks were using primitive encryption with stelnet and Kerberos.
I had a Visual Basic AI add-on in the 1990s that could work out who was typing at the keyboard from their typing pattern within a few minutes of typing, which kind of rendered the logon process moot.
Today, that can be applied to touchscreen logons by tying finger pressure patterns (i.e. the size and shape of finger contact with the touchscreen) to a user. By incorporating swipes or mouse movements in the desktop OS context, it's possible to have a security app which locks a system when someone is using a device and user account which is not theirs.
At the very least you can log every time one's GF/missus has gone through your phone.
No, the meaning of "moot" is clear. It simply means a question that at the current time has lost its relevance, or that at the current time has just become the only question that is relevant. Easy.
I don't know; I don't think we even had that much access to tune it, so to speak. This was VB4 (1991) back then, and I don't think it was a VB extension but an OCX (1994), which is OLE2/ActiveX technology.
I think I got it, from memory, from a 3.5" disc on the front of a UK computer mag, but it was a US company that wrote it. So there might be a copy of it in the Wayback Machine.
It was quite simple to use, so a lot of their AI decisions or tuning was probably already made for us in order for it to be put out there as an add-on.
But I have never seen anything like it since, and it seemed such a good idea in the scheme of things for computer security, with all the hacking that is in the press today.
That is not the only time you use passwords over ssh, e.g. I don't use a password to remote into my desktop from my laptop, but I do use one when using sudo on the desktop.
Actually this is something that is relevant to my interests.
I prefer to have sudo ask for a password when I'm physically in front of the machine, but not if it's a remote session (e.g. SSH from my laptop to my desktop).
Maybe the SSH agent on the client can re-authenticate to the server when requested?
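Something like that exists: the third-party pam_ssh_agent_auth module lets sudo accept a forwarded agent instead of a password. A sketch, assuming the module is installed (module and key-file paths vary by distro):

    # /etc/pam.d/sudo
    auth  sufficient  pam_ssh_agent_auth.so file=/etc/security/authorized_keys
    auth  required    pam_unix.so

    # /etc/sudoers -- let sudo see the forwarded agent socket
    Defaults env_keep += "SSH_AUTH_SOCK"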
Note that this is a bad idea from the security standpoint, as it requires SSH agent forwarding. Which means that, if the remote server is compromised, the attacker can use your SSH agent to log into other servers as you.
I was talking about the GPG agent, so that the key on the smart card can be used for sudo elevation on the remote host. This usually requires user interaction with the key, so just having access to the agent wouldn't do much. I don't think the ssh agent would help with this.
To your point, I wonder whether that consideration holds when the private key is held on an external device, like is the case with a YubiKey. I use that setup, and I can't add the key to the ssh agent.
$ ssh-add .ssh/id_yubikey_gpg.pub
Error loading key ".ssh/id_yubikey_gpg.pub": error in libcrypto
Don't these apps just use PAM? Since the initial complaint was about sudo, I'd figure pam / polkit would handle this, and apps would call those to obtain privilege elevation.
FWIW, you can probably configure sudo to use something other than passwords. On a Mac you can use the fingerprint reader for example, it's just disabled by default.
And your terminal may come with a password manager too, which would be unlocked with whatever means.
Again, on a Mac with iTerm you can do this with a fingerprint.
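For the sudo case it's a one-line PAM change -- a sketch (on recent macOS the supported place is /etc/pam.d/sudo_local, so OS updates don't clobber it):

    # /etc/pam.d/sudo -- add above the existing auth lines
    auth  sufficient  pam_tid.so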
Same way you'd get the password? It's either a physical or virtual server you more or less control, in which case the siblings' answers apply. Otherwise, it's probably some kind of image or something someone else controls, in which case bake in or send them your public key or certificate (if you've got colleagues in the same situation as yourself).
The password needs to be generated somehow, right? Assuming you don't use a pre-baked password that repeats across machines, you could replace the password generation and retrieval with deploying a public key instead.
The remote system must generate its own SSH private key; you could use that opportunity to deploy the authorized keys before sealing the system as read-only.
No, it's assuming a device running a ssh daemon with something mounted rw or user-modifiable[0] that can hold an authorized_keys file. A NetBSD embedded board that configures sshd with `AuthorizedKeysFile /sdcard/config/authorized_keys` would be fine, for instance.
[0] For example, you could let the user write their key to an SD card and then mount it ro on the device.
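A sketch of that flow, with hypothetical paths:

    $ mkdir -p /mnt/sdcard/config
    $ cat alice_id_ed25519.pub >> /mnt/sdcard/config/authorized_keys

    # then in the device's sshd_config, before sealing it read-only:
    #   AuthorizedKeysFile /sdcard/config/authorized_keys
    #   PasswordAuthentication no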
"One time, on first use, where absolutely necessary, and changing password immediately afterwards" seems a reasonable interpretation of "approximately never".
I don't know. I come across old APs/routers where I've forgotten the login credentials and find myself hard-resetting them with some regularity, at a rate that's above "approximately never" anyway.
I'm presuming the hard reset is to a factory-assigned password.
Is that uniform across all devices, or device-specific?
Practice I've seen for some years now is to have a label on the device with admin/root password, which is presumably neither uniform across devices nor trivially-determinable from device characteristics (e.g., MAC address, sequential serial numbers, etc.).
I'd still consider that practice reasonably tolerable, though you should be keeping better tabs on assets and credentials.
Any device where you don't control the initial firmware and the firmware doesn't support ssh keys: APs/routers (consumer, commercial, and industrial grade), shared hosting with ssh but limited features (e.g. GoDaddy)...
For physical devices, you can usually connect them via a dedicated Ethernet cable right to your laptop, and set the initial password. They likely don't have the right network settings anyway to drop them right into the bigger LAN.
Otherwise I think you just prepare a certificate ahead of time, and scp it during the first connection, then immediately disable password-based access, or at least change the password. Any passive eavesdropping still needs to defeat the encryption somehow (no feasible ways are known now), even having seen the initial exchange.
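A sketch of that bootstrap (the config path and reload command are typical Linux defaults and vary by platform):

    $ scp ~/.ssh/id_ed25519.pub user@host:
    $ ssh user@host        # the last password entry you should need
    host$ mkdir -p ~/.ssh && cat id_ed25519.pub >> ~/.ssh/authorized_keys
    host$ sudo sed -i 's/^#*PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
    host$ sudo systemctl reload sshd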
If you have an active MITM attack, all bets are off, because the attacker could even grab the image with the pre-baked key you're sending, and copy or change the key. If this is not possible, then the pre-baked key would help. If your security is really important, don't use the cheap GoDaddy offerings with limited SSH.
Does anyone know any SSH clients that support line-buffering of input?
I.e. where what you type doesn't get transmitted until you hit return (or click send)?
I had one of these clients (but for telnet) back in my more active MUD gaming days, but haven't seen it in the few SSH clients I've used since... I always thought it would be a good defense against SSH keystroke timing leakage, and potentially superior to the 20ms-delay approach mentioned in this article, at least for some usage scenarios.
(Although now that I think about it, ideally you might want it to also transmit when someone hits tab so you could still have linux shell autocomplete...)
That would only work if the ssh client could know exactly what was going on in the user session. Like, how would that work if I were editing a file with vim? Or even just typing a command into the shell (where I might need to backtrack and edit the command)?
That's more a choice a current-day shell makes for you, wanting to control the editing experience. Run `cat` and you're in line-buffered (canonical) mode: note how your arrow keys just input line noise, and watch the cat process with strace if you want to confirm it really receives the whole line in one read syscall.
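On Linux you can watch this with strace (output abridged; attaching to another process may need sudo). Type "hello world" and return in the cat terminal, and the whole line arrives in one read:

    # terminal 1
    $ cat
    # terminal 2
    $ strace -e trace=read -p "$(pgrep -n cat)"
    read(0, "hello world\n", 131072) = 12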
Not exactly what I was looking for in terms of the security side of things but perhaps more sophisticated in terms of the editing handling. Cool, thanks for the reply!
Remote-shell protocols traditionally work by conveying a byte-stream from the server to the client, to be interpreted by the client's terminal. (This includes TELNET, RLOGIN, and SSH.) Mosh works differently and at a different layer. With Mosh, the server and client both maintain a snapshot of the current screen state. The problem becomes one of state-synchronization: getting the client to the most recent server-side screen as efficiently as possible.
This is accomplished using a new protocol called the State Synchronization Protocol, for which Mosh is the first application. SSP runs over UDP, synchronizing the state of any object from one host to another. Datagrams are encrypted and authenticated using AES-128 in OCB3 mode. ...
Roaming with SSP becomes easy: the client sends datagrams to the server with increasing sequence numbers, including a "heartbeat" at least once every three seconds.
...
Instant local echo and line editing
The other major benefit of working at the terminal-emulation layer is that the Mosh client is free to scribble on the local screen without lasting consequence. We use this to implement intelligent local echo. The client runs a predictive model in the background of the server's behavior, hypothesizing that each keystroke will be echoed at the cursor location and that the backspace and left- and right-arrow keys will have their traditional effect. But only when a prediction is confirmed by the server are these effects actually shown to the user. (In addition, by default predictions are only displayed on high-delay connections or during a network “glitch.”) Predictions are done in epochs: when the user does something that might alter the echo behavior — like hit ESC or carriage return or an up- or down-arrow — Mosh goes back into making background predictions until a prediction from the new batch can be confirmed as correct.
Thus, unlike previous attempts at local echo with TELNET and RLOGIN, Mosh's local echo can be used everywhere, even in full-screen programs like emacs and vi.
This reminds me of professional Bridge. They split the teams with a wall and pass their cards through a window at the same time to prevent communication through timing.
As for actually using bridge in a job interview: I did, once. In bridge there's a rule that if your partner gives you a hint outside the bidding, you must take the opposite approach if logically possible. It is called "Active Ethics". I had an interviewer try to lead me by the nose to the answer way too hard in a debugging interview. So I'd stop and check EVERYTHING I could think of first before doing what he said. I told him what I had been doing after the interview, and to look up Active Ethics if he needed further explanation.
While I admire your ethics, I feel like a lot of technical job interviews are structured such that you're supposed to actively collaborate with the interviewer. The interviewer is allowed to give you hints or suggestions, and they're very interested in how well the candidate takes hints.
And sometimes the hint can be a trick! I recently did an interview where the interviewer asked if I should use a shortcut to compare two strings, one that assumed there's only one way to normalize a string. I almost fell for it, but then I hesitated and mentioned that I was concerned about some languages where that assumption wouldn't hold. They agreed and were happy that I chose the safer approach.
There's a difference between collaborate and get clubbed over the head with the answer.
This guy was doing the latter, and it was meant to be an interview to test raw debugging/diagnostic skills. If I just followed the breadcrumbs, I'd show no real skill.
In this interview I wasn't concerned about that. If you are looking to see if someone understands Linux by testing diagnostic skill, if they are coming up with 3-4 different failures to check for every step... They are doing their job.
It probably wasn't the situation in your case, but I often give straightforward hints if the candidate is struggling with something that I don't want them to spend time on so we can get to the significant material.
E.g. in an algorithms interview they get stuck on an unrelated python issue (many people interview in python but don't use it day to day), or in a system design interview they get stuck on designing extra-credit subsystem C when they haven't finished subsystems A and B.
If they aren't getting it after a couple hints, I'll just tell them the answer or tell them to come back to it later.
Anyway, I would be very careful if you aren't going where the interviewer is pointing you. If you think it's a trick or you want to practice Active Ethics, then I would call that out in the moment since you might be messing up the flow of the interview at best and come off as hard to work with at worst.
Oh, I know. Attackers will continue to attack. In my opinion, professional bridge is a doomed game. Decades of added steps to prevent cheating overcomplicate an already very difficult game, and determined, smart people are still very successful at bypassing them anyway.
I still want to learn to play at a reasonable level though, I'd rather waste my time on bridge than chess. But it needs to be home games, and there's no way I'm going to find the partners when spades and bid whist are out there and easy to learn.
As someone who has played in the Grand National Teams - Flight C. :)
It has problems. Cheating is a huge issue, as is sportsmanship. If you know bridge: I used to play Precision with an 11-13 1NT. When people saw our convention card, they'd often ask to swap tables with other teammates. (Clearly not legal.)
When I was playing on a team where all 4 of us played the same convention card, those people made me laugh so hard.
Cheaters will cheat. I played clean, I had fun. I haven't had time to play for a while. But man, bridge is a funny little world.
Surely then you're just in a game of bluff with a Sicilian... i.e. you just signal your partner to do the opposite and make sure it's caught, resulting in them taking the action you intended?
Remember, partner has an ETHICAL issue. Partner must work AGAINST you. If they can infer that you might mean something other than what you are signalling, they must take that into account.
I've been in the situation in game a few times. Thankfully, my decisions were pretty cut and dry.
You don't have to do non-obvious things. If you were always going to accept any invitation to game... you are going to accept even if partner looks happy. What I wouldn't do is throw out a slam-exploring bid if I was on the fence about it.
If I was absolute top of range... I'd go ahead and make the bid. Because there is nothing that would change based on partner's actions.
If people are just expected to be human state machines and are penalized for not doing the prescribed automata, then you might as well flip a coin for the trophy and skip the game.
This is like saying a catcher can't signal to a pitcher.
Information-passing is a human skill that adds a dimension to the game. Let the best win.
Yeah, I lost all interest in Bridge when I found out the people who play it hate 100% of the interesting parts and had outlawed them, and that every time someone comes up with another cool approach, they outlaw that, too.
Initially learning the game it was like “oh wow, that feature of the game has some really cool implications! This is amazing!” but then reading about how real bridge tournaments run, yeah, they crafted the rules to remove every single one of those cool implications.
[EDIT] to be fair, the basic rules would also result in a terrible game as soon as people got too good at exploiting them. I just think they’ve managed to find another way to ruin the game while keeping it technically playable.
The extent to which this just seems to be openly true is startling. Some games respond to new strategies that are particularly effective by embracing them and setting aside older approaches. Some games respond by rebalancing and changing rules to keep the game working well. Bridge just bans the strategies themselves (eg, https://en.wikipedia.org/wiki/Strong_pass and https://en.wikipedia.org/wiki/Highly_unusual_method ).
Strong Pass there's very good reasons to outlaw. It is simply too destructive a method to yield an interesting game.
HUMs also tend to end up being very destructive to the opponents, because they really don't understand the full implications of the bid. And may not have discussed how to bid over it. Heck I've run into this with people playing over a strong club system, and they haven't discussed what 2C means.
In the end... many games end up with a few rules to make them interesting. I will not defend the ACBL here, I think the WBC is pretty much on the mark last I watched.
> Information-passing is a human skill that adds a dimension to the game.
Nah. You choose the game that you prefer. You can play the game where you cheat all the time, but don't play it with people who like bridge without asking them first.
Bridge has a built in channel for communication that has very limited bandwidth. The bidding conventions are about maximizing how much you communicate with limited symbols and almost no attempt at secrecy. Effectively it'd turn the game into one where players play with their hands face up, because that's the most effective way to communicate. That doesn't sound very interesting to me.
Wow, looking at this with a red-team cap on, there is so much human "messiness" to exploit here. It shouldn't be too hard to be able to pass a bit or two of information.
It might be interesting for a security person to try to come up with ways to hypothetically assure a trustworthy bridge game, assuming no limits on costs or inconvenience (i.e. if a trustworthy bridge game takes three months to play, or requires launching a satellite into orbit, so be it.)
Bridge is a really weird game. It's all about secret communication with your partner, but it's not allowed to be secret. You can communicate, but no communication! Very odd.
Bridge tournament rules are crafted as if everyone involved wishes they were playing a different game, but are for some reason stuck with the basic rules of Bridge. There’s a pile of rules about how you aren’t allowed to do all kinds of things that the basic rules would enable.
It’s like if baseball couldn’t change field size or mound height or whatever and just had to add lots of rules about how you aren’t allowed to throw too fast or hit too far etc., but kept the physical reality of the game the same.
It seems there is still a possibility for passing information. For example, you can shove the little table across the barrier, or slowly slide it to indicate something. That's how the guy in the upper right passed it the first and second time.
There are endless ways to pass information. Notice the sibling comment about "active ethics." It's the game sort of saying "there's really no fool-proof way to keep you from cheating, so please just be a good person. Even to the point that if you're put into a situation where you could accidentally cheat, you should intentionally play non-optimally."
>This weakness was outlined in a 2001 paper entitled "Timing analysis of keystrokes and timing attacks on SSH" [PDF] which looked specifically at the timing-based attack:
>In this paper we study users' keyboard dynamics and show that the timing information of keystrokes does leak information about the key sequences typed. Through more detailed analysis we show that the timing information leaks about 1 bit of information about the content per keystroke pair. Because the entropy of passwords is only 4-8 bits per character, this 1 bit per keystroke pair information can reveal significant information about the content typed.
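Rough arithmetic on the quoted figures:

    chars = 8                    # an 8-character password
    pairs = chars - 1            # 7 observable keystroke pairs
    leaked = pairs * 1           # ~1 bit per pair, per the paper
    entropy = chars * 4          # low end: 4 bits per character
    print(leaked, "of", entropy, "bits")  # timing leaks roughly a fifth
                                          # of the entropy, cutting brute
                                          # force work by a factor of 2**7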
I thought this was fixed a long time ago -- I recall a fix pushed around 2012. I'm totally shocked this has not been previously addressed.
Some day we'll have to use packets pre-filled with random data to hide our keystrokes in. Not quite steganography, but close. It could also be used to make traffic analysis harder, or even impossible.
The NSA and others have done this for decades. Run the line at full utilization, fully encrypted, and just put data on when you need to. Not too hard, when your lines are dedicated.
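A toy sketch of that idea, minus crypto and framing (a real protocol would prepend a length field before encrypting so the receiver can strip the padding):

    import os, queue, time

    FRAME, INTERVAL = 64, 0.02           # fixed size, fixed cadence
    outbox: queue.Queue = queue.Queue()  # real data waits here

    def pump(send):
        # emit one FRAME-sized packet every INTERVAL, padded with random
        # bytes when there's no real data -- an observer sees a constant,
        # featureless stream either way
        while True:
            try:
                data = outbox.get_nowait()[:FRAME]
            except queue.Empty:
                data = b""
            send(data + os.urandom(FRAME - len(data)))
            time.sleep(INTERVAL)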
You could do steganography with this. There's work on getting a language model to re-word an innocuous cover-text by using a minimum-entropy, key-derived distortion of the probability distribution that is used to sample words. Then, if you use the same model on the receiver side, and have the key, you can decode the covertext back into the ciphertext. This also works with images, too. https://openreview.net/forum?id=HQ67mj5rJdR
Reminds me of numbers stations. Constantly broadcasting numbers around the world that mean something to someone . . . whenever they happen to mean something to someone. With full knowledge that the world's intelligence services (among others) are constantly listening too.
This makes me wonder about newer terminal emulators on macOS like Warp [1], and whether they're, for example, taking all input locally and then sending it to the remote host in a single blob. I imagine doing so could break any sort of raw-mode input being done on the remote host, but I'd also imagine that's a detectable situation in which you could switch to a raw keystroke feed as well.
In general once you’re connecting over SSH the connection itself is always in raw mode and then the remote host deals with its pty normally (which can be in line or raw mode). Terminals with special shell integrations usually need them installed on the remote host too (some have support that does that somewhat transparently though).
This is why mosh can have better behaviour than pure SSH over high latency connections. However this feature isn’t going to apply to mosh.
I wonder if SSH can honor line-buffered mode. It should be able to detect it, but then if it incorrectly switches to line buffering then random stuff might deadlock.
It's really hard for me to imagine that an app that markets "AI for your terminal" is going to be "more secure and private" than some standard Unix tool.
Perhaps some very specific example of a security feature (such as protecting against timing attacks) could be protected against in a new tool, and not in the older more standard one. But it seems far more likely that many other security features would get forgotten in the newer tool, and by adding "AI" so many more attack vectors would be added.
It's honestly hard to even believe in the privacy claims of warp. Almost all NLP tools in today's age seem to fall towards cloud solutions, which almost immediately makes that likelihood of privacy close to nil.
An eavesdropper cannot see the content of your keystrokes, but (prior to this feature) they could see when each keystroke was sent. If you know the target's typing patterns, you could use that data to recover the content. You could collect the target's typing patterns by getting them to type into a website you control with a JavaScript-enabled browser, or from an audio recording of their typing. (Some online streamers have been hacked as of late using AI models trained to steal their passwords using the sounds of them typing on their keyboards.)
> Some online streamers have been hacked as of late using AI models trained to steal their passwords using the sounds of them typing on their keyboards
do you have any sources for that?
I've only seen this mentioned from research results recently but no real world exploitation reports.
Years ago when I saw a paper on that topic, I tried recording my own keyboard and trained a ML model to classify keystrokes. I used a SVM, to give you an idea of how long ago this was.
I got to 90% accuracy extremely quickly. The "guessed" keystrokes had errors but they were close enough to tell exactly what I was typing.
If I could do that as an amateur in a few hours of coding with no advanced signal processing and with the first SVM architecture I tried, it must be relatively easy to learn / classify.
Also, if the goal was to guess a password you wouldn't necessarily need it to be really accurate. Just narrowing the search space could get you close enough that a brute force attack could do the rest.
It's quite good at decoding my own typing, although I am a quite aggressive typist and that may help. I haven't tried it on others, though (honest, officer).
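The pipeline really is short. A hedged sketch with scikit-learn; the random "clips" are stand-ins for real labeled recordings, which you'd have to capture yourself:

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    def features(clip, n_bins=64):
        # coarse magnitude-spectrum bins: crude, but enough to show the idea
        spectrum = np.abs(np.fft.rfft(clip))
        return np.array([b.mean() for b in np.array_split(spectrum, n_bins)])

    rng = np.random.default_rng(0)
    keys = list("abcdefgh")  # pretend we recorded 50 presses of each key
    X = np.array([features(rng.normal(loc=i, size=2048))
                  for i, _ in enumerate(keys) for _ in range(50)])
    y = np.array([k for k in keys for _ in range(50)])

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = SVC(kernel="rbf").fit(X_tr, y_tr)
    print("held-out accuracy:", clf.score(X_te, y_te))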
I didn't find an article about actual hacks carried out with that technique, but here’s a HN discussion [1] from this month about a paper on the topic.
From that discussion it sounds like you need to train on data captured from the actual target. Same physical keyboard in the same physical space with the same typer.
Pretty wild despite those specific conditions. Very interested to know if people have actually been attacked in the wild with this and if the attackers were able to generalize it down to just make and model of a keyboard, or if they could gather enough data from a stream.
IIRC there is at least one paper, maybe around 2005, where they were able to determine what was being typed in an encrypted ssh session, using packet timings correlated to collected human typing statistics. Looks like this adds noise to prevent that.
Basically you can analyze typing speed to make some assumptions
For example, since users tend to type their passwords quicker than other things, you could see how many keystrokes were sent in a burst and guess the user's password length when they sudo something.
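A sketch of that burst heuristic, where the inputs are packet arrival times in seconds:

    def bursts(timestamps, gap=0.5):
        # split sorted packet timestamps wherever the inter-packet
        # gap exceeds `gap` seconds
        groups, current = [], [timestamps[0]]
        for t in timestamps[1:]:
            if t - current[-1] > gap:
                groups.append(current)
                current = []
            current.append(t)
        groups.append(current)
        return groups

    # a fast 8-packet burst right after a sudo prompt suggests an
    # 8-keystroke password (7 characters plus return)
    times = [0.00, 0.12, 0.25, 0.33, 0.48, 0.55, 0.70, 0.81, 3.0]
    print([len(g) for g in bursts(times)])  # -> [8, 1]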
> Latency, particularly unpredictable latency, is one of the greatest stressors in software development work.
It took me a second, but I'm pretty sure the comment above is referring to latency in the user experience; namely, the delay between a keypress and perceived result. [1]
FWIW, tools like Mosh [2] go a long way towards reducing perceived latency. Mosh displays the user keypress as soon as it is registered locally (which happens without a perceptible delay). To indicate that it has not round-tripped, the character is shown in a washed-out color, last I checked. (Or maybe underlined?) After the round-trip completes, the character is displayed normally.
[1] If your greatest stressor in software development is the latency of your keypresses, you sound very lucky to me.
I know some people do network monitoring for hands-on-keyboard shells (presumably) by measuring packet timing, I wonder if this will mess with those detections and if so by how much.
I hope that kind of thing goes the way of other corporate efforts to break/backdoor encryption for the sake of "security". IMO, it's really the wrong way to go about security. Sure it would be nice to know if some automated script is being used to log into a machine, but better design can mean that information isn't important.
This has nothing to do with breaking encryption, and of all the sketchy corporate surveillance tooling that's deployed for security purposes (to say nothing of HR purposes), monitoring for shells on the network seems about as benign as it comes.
It's only benign if we don't see new policies that say "everyone must disable keystroke obfuscation so we can still spy on traffic".
If a company's security strategy relies on the ability to tell if a given stream of encrypted bytes is shell traffic, and that it can be fooled by timing obfuscation, they need a better strategy. Attackers won't care to follow a "no timing obfuscation" policy.
I've definitely encountered security teams that thrash between different broken policies. For instance, one employer simultaneously had these two policies:
- All developer laptops must be able to log into prod
- You must type a 2FA pin each time you access the test environment, and that includes nightly automation scripts.
I imagine they'd love to run a thing that detected and blocked scripted access to the test environment, but allowed it in production.
(In case it isn't obvious, I agree that corporate security teams shouldn't use strange network monitoring heuristics to interfere with common engineering and ops workflows.)
Network monitoring for unauthorized/unusual access. Reading more into how this works, I don't think it would actually change anything; you can probably still discern scripted vs. manual shells, it would just be a bit harder.
An IPv4 header is 20 bytes, and an IPv6 header is 40 bytes. A minimal TCP header is another 20 bytes. Therefore, if you want to send a single byte over TCP, you need 41 bytes on the wire over TCP/IPv4, or 61 bytes over TCP/IPv6.
Let's call that 64 bytes/packet for a small packet.
For comparison, a copper-wire non-broadband modem in the early '90s ran at 33.6kbps (kilobits/sec) which worked out at 4.1KiB/s. So a packet every 20ms wouldn't even saturate 30-year old modem tech. And believe me, that was slooooooow!
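Checking that arithmetic:

    ipv4, ipv6, tcp = 20, 40, 20             # minimum header sizes, bytes
    print(ipv4 + tcp + 1, ipv6 + tcp + 1)    # 41 and 61 bytes per keystroke

    pkt = 64                 # generous small-packet estimate
    rate = 1 / 0.020         # one packet per 20 ms = 50 pkt/s
    modem = 33_600 / 8       # 33.6 kbit/s in bytes/s
    print(pkt * rate, "vs", modem)  # 3200.0 vs 4200.0 B/s: it fits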
I went from 2400 to 14.4; 9600 was the limit before trellis modulation, but IIRC it jumped from 14.4 to 33.6 rather quickly.
[edit]
After some quick googling, 33.6 wasn't standardized until 1996 (compare to 14.4 in 1991), but the manufacturers released modems ahead of the V.34 standard with DSPs so that they could be upgraded to the standard when it was available.
14.4 did catch on almost overnight in the early 90s though as the modems were no more expensive (and sometimes cheaper) than slower modems.
I had an Atari 800, with an MPP-1000c modem. Those babies could, when connected to another modem of the same model, push the speed up to 450 baud. They were odd devices, connecting to the computer through one of its joystick ports.
The 56k was only in one direction, made possible by having the ISP modem on an ISDN PRI. In that configuration the only ADC in the fast direction is the high resolution one in the modem.
But slow connections forced people to use their brains. Around 2002 I did WFH over a link with some 30 kbit/s of practical speed. My X11 desktop was shared pixel-accurate with decent response times over VNC.
20 years later if someone shares their code over Google Meet I see some blurred stuff. And red font takes 3 seconds to become clear.
I didn't read the code, but as I understood it, it was more like a frame rule ("imagine a bus stop..."), where your keystrokes will be delayed/buffered for a few milliseconds and then sent in regular 20ms interval bursts
TCP has an overhead of 20 bytes. I'm not sure how much openssh adds, but if it's just a keystroke I can't imagine it'd be over 64 bytes. Add those together and multiply by 50 packets per second (20ms between each packet), and it works out to a whopping 4.2kB/s.
Does mosh do something similar? It seems like that'd be way more effective in a protocol that's already much more tolerant of random latency spikes already.
Though I'm curious how the project keeps working with CVS in $year. I wonder if everybody just uses git cvsimport and forgets about CVS most of the time.
All of these reasons boil down to "if it ain't broke" and "that's what we're used to".
Switching VCS for a project of this size is always complicated and OpenBSD devs are famously "old school" and conservative with their software choices.
I used to use CVS before switching to SVN and later DVCSes like Mercurial and Git. The claim that "it is unlikely that any alternative versioning system would improve the developer's productivity or quality" is absolutely laughable IMO.
This is especially true nowadays where CVS support in modern tooling, if it even exists, is usually legacy and poorly maintained.
> All of these reasons boil down to "if it ain't broke" and "that's what we're used to".
"Works for us". Which is a pretty good argument.
> The claim that "it is unlikely that any alternative versioning system would improve the developer's productivity or quality" is absolutely laughable IMO.
Why is it laughable exactly? I mean for me I can't use CVS due to the lack of atomic directory checkins, but if they don't need them or they have already a system in place which may even better tie with their development/release style than any generic VCS could, why bother?
> This is especially true nowadays where CVS support in modern tooling, if it even exists, is usually legacy and poorly maintained.
The explanation all makes sense. But the key line of “we all know cvs” is effectively exclusionary to all the other developers in the world who don’t use cvs. At some point they will need new talent which will be harder to get.
If you know git or any other version control then using CVS really isn't that hard; many commands are similar.
And everything is exclusionary to someone. Pure git email workflow? Exclusionary to people who find it hard/difficult, or use email in a different way (e.g. only gmail web UI). GitHub Pull Request workflow? Exclusionary to people who don't want to run non-free JS, or don't want to use "Micro$oft GitHub", or don't like using web interfaces.
Accusing one of the first pioneers of Open Source of being "exclusionary" has got to be a joke.
In many ways, they were there first.
I really don't understand why there's such a tendency to demand "monolithic social networks" even in open source software development. Connecting to people is great when feelings are mutual, but we don't even have a right to be left alone without being accused of being anti-social?
Based on that rationale, anyone using Typescript is being exclusionary to developers who don't know Typescript.
They picked a system that suits the projects work flow, is well documented and a relatively low learning curve for anyone interested. I doubt cvs would be the main turnoff for someone looking to be an OpenBSD developer.
That argument would be more analogous if you'd picked, say, CoffeeScript. The point is that it's something that used to be reasonably popular but that, for various reasons, the vast majority of the world has moved on from.
CVS isn't "hard to learn", devs worthy enough to make meaningful contributions to OpenBSD can probably make sense of it in less than a day. It's just... extremely anti-ergonomic given the other options we have today.
It's a fairly small project and doesn't have too many costs, relatively speaking.
Previously the main income was from selling CD sets (they intentionally limited the web download options), but they stopped doing that about 15 years ago or so.
I recommend you do set up key authentication. You'll get more convenient logins and better security. This page should document how to do it: https://www.ssh.com/academy/ssh/copy-id
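The short version, if your client ships ssh-copy-id:

    $ ssh-keygen -t ed25519
    $ ssh-copy-id -i ~/.ssh/id_ed25519.pub user@host
    $ ssh user@host        # subsequent logins use the key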
A central aim of SSH is confidentiality. There's a lot besides passwords that you can deduce from traffic analysis, especially if you can correlate it with other observed events.
A service may provision an account with a provided ssh public key, so that you never log in with a password, even once.
It's sort of a chicken-egg problem though, presumably you do have a password somewhere along the line, such as in a portal where you created your account and uploaded your public key.
I'd say there are more valuable things you can do to improve security than solving the problem of "having to ssh in with a password one time to upload a key"
Maybe. Not having a password on the server eliminates all the risks associated with weak or leaked passwords. And then you can configure SSH to reject password logins altogether. It's not an insignificant benefit.
I'd say there are more valuable things you can do to improve security than solving the problem of "having to ssh in with a password one time to upload a key, then updating the config to reject password logins".
SSH password login is secure. Keys are preferred since you can't have asdf1234 as a key, but if you as the initial person to set up the server are the only one allowed password login and use a decent password, you're fine
...Key distribution is to encryption systems as cache invalidation is to computer science. Both are subforms of the ur-problem of signal propagation, which itself stems from the physical principle of causality.
Only way through it is to shut up and do it, sadly.
The implementation details of doing it are often either A) have physical possession of the computer, and do the initial insecure setup within a "secure realm" you control, or B) redefine your "secure realm" to include the hardware being in someone else's possession, and do what they tell you and pray they are trustworthy.
something-you-know auth is generally less work than something-you-have auth, since you need to ensure you always have the key handy whenever you would want to log on.
When I started reading this sentence I thought you had them backwards, because I was thinking of "something I have" as a public/private key pair (for an arguable definition of "have"), so when I hit your comma the confusion was fixed. But now I'm not so sure I was wrong.
I hate having to type my password multiple times in the morning for work, but only partly because of the 2fa on my phone that goes with it. If my computer could just detect my phone being nearby (indicating my presence) that would be great. Then something-I-have would actually out-convenient something-I-know.
Don't push-button-start cars kinda do this with the key fob? Why are computers lagging behind cars in tech? Usually it's the other way around.