
What is the threat that this mitigates?



An eavesdropper cannot see the content of your keystrokes, but (prior to this feature) they could see when each keystroke was sent. If you know the target's typing patterns, you can use that timing data to recover what they typed. You could collect the target's typing patterns by getting them to type into a website you control in a JavaScript-enabled browser, or from an audio recording of their typing. (Some online streamers have recently been hacked with AI models trained to recover their passwords from the sound of their typing.)
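To make that concrete, here is a rough sketch (assuming you already have a packet capture of the interactive SSH session and have scapy installed; the filename is made up) of how an eavesdropper could recover inter-keystroke timings from packet timestamps alone:

    from scapy.all import rdpcap, TCP

    # Each small client->server packet in an interactive SSH session roughly
    # corresponds to one keystroke; its capture timestamp is the "when".
    packets = rdpcap("captured_session.pcap")
    times = [float(p.time) for p in packets
             if TCP in p and p[TCP].dport == 22 and len(p[TCP].payload) > 0]

    # Inter-keystroke intervals: exactly the signal this feature is meant to hide.
    intervals = [b - a for a, b in zip(times, times[1:])]
    print(intervals)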


> Some online streamers have recently been hacked with AI models trained to recover their passwords from the sound of their typing

Do you have any sources for that?

I've only seen this mentioned in recent research results, but not in any real-world exploitation reports.

https://www.bleepingcomputer.com/news/security/new-acoustic-...


Years ago, when I saw a paper on that topic, I tried recording my own keyboard and trained an ML model to classify keystrokes. I used an SVM, to give you an idea of how long ago this was.

I got to 90% accuracy extremely quickly. The "guessed" keystrokes had errors, but they were close enough to tell exactly what I was typing.

If I could do that as an amateur, in a few hours of coding, with no advanced signal processing, and with the first SVM architecture I tried, it must be relatively easy to learn/classify.
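Roughly, the pipeline looked like this (a from-memory sketch, not my original code; the audio here is random stand-in data, so in this form it only demonstrates the plumbing, not real accuracy):

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    # Stand-in data: in the real experiment, `clips` are equal-length audio
    # snippets of isolated keystrokes and `labels` are the keys that were pressed.
    rng = np.random.default_rng(0)
    clips = rng.normal(size=(500, 4410))    # 500 fake 0.1 s clips at 44.1 kHz
    labels = rng.integers(0, 26, size=500)  # fake key labels for 'a'..'z'

    def features(clip):
        # magnitude spectrum of the keystroke as a simple feature vector
        return np.abs(np.fft.rfft(clip))

    X = np.array([features(c) for c in clips])
    X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.2)

    clf = SVC(kernel="rbf", C=10.0)
    clf.fit(X_train, y_train)
    print("accuracy:", clf.score(X_test, y_test))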


Also, if the goal were to guess a password, you wouldn't necessarily need it to be very accurate. Just narrowing the search space could get you close enough that a brute-force attack could do the rest.
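Rough, made-up numbers: if the model narrows each character of an 8-character password to its top three candidates instead of ~95 printable keys, the remaining brute-force search is tiny:

    full_space = 95 ** 8   # ~6.6e15 candidates with no side channel
    narrowed = 3 ** 8      # 6,561 candidates if each key is narrowed to 3 guesses
    print(full_space, narrowed, full_space // narrowed)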


https://github.com/ggerganov/kbd-audio

It's quite good at decoding my own typing, although I am quite an aggressive typist and that may help. I haven't tried it on others, though (honest, officer).


I gave that a bunch of tries over the last half hour, with longer and longer training data, and it never got better than random chance.


I didn't find an article about actual hacks carried out with that technique, but here's an HN discussion [1] from this month about a paper on the topic.

From that discussion, it sounds like you need to train on data captured from the actual target: the same physical keyboard, in the same physical space, with the same typist.

Pretty wild despite those specific conditions. I'd be very interested to know whether people have actually been attacked in the wild with this, and whether the attackers were able to generalize it down to just the make and model of a keyboard, or could gather enough data from a stream.

[1]: https://news.ycombinator.com/item?id=37013704


IIRC there is at least one paper, maybe from around 2005, where they were able to determine what was being typed in an encrypted SSH session, using packet timings correlated with collected human typing statistics. It looks like this adds noise to prevent that.
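The rough idea of the countermeasure (a toy sketch, not OpenSSH's actual implementation; the interval and packet contents are invented) is to decouple packet times from key-press times by sending on a fixed clock and filling idle ticks with dummy traffic:

    import queue, time

    INTERVAL = 0.02          # hypothetical 20 ms send clock
    pending = queue.Queue()  # real keystrokes queued by the terminal

    def send(data):
        # placeholder for writing an encrypted SSH packet to the socket
        print(f"{time.monotonic():.3f} {data!r}")

    def obfuscated_sender(ticks=20):
        for _ in range(ticks):
            time.sleep(INTERVAL)
            try:
                send(pending.get_nowait())  # real keystroke, sent on the tick
            except queue.Empty:
                send(b"CHAFF")              # filler that looks identical on the wire

    pending.put(b"a"); pending.put(b"b")
    obfuscated_sender()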


Alternatively, use the SSH compression option, which works on blocks of data.


The original exploit concern was the use of the Viterbi algorithm.

http://www.cs.berkeley.edu/~dawnsong/papers/ssh-timing.pdf [2001]

The addition of ML has greatly improved the accuracy of audio decoding; use a silent keyboard in any insecure physical location.

https://arstechnica.com/gadgets/2023/08/type-softly-research...


Basically, you can analyze typing speed to make some assumptions.

For example, since users tend to type their passwords more quickly than other text, you could look at how many keystrokes were sent in a burst and guess the length of the user's password when they sudo something.
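A toy illustration of that idea (the timestamps and threshold are invented): split the packet timeline into bursts and count keystrokes per burst.

    # Hypothetical client->server packet timestamps, in seconds
    timestamps = [0.00, 0.12, 0.21, 0.33, 0.41, 0.52, 0.60, 0.71, 3.10]
    BURST_GAP = 1.0  # gaps longer than this end a burst

    bursts, current = [], [timestamps[0]]
    for prev, t in zip(timestamps, timestamps[1:]):
        if t - prev > BURST_GAP:
            bursts.append(current)
            current = []
        current.append(t)
    bursts.append(current)

    # An 8-keystroke burst right after a sudo prompt suggests an 8-character password.
    print([len(b) for b in bursts])  # -> [8, 1]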


A paper came out recently that uses keystroke timings+deep learning to fingerprint users (and authenticate them in this case): https://www.usenix.org/system/files/usenixsecurity23-piet.pd...

In this specific paper's use case, it's not a security threat, but you can definitely cast it as information leakage.


The timing of keystrokes leaks information. Here's the 2001 paper that describes the problem:

https://www.usenix.org/legacy/events/sec01/full_papers/song/...



