What happens when you press a key in your terminal? (jvns.ca)
408 points by robenkleene on July 21, 2022 | 102 comments


> I believe the reason cat gets interrupted when we press Ctrl+C is that the Linux kernel on the server side receives this \x03 character, recognizes that it means “interrupt”, and then sends a SIGINT to the process that owns the pseudoterminal’s process group. So it’s handled in the kernel and not in userspace.

Also interestingly, the use of \x03 for this purpose is a default, but it's not hardcoded. You can change it with the stty command.

For example, if you run

stty intr '^X'

then your interrupt character sequence will be Ctrl+X instead of Ctrl+C (!).

In order to make this change, the stty program actually has to call into the kernel with an ioctl call (TCSETS or a related ioctl, for "terminal control set settings").
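
A minimal C sketch of what that looks like, using the portable termios wrappers (which on Linux issue the TCGETS/TCSETS ioctls under the hood); error handling kept to a minimum:

    /* Sketch: set the interrupt character to Ctrl+X, roughly what
     * "stty intr '^X'" does. */
    #include <termios.h>
    #include <unistd.h>

    int main(void) {
        struct termios t;
        if (tcgetattr(STDIN_FILENO, &t) == -1)          /* TCGETS */
            return 1;
        t.c_cc[VINTR] = 'X' & 0x1f;                     /* Ctrl+X = 0x18 */
        if (tcsetattr(STDIN_FILENO, TCSANOW, &t) == -1) /* TCSETS */
            return 1;
        return 0;
    }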

You can learn more about this ioctl on Linux in the ioctl_tty(2) man page, and you can see the various settings that can be changed this way with

stty -a

(It's a little bit confusing which things are handled by the readline library and which things are handled by the kernel, as the kernel's ability to support some rudimentary line-editing features on an interactive terminal long predates libraries like readline. I think readline might actually disable kernel interpretation of some of these control characters when it starts accepting input, and then re-enable them when it stops, but I've never looked into that.)


> I think readline might actually disable kernel interpretation of some of these control characters when it starts accepting input, and then re-enable them when it stops, but I've never looked into that.

That's the canonical mode bit (ICANON). Canonical mode means the kernel/terminal is line-buffered and line-editing is handled by it; non-canonical mode means user input is pushed to the application immediately.
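
A minimal sketch of that toggle, assuming a POSIX system with termios — the kernel's echo and line editing go away while ICANON and ECHO are cleared, and come back when the saved state is restored:

    /* Sketch: switch off canonical mode and echo, read one raw byte,
     * then restore the original settings. */
    #include <stdio.h>
    #include <termios.h>
    #include <unistd.h>

    int main(void) {
        struct termios saved, t;
        tcgetattr(STDIN_FILENO, &saved);
        t = saved;
        t.c_lflag &= ~(ICANON | ECHO);   /* no kernel line editing, no echo */
        t.c_cc[VMIN]  = 1;               /* read() returns after one byte */
        t.c_cc[VTIME] = 0;
        tcsetattr(STDIN_FILENO, TCSANOW, &t);

        char c = 0;
        if (read(STDIN_FILENO, &c, 1) == 1)   /* arrives immediately, unbuffered */
            printf("got 0x%02x\n", (unsigned char)c);

        tcsetattr(STDIN_FILENO, TCSANOW, &saved);
        return 0;
    }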


> For example, if you run

> stty intr '^X'

> then your interrupt character sequence will be Ctrl+X instead of Ctrl+C (!).

I have a feeling that if you actually do this in a real Linux system, a sysadmin will hunt you down and dismember you alive, but I might be wrong.


Nope. As someone who administered a VAX with 50 simultaneous users, it's not a problem. It only affects that process and that user during that session. A logout resets everything. A more interesting problem was trying to read or write to a serial port that was hardwired to use a different baud rate.

You could type "stty 9600 >/dev/tty4; cat file >/dev/tty4" and it wouldn't work, because when stty exited the system would reset the terminal baud rate.

The proper way to do this (assuming you weren't the sysadmin and couldn't modify the default per-terminal baud rate), was to type the following

(stty 9600; cat file)>/dev/tty4


Fun fact: it doesn't have to be Ctrl+something. If you feel particularly evil today, you might try this:

  stty intr y


It can be any single byte, except probably not 0x00.

It cannot be a multi-byte sequence, which means it cannot be just any keyboard key.

On SCO systems intr/break is the Del key, which on a scoansi terminal emits ^? (I don't remember the ASCII value, just that Ctrl-? is another way to produce it). So you break out of programs with the Del key instead of Ctrl-C.

But on a vtxx terminal like the Linux console or xterm, even if you were perverse enough to want to, you cannot assign break/intr to the Del key like that, because on a vtxx terminal the Del key emits a multi-byte escape sequence, not a single byte.

Hot keys like Ctrl-C require multiple fingers, but what's produced is a single byte.

(you can actually modify both the console and xterm to change what a key emits, but then the resulting terminal no longer matches the definition of a linux or xterm terminal)


> Del key, which on a scoansi terminal emits ^?

^? = ASCII DEL[elete] = 0x7F. Terminals where the Delete key sends Delete are doing it right.

Why is DEL 0x7F, when other control codes are <0x20? Because the American Standard Code for Information Interchange descended from teletype codes, and teletypes often used paper tape ‘storage’ where a 1 bit was a hole. So teletype codes would normally have a delete function punch all holes, because that would obliterate any other possible character (and typical of punches, advance to the next position, making DEL semantically a forward delete operation).

> on a vtxx terminal the Del key emits a multi byte escape sequence

Only VTxxx where xxx ≥ 200. The VT100 series and earlier had ASCII Delete and Backspace keys, but in the VT2xx era DEC got some funny ideas and provided only a ⌫ key, which left us an enduring mess.


> It can be any single byte, except probably not 0x00.

At least on Linux, indeed you can't use 0x00 (aka ^@), because _POSIX_VDISABLE (the thing you use for disabling special characters) is 0.
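
And a tiny sketch of using it (assuming Linux/glibc, where _POSIX_VDISABLE comes from <unistd.h>):

    /* Sketch: undefine the interrupt character entirely,
     * the programmatic equivalent of "stty intr undef". */
    #include <termios.h>
    #include <unistd.h>

    int main(void) {
        struct termios t;
        tcgetattr(STDIN_FILENO, &t);
        t.c_cc[VINTR] = _POSIX_VDISABLE;  /* 0 on Linux, so ^@ can't be intr */
        tcsetattr(STDIN_FILENO, TCSANOW, &t);
        return 0;
    }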


^? = 0x7F


Is it fun enough to be an April 1st joke?


joke aside, it is a per-tty setting, right?


Sure, which is why you put it in /etc/profile so you don't have to remember to run the command all the time!


Yes.


Can confirm.


Anyone interested in the machinations of all of this terminal stuff should look at antirez’ kilo, a terminal text editor in under 1000 lines of code: https://github.com/antirez/kilo

There is a nice tutorial that walks through how one might write it from scratch: https://viewsourcecode.org/snaptoken/kilo/


The timing of HN can be spooky sometimes. Earlier this week I was just wondering if I could switch the Ctrl-C interrupt to a single key for easier keyboard smashing to stop an errant, poorly thought out script. My choice would have been the spacebar, as that is second-nature muscle memory from all of my days in an edit bay, where the spacebar acted as the All Stop and was engraved with the phrase "Awww Shit!", as that was the typical utterance from the editor just before using the key.


stty is useful when an interactive program dies before it has a chance to restore the terminal mode, so you end up with nothing displayed when you type - `stty sane` will fix that up.


> The client sends l and then immediately receives an l sent back. I guess the idea here is that the client is really dumb – it doesn’t know that when I type an l, I want an l to be echoed back to the screen. It has to be told explicitly by the server process to display it.

Of course the client may be really dumb! Here is a 1930s teletype used as a Linux terminal:

https://youtu.be/2XLZ4Z8LpEE?t=656

If you did not do it that way, you would run into all kinds of synchronization problems. How could you be sure that, after rapidly making 50 keystrokes, every keystroke was received by the other end, in that order?


> How could you be sure that, after rapidly making 50 keystrokes, every keystroke was received by the other end, in that order?

Exactly. I implemented a client that emulated a VT100 early in my career, and this is a real problem. There are various strategies you can use, but by far the simplest and safest seemed to be the echo: the client always displays exactly what it receives[1].

There's nothing worse than typing out a command that you realize is wrong and potentially destructive, only to Ctrl+U it and have the client kill the line while the server didn't get the instruction, so when you press enter it runs the evil command. If the command doesn't echo anything you may not even know! I once accidentally added a space in the path I was deleting when (recklessly, with -rf) trying to remove my ~/bin directory, like this:

    rm -rf ~ /bin
Good Lord that was a bad day. Thankfully I still had the installation disc to restore /bin, and a relatively recent backup of my home directory to restore that. I lost a few days of uncommitted code, but that felt like a trifle compared to what it could have been :-)

[1]: I love how mosh[2] handles this to get the best of both worlds. It will smartly show you what you typed, but still underline it until it actually receives the echo from the server, so you can type and feel like there's no delay between bytes, but still be confident that the client state matches the server state.

[2]: https://mosh.org/


More importantly, how do you know whether the keystrokes should be displayed at all? For example, take vi. When it starts up, it sets cc.c_lflag &= ~ECHO (basically like “stty -echo”) so that normal mode commands aren’t printed onscreen. The kernel knows whether ECHO is set, which is why it handles echoing.


This was definitely a problem in the dialup modem days before error correction, where line noise might add extra characters, or modify what you were sending.


Look at the ascii chart by bit pattern and the characters will suddenly make sense. Here’s an old chart from the 60s that explains it:

https://programesecure.com/ascii-values-table-generator-in-c...

Unfortunately if you just see the character codes with the decimal, hex, and octal next to them this logic is obscured. Remember, it had to be implementable in (mechanical!) hardware.


> ...Look at the ascii chart by bit pattern and the characters will suddenly make sense.

Could you elaborate on what the bit pattern reveals?

For example, I understand that Ctrl-C generates 0x03, which stands for ETX (end of text), but what does its bit pattern (011) reveal?


I don't know if the chart helps, but

    #define CTRL(x) ((x) ^ 0100)
    // e.g. 'C' ^ 0100 is 3
    // e.g. '@' ^ 0100 is 0
    // e.g. 'M' ^ 0100 is 13 a.k.a. \r a.k.a. enter
I like this explanation better. It also explains what notation like ^C means. It's shorthand for 0100^C.


Expanding your overloaded operator:

Ctrl-C is shorthand for 0100 xor ‘C’

Also, the 0 prefix means this is a base-8 (octal) number


Ctrl clears the highest two bits. E.g. @ (ascii 0x40) turns into ^@/NUL (ascii 0x00). Similarly C (0x43) -> ^C/ETX (0x03).
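
A quick sketch of that rule for 7-bit ASCII:

    /* Print the control code each character from '@' to 'Z' maps to
     * when the top two bits of its 7-bit code are cleared. */
    #include <stdio.h>

    int main(void) {
        for (int c = '@'; c <= 'Z'; c++)
            printf("^%c = 0x%02x\n", c, c & 0x1f);  /* e.g. C (0x43) -> 0x03 */
        return 0;
    }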

Older terminals (like the VT05) used bit-pairing for Shift as well: they just flipped some bits depending on the upper two bits. Compare column 2 with 3 in the linked graphic, and 4 and 5 with 6 and 7.


Closer to toggling in the case of ^?.


good point!


Just run your eye horizontally to see that toggling only a couple of bits moves you between ^C, C, and c (back when ASCII was formalized, few terminals supported lower case, and six-bit character sets were common).


> Here’s an old chart from the 60s that explains it

eh, the linked article does a very poor job of explaining ASCII

And no wonder, it's a deep subject. If you want answers, then you want to watch this talk: https://m.youtube.com/watch?v=_mZBa3sqTrI


That site didn’t let me link straight to the chart. I’ve seen that chart for years (decades) but that was the only one I could quickly find on the web.


And this, kids, is why it was possible to do fun things to other people's computers when their OS did not tell the modem to stop accepting incoming bytes when the line was dropped and then picked up again. It just accepted bytes from wherever they might come…


I was imagining this would be an overview of the circuit closing upon the key press and the chips that translate it to the signal that traverses to the USB connector and goes through a controller chip etc... From the mechanical closing of the circuit to the illumination at the display.

Maybe I should give that a go in some classic Tracy Kidder style. I certainly can't fill in all those steps; I'd have to do some learning myself.


It's actually a useful interview question in some positions, like embedded.


I've gotten it before and I had no idea <how> to answer.

This is characteristic of one of the common interview traps I fall in. I don't know who my audience is or what kind of answer they're looking for.

I can answer the software world version of that all day along with internationalization and the history. But hardware? No not really


A slightly different exploration:

https://youtu.be/XUdxXON27xA


I think the canonical resource on this is https://www.linusakesson.net/programming/tty/


Julia Evans is such a treasure. I’ve learned a tonne reading her stuff.


This tells what happens when the virtual terminal receives some input. You can pass that input from a file and it would do the same.

There's also a whole lot that happens when you press a key to generate said input (and I don't even mean at the hardware level), which is perhaps much less known.


And things turn really "fun" when you begin using modifiers which aren't supported by the USB keyboard specs, like the Hyper key. I've got my Linux / X configured to use Super and Hyper keys, but things quickly turn weird. For example, the Hyper key has worked totally fine from Emacs in GUI mode for years and years (and over the years I've assigned a huge number of shortcuts to it), but I have never spent the time to make it work in the terminal. It's doable but requires some arcane magic. I'm not even talking about having the Hyper key work from a tty (like if I boot in non-graphical mode) but simply having it work from, say, an xterm under X. Oh the fun.

So until I fix that, no "emacs -nw" (emacs in terminal mode) for me, as I rely way too much on my Hyper key.


You don't need to do anything fun. Just convince emacs developers to support a modern terminal keyboard protocol: https://sw.kovidgoyal.net/kitty/keyboard-protocol/ and you get support for Hyper out of the box.


It's more than just the Hyper key (as I'm sure you're aware). The article mentions how C-S-anything is the same as C-anything. I've always wanted to set up a fully functional terminal Emacs (for playing through ssh on a tethered connection in a café) but it seems to be a fairly in-depth process to have an interface with all the things. I use QMK extensively in tandem with my Emacs configuration, and there is a ton of functionality to transpose to the terminal. Ultimately I think I would have to completely redo my configuration in both QMK/KMonad and Emacs to stick to the codes that are sent correctly.

Sending "F18 a" with QMK or KMonad (a random prefix-combo I picked for example purposes) instead of "H-whatever" (for whatever keystroke combo you have H-whatever bound) would work with a terminal in your case, but you'd have to change all those bindings and setup QMK/KMonad accordingly. That's altogether too much work.


Another fun modifier is the Office key, which actually sends Ctrl+Alt+Shift+Win, so if you can manage to hold down all the keys, Ctrl+Alt+Shift+Win+L will open LinkedIn on Windows laptops.


This is why I can't pass simple interviews. Whenever they ask this question I start talking about how the keyboard works, or, once, how my Teletype Model 33 works.


The point of echoing back what was received is not that the client is "really dumb"; it's actually good engineering practice. It directly confirms to the user what the machine believes it has received.

If this is incorrect (whether caused by line noise or whatever), then the user knows this immediately. Unix has a lot of 2-character commands - you wouldn't want an innocent command accidentally mistranslated to "rm" and then press <Return>.


It also means the server can turn off echo for sensitive things like passwords.


And, more importantly, for programs like vi and Emacs that want to handle rendering themselves.


Yes, state is only modified on the server, not in the dumb clients. Weakening this fundamental idea is what makes it hard to write modern single page applications.


2004h -> “Turn on bracketed paste mode”

2004l -> “Turn off bracketed paste mode”

https://en.m.wikipedia.org/wiki/ANSI_escape_code#CSIsection
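
A sketch of how an application flips these from the program side (the sequences are the same ones that show up in the recv traces quoted elsewhere in the thread):

    /* Enable bracketed paste, then disable it again. While it is on,
     * text pasted into the terminal arrives wrapped in ESC[200~ ... ESC[201~
     * so the application can tell a paste apart from typed input. */
    #include <stdio.h>

    int main(void) {
        printf("\x1b[?2004h");   /* CSI ? 2004 h: bracketed paste on */
        fflush(stdout);
        /* ... read input here; pasted text arrives bracketed ... */
        printf("\x1b[?2004l");   /* CSI ? 2004 l: bracketed paste off */
        return 0;
    }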


Same mechanism that we could use to disambiguate keys like ^H and backspace.

http://www.leonerd.org.uk/hacks/fixterms/



Not to mention that it was NIH after xterm already had such a mode.


Fascinating, thank you for informing me.


It's a valuable exercise to write a web shell service with the Go standard library (rather than goterm). It shouldn't take more than 100-200 LOC and you will learn a lot about how the SSH protocol, streaming IO, and websockets work.


There should be a body of these "koans", little program specs to implement to explore various technologies.


Ah, it's also quite a distraction if your goal was to see what gets sent when pressing keys.


On the contrary, you don't have to do much more work than the author already did, and you get deep practical insights into how this very useful technology works, beyond the marginally useful trivia of how terminals happen to internally represent keystrokes.


So I didn't know that when I press backspace in the terminal, "x08" is being sent, not "^H".

What is the purpose of caret notation then? Is it just for human readability? e.g. my terminal shows ^H sometimes when I press backspace


> So I didn't know that when I press backspace in the terminal, "x08" is being sent, not "^H".

"0x08" is "^H". H is the eighth letter of the alphabet. There is no difference between those except for the notation. In the ASCII character set, the first 32 characters are called control characters. This is why many keyboards have ^ on their control keys. The 26 control codes 1 through 0x1A correspond to ^A through ^Z.

PS This relates to the 7-bit ASCII character set, American Standard Code for Information Interchange, composed of control characters for communication control, letters, numbers, and punctuation.


Pretty sure it's for the human, yes; how else would you represent "hey human, something just echoed the thing you get from ctrl-h"? (Keeping in mind that this almost certainly predates unicode and fancy fonts, and probably colors in the terminal. I'm open to the idea that we could do better today, but it's hard to overcome 50 years of tradition.)


Usually that's rendered as one control unit, not as caret + H. The caret indicates a control character. You can typically type control characters literally via Ctrl-V (e.g. Ctrl-V + Tab for a literal tab character instead of tab completion).


There's a command for doing the diagnosis goterm was used for here: it's called script(1).


The picture of the VT100 brings back memories. I had one at home and would connect to our Vax 11/780 to program in FORTRAN using the EDT editor... using a 1200 baud modem.

(about 1981 or so)


I shared a flat in London with two other geeks in the 90s and we wired up vt100 terminals in each of our rooms so we could talk to each other without leaving our rooms.

They didn't last long, they were quickly replaced by PCs on a lan so we could play Doom.


I wish I had kept my amber terminal. Great for working late at night, very focused. Having a modern display projecting amber colours just isn’t the same.


It is amazing how the understanding of what happens in our computers ranges from the air we breathe to a complete mystery.

Although if you had asked one of my 900 coworkers in 1981 who wrote flight simulation software in Fortran how the VT100 terminals interacted with the computers they were connected to, only a few dozen would have known.

One of my projects at the time was to create a library to support the creation of 'screen oriented' applications without knowing the escape sequences.


Has anyone attempted to connect a serial terminal over Bluetooth? At some point I thought that could be a neat idea because it would be "internet secure". I don't remember the circumstances exactly, but I never got it to work fully; there was some echoing of characters that I never figured out how to configure away.


As a kid my mom would take my sister and me to the big Pasadena Public Library, which had these DEC terminals all over the place for using the card catalog to find books. I was fascinated by them and would often just sit and play hacker. Might even be a big reason why I am a programmer today.


I don’t know Julia Evans, certainly not her academic background.

But she blogs eloquently from the standpoint of someone learning things from first principles, in an accessible, actionable way. As a self-taught person this is near and dear to me. She’s like the 3Blue1Brown of systems programming, and there’s not much higher praise.

Striking the balance between rigor and practical applicability is tough, especially while assuming little prior knowledge.

Keep up the good work @jvns!


She was a Staff Engineer at Stripe. She wrote a great primer on SQL too.

https://jvns.ca/ is a treasure trove


Stripe is an interesting place. It’s clearly a great business, but they’ve let that bleed over a bit into thinking their p95 people are Google p95, which is trivially silly.

It’s an awesome company, and they’ve got solid folks, but “Staff” at Stripe isn’t what makes Julia cool: Julia’s work with Recurse is way cooler.

I mean we’ve got Consul: https://stripe.com/blog/service-discovery-at-stripe

And then etcd: https://stripe.com/blog/operating-kubernetes

Both courtesy of Julia incidentally.

Chubby has worked since like 2003. We’re just talking a different level of ball game.


This also partly explains why a lot of keystrokes in nano do the same thing and cannot be bound separately (see attached file keystrokes.nanorc in https://savannah.gnu.org/bugs/index.php?61699 )


Next step is to explain how Ctrl+C works over ssh; it is my favorite systems engineering interview question.


The ssh client on your local end receives SIGINT and processes it by sending a special kind of packet over the ssh session to the other side; the sshd on the remote side receives this special packet and processes it by sending SIGINT to whatever command it has originally spawned.

IIRC telnet instead uses urgent TCP packets to indicate SIGINT.


My guess: The ssh client on your local end receives a 0x03 byte directly from the terminal, because it has disabled ICANON and ISIG and whatnot, and forwards it to the remote connection. The remote sshd then feeds the 0x03 byte to the pseudoterminal it has setup, and then the (remote) kernel may or may not interpret that as SIGINT. For example, ISIG could be disabled (“stty -isig”) or it might be a different key (e.g. “stty intr '^X'”, as mentioned elsewhere in this thread).

Your setup fails to distinguish keyboard interrupts (intended for the remote machine) and real SIGINTs generated by kill(1). It also uses the local termios(4) settings instead of the remote ones.


You may actually be right; I know that the SSH channel protocol has a special message kind specifically for sending signals to the remote process, which is different from channel messages with normal data, but I don't know if it's used by actual ssh client implementations. They may simply put the local tty in raw mode and forward all input from it as normal data.

Hard to say without looking at the actual code, and I am not particularly in the mood for reading C sources at the moment. Maybe someone else is and will tell us the true story!

P.S. Now that I think of it, ssh implementations have to "sync" local and remote tty parameters or at least make it look sane for the user: if you resize your local xterm, arguably the remote e.g. vi should get notified, but what if it's the remote process that changes the terminal dimensions, should your local xterm get resized as a result?


> P.S. Now that I think of it, ssh implementations have to "sync" local and remote tty parameters or at least make it look sane for the user

Hadn’t thought about this before, but I think only window size needs to be synced (maybe baud rate and parity? I really have no idea how those would work)

> if you resize your local xterm, arguably the remote e.g. vi should get notified, but what if it's the remote process that changes the terminal dimensions, should your local xterm get resized as a result?

Huh, TIL processes other than the pty master (the terminal emulator usually) can change window size. Glad I checked my sources before writing off a comment …


Other fun with control characters: The article caught that Alt is ESC, but didn't catch that Ctrl+[ is too. Check it out in a program that uses control codes, like Emacs or Vi. I myself find it somewhat more ergonomic depending on the action.


> remote terminals are very old technology

TTYs or teletypewriters have been in use since the 19th century. I'd love to see a blog post that talks more about the early history.


She wrote that “everyone had one” but really it meant “everyone online at a given time” was using one. In a lot of facilities they were shared.


Is the "ctrl + E" shortcut to jump to end of line in a terminal interpreted/executed by the terminal or bash?


Note that, although bash handles this case, the terminal never interprets ^C or backspace or ^U or anything like that. That’s all in the tty driver — see OpenBSD’s termios(4) for information on how to configure it.

In general, “smart” programs like bash or vi or fzf or readline configure the termios state so that the tty driver doesn’t handle any keys. This gives them more control. When they exit, they restore the termios to the original state, so that you can still backspace in dumb programs like cat and grep.

So you might have a dance like this when you run vi (a rough sketch in C follows the list):

- bash restores its saved termios, so that whatever program it’s running starts with a blank slate

- vi saves the original termios

- vi switches the termios into “raw mode” (simplification)

- you edit text …

- vi switches back to the state it saved

- vi exits

- bash saves the termios state

- bash switches to raw mode
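
A rough C sketch of the save/raw/restore part of that dance (cfmakeraw is a BSD/glibc extension, not strict POSIX; a portable version would clear ICANON, ECHO, ISIG, etc. by hand):

    /* Save the original termios, switch to raw mode, restore it on exit --
     * roughly what a full-screen program like vi does. */
    #include <termios.h>
    #include <unistd.h>

    static struct termios orig;

    void enter_raw_mode(void) {
        struct termios raw;
        tcgetattr(STDIN_FILENO, &orig);   /* save the original state */
        raw = orig;
        cfmakeraw(&raw);                  /* no echo, no line editing, no ISIG */
        tcsetattr(STDIN_FILENO, TCSAFLUSH, &raw);
    }

    void leave_raw_mode(void) {
        tcsetattr(STDIN_FILENO, TCSAFLUSH, &orig);  /* hand the shell a sane tty */
    }

    int main(void) {
        enter_raw_mode();
        /* ... a real editor would read keys and draw the screen here ... */
        leave_raw_mode();
        return 0;
    }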


Bash, via the GNU Readline library.


So following the examples in the article, in between the sent and recv, it's actually bash interpreting what the sent command is, not the terminal?

For example, the ls command example from the article:

sent: "l" recv: "l" sent: "s" recv: "s" sent: "\r" recv: "\r\n\x1b[?2004l\r" recv: "file\r\n" recv: "\x1b[?2004hbork@kiwi:/play$ "

So the interpretation and processing of "\r" that produces the final output is actually done by bash, not the terminal?


"\r", "carriage return", is what the return/enter key sends (either that or "\n", it's configurable).

So what's being sent from the terminal to bash here is "ls" (which is echoed back) and then the return/enter key, which bash interprets as "run the command".

So it sends "\r\n" to the terminal (this is "recv" in that notation), which moves the cursor to the beginning of the line and then to a new line to get the cursor off of the prompt line, and then "\x1b[?2004l", which is the sequence to turn off bracketed paste.

Then ls runs and prints "file\r\n", which is the filename "file" on its own line.

Then bash takes over again, reenables bracketed paste and prints the prompt. Notably it does not move the cursor to get the prompt on its own line, so when the command didn't end in a newline the prompt hangs in a weird spot - try `printf '%s' foobar`, it'll show your prompt like "foobarbork@kiwi:/play$". There are tricks to get around this.


I think bash actually sends just "\n"; the LF-to-CRLF translation is handled in the tty driver (it used to be part of the kernel, but no longer. Funny how Linux still has to translate text to use the so-called "Microsoft line endings" when it comes to terminals).


This is correct, but they're only "Microsoft" line endings (CR+LF) when you're encoding a text file. When output to a terminal, they're literal instructions:

CR - carriage return - escaped as \r - move the carriage to the beginning of the line (the "carriage" is the print head of a line printer, think an old dot-matrix or a typewriter)

LF - line feed - escaped as \n - advance the paper one line.

Since all on-screen terminals are "virtual", these are translated to cursor movements. But their origin is in paper output.

    If you've ever
                  seen text that
                                gets printed like this
...it's because the \n LF line separators in the output aren't being translated to terminal instructions, just dumped raw.

MS decided that both should be kept in text files; Unix-ish dropped the carriage return to use \n; MacOS before OSX used only \r.


There are also (text-based) network protocols; almost all of them have used CRLF as line breaks since time immemorial, because "text is something that can be sent straight to the teletype and should display all right". UNIX decided to break with this tradition; others, like DEC, CP/M, and then Microsoft, decided not to, which is why I put "Microsoft line endings" in quotes: reasonably, they are just "line endings", always have been, and then there is the "UNIX line-ending convention".


Thank you for your explanation.

After we press the return/enter key to tell bash to "run the command", is bash doing everything from here on (ls is not part of bash, right?), including switching off bracketed paste and re-enabling it?


Bash also turns off the bracketed paste, because it can't know if the command it is about to launch supports it. So that command would have to re-enable it itself. Something like emacs or vim might do so (or another bash, you can nest shells).

And yes, then bash starts ls, which is an external program. It might be /usr/bin/ls.

And then ls quits, and bash re-enables bracketed paste, because the command might not have enabled it, or might have enabled it and then disabled it before quitting. So you get this weird bracketed-paste sandwich.


For graphical interfaces we went from X11 to Wayland. Is anybody working on a replacement technology for text interfaces?


Why? I don't think there is a clear and compelling case for doing this given the ecosystem issues. Meanwhile, there's slow but interesting innovation happening in this space e.g. around kitty.



I might be really tempted to say "my terminal doesn't have any keys" if this came up in an interview.


Nothing about scancodes and mapping the physical location of the key to the character depending on the keyboard layout (QWERTY/AZERTY/etc)?

* https://en.wikipedia.org/wiki/Scancode

* https://en.wikipedia.org/wiki/Keyboard_layout


This is what happens on a classical UNIX terminal, other platforms have different workflows.


e.g. VMS would not echo back keystrokes until it was truly ready to accept input

No typeahead while a previous command is running, like you can do in UNIX et al


And many mainframe (IBM et al) terminals worked more like HTML forms — typing modified only the local screen, with a key to send the entire current state to the host.


Love this! The format for some reason made me flash back to using Telnet for IRC. lol


I really enjoyed reading this.

