Network protocols for anyone who knows a programming language (destroyallsoftware.com)
635 points by kalimatas on Jan 23, 2019 | 71 comments



I took a networking course in college and I didn't learn much, if anything. We used textbooks like Kurose and Ross that went deep into details like the header format of each packet of each layer. Ultimately, these were useless details that had no place in a textbook. It made me hate the subject.

I eventually learned the subject properly through High Performance Browser Networking. This is the one book I would recommend to any software developer. Available for free here - https://hpbn.co


I'd largely echo your sentiment, but luckily our professor had a hands-on attitude. This involved actually coding things that used those concepts. We had to implement a non-recursive DNS resolver and an FTP server. Reading the RFCs and churning out something that (mostly) worked was real fun.


Same here, we did examine headers and learn about the layers of networking, but we were also given the link to https://www.ietf.org/rfc/rfc1035.txt and told to implement a DNS resolver in C. We also did a peer-to-peer client. You learn a lot more by implementing it.
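
For anyone curious what the starting point of that exercise looks like, here's a rough sketch of the fixed 12-byte header from that RFC (section 4.1.1). Field names follow the RFC; the helper function is just illustrative, not from any real resolver:

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>
    #include <arpa/inet.h>  /* htons */

    /* The fixed 12-byte DNS header, RFC 1035 section 4.1.1. */
    struct dns_header {
        uint16_t id;      /* query ID, echoed back in the response */
        uint16_t flags;   /* QR, OPCODE, AA, TC, RD, RA, Z, RCODE bit fields */
        uint16_t qdcount; /* entries in the question section */
        uint16_t ancount; /* resource records in the answer section */
        uint16_t nscount; /* authority records */
        uint16_t arcount; /* additional records */
    };

    /* Serialize a query header into a wire-format (big-endian) buffer. */
    size_t write_query_header(uint8_t *buf, uint16_t id)
    {
        struct dns_header h = {
            .id      = htons(id),
            .flags   = htons(0x0100), /* RD=1; a non-recursive resolver sets 0 */
            .qdcount = htons(1),      /* one question follows */
        };
        memcpy(buf, &h, sizeof h);
        return sizeof h;
    }

The question section (the encoded name, QTYPE, QCLASS) follows right after; getting that label encoding right is most of the fun.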


I did the exact same projects in my 3rd-year networking class


That book is one of the most useful engineering books I have ever read. Top quality theory with practical application.


Interesting, I think that textbook is one of the best I’ve read on networking. It gave me the foundational knowledge on approaching networks.


Can anyone recommend an excellent resource on how iptables works? One with diagrams and real-world examples?


I cannot recommend this specific diagram highly enough: https://upload.wikimedia.org/wikipedia/commons/3/37/Netfilte...

Easily a dozen times I've been able to resolve a complex problem, or figure out how to do the thing I wanted, by examining this flowchart.


Yup... agreed.

I just wish someone would come up with an associated easy-to-digest guide-for-dummies for a few different real world scenarios.


This is what I used to learn about iptables: http://homes.di.unimi.it/sisop/qemu/iptables-tutorial.pdf


That's sad to hear. I agree that colleges can be too textbook-centric. I'd read a chapter and get nowhere. Did your college have a lab or projects? Mine had us build a rudimentary FTP server, a telnet server, a toy instant messenger client, etc. I don't think I could have learned networking from the textbook alone. Also, playing around with network sniffers, network explorers, etc. can help.


Same with my college networking class. The professor talked a lot but we didn't do much besides look at HTTP requests in Wireshark.

The most practical thing I had to do with networking came up in a job interview, where I paired with someone to work on some low-layer stuff.


They didn't have you setting up networks with real hardware in a lab?

Not sure that starting with HTTP requests is a good idea; you need to start at layer 1 and work up.


This is pretty frequently debated. You can find textbooks that start at L1 and work up the stack, and others that start at L7 and go down.


Starting at Layer 7 is a fundamental mistake, because it perpetuates the execrable and obsolete OSI 7-layer model. Since IP protocols DO NOT map cleanly to the OSI model, trying to shoehorn IP into it causes more misunderstanding than anything else I've seen in networking. Seriously - IGNORE the 7-layer model if you want to understand real-world protocols and their design, especially anything IP-flavored.

(BTW, David Clark, one of the inventors of IP, told me over beers at a long ago Interop that the only reason there are 7 layers is because ISO arbitrarily set up 7 subcommittees to study the problem, and when they couldn't agree on where to draw the lines between the layers when they reconvened, they just made the diplomatic compromise of sticking with the subcommittee boundaries. That's why the OSI model so poorly reflects real world protocol implementations. AFAIK, other than the OSI protocols themselves (MAP/TOP, etc.), which unwisely used the model as an implementation guide, the only protocol to cleanly map to the OSI model is X.25. Does anyone actually use any OSI protocols anymore?)


> Does anyone actually use any OSI protocols anymore?

OK, here's my conspiracy theory:

At a certain point, the OSI people knew they were losing. However, they were the people in charge of the big institutions, so they made a push to ensure that history books would be re-written such that, well, of course we all use the OSI Model, of course the Internet people implemented OSI, there was never a debate, don't be silly! Eastasia and Oceania have always been at peace, and that's why, here in Airstrip One, you need seven layers to describe how anything which moves over a network works!

... and the fact the OSI Model was attached to actual protocols is silently forgotten, as is the fact the ARPANet/Internet people were mildly opposed to "layers" as a conceptual model and never took the OSI Seven-Layer Model as a design document!


While we learnt from Layer 1, all the setup was in software, so we didn't touch any real hardware.

I think the idea is that we could examine Wireshark frames for the relevant layers.


It seems every intro to networking goes into the headers of each layer's most common protocol.


That's a good thing imho, at least if accompanied by something that gives you a more practical perspective on layers 4 and above as well (which is probably where many curricula fail). Of course it's more interesting to do fun stuff with fancy JSON APIs or create a funky WiFi mesh, and that's even likely to be more relevant to most people's future jobs, since not that many people focus on networking. But it's a foundation that can be interesting and help with many optimizations down the road; it simply is not academia's primary and exclusive focus to prepare students for some job. Just like many other things people take from CS programs that you simply don't learn in a 3-month Python course, understanding of networking protocols is something that _may_ help and give you an advantage for future problem solving. It might even be thought-inspiring enough for you to focus on it in the first place.


Great stuff. Thanks for sharing that link.


With the current state of generally terrible technical writing and my ever decreasing attention span it's bloody refreshing to read a concise and well written explanation of a highly technical subject!

I'll definitely have to check out the rest of their 'compendium': https://www.destroyallsoftware.com/compendium


"current state of generally terrible technical writing"

I've served the technical writing role a few times.

If it (your product) is hard to describe, you probably did it wrong. At the very least, keep trying until things make sense. Better mental models, metaphors, workflows, whatever.

I was once asked (by a school principal, a former writing teacher) why software developers are such terrible writers. I replied that all the good software developers I know are also good writers, and that if you can write an essay, you can also code. The problem is that most people are terrible writers, programmers included.

I will admit that writing is harder than programming, because people are far more interesting, complicated, and nuanced than computers.

Reading someone else's code is the closest thing we have to mind reading. More so than prose. IMHO.

Miscommunication and ambiguity are the norm. We all just have to accept that and keep trying.


Not only that, the very idea of clear, literate explanations is really a core part of the entire Internet ethos, heavily influenced by UNIX. (Read http://theody.net/elements.html - you'll be glad you did...)

This was revolutionary back in the days when most protocol specs were proprietary and designed to protect the priesthood (usually of a particular vendor) rather than facilitate interoperability. It's hard to imagine now, but one of the big reasons TCP/IP won was that it actually encouraged interoperability and interworkability. (Jan Stefferud drew a distinction between those two terms in this context...)


When I was doing my CCNA, the Cisco Press books were well written and went from first principles.


Agree, and love that guy's domain name.


I love it too. Sadly, not too long ago he considered changing it because it makes his materials a hard sell to enterprises.


Some people have a "normal" site with "normal" pricing and an "enterprise" site with different formatting, pricing, etc. (for example, it could have a search box, even if all the searching is just a Google search anyway).

The corporate pricing could include a commitment to update some percentage of docs within 12 months of a relevant RFC being published or whatever.

It keeps purchasing happy, gives folks in the enterprise a way to get their employer to fund the resource, and still lets it mostly remain a labor of love.

I attend a conference that works that way. Essentially there are two conferences at the same time, with identical badges (no conference name on the badge itself), one of which costs about $800 and one of which costs about $3000 IIRC, if you buy all the special upgrades. Any company that pays for the "corporate" conference gets listed as a sponsor as well :-).


There's this service [0] which is designed to set up quick WordPress installs for demos. The free tier uses the domain poopy.life and if you want to get a more professional domain to show off to clients, you pay extra.

[0]: http://poopy.life/


What do you see as the issues in technical writing, beyond it not being concise or well written?


I suppose it boils down to why the Feynman lectures are generally considered a gold standard in technical presentations.

His explanations focus on the most important bits (which he has skillfully prioritized based on his expert knowledge) and avoid highly domain specific nomenclature i.e. he gives a thorough-yet-concise explanation in a way that a reasonably intelligent lay person can understand.

Does that make sense?


Agreed. This is also why I adore the SICP lectures from the 1980s [1] (and the book in general).

It broke programming down to its most fundamental and important building blocks without all the baggage of machine/operating system/application/dependency/etc specific stuff.

[1] https://www.youtube.com/watch?v=2Op3QLzMgSY&list=PL8FE88AA54...


The Feynman lectures are also not useful to learn physics from, so you may want to avoid that as a standard.


That does, thanks for sharing.


One interesting note re: 8b/10b encoding. The motivation in the article is accurate, but not the whole story (never is, in the analog world).

Nowadays, 8b/10b (or more realistically 128b/130b) is critical to enable clock recovery by making sure the signal transitions frequently enough.
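
A toy way to see the problem the encoding solves (this is just measuring run length, not real line coding; 8b/10b bounds runs at 5 bits by construction):

    #include <stdint.h>
    #include <stdio.h>

    /* Longest run of identical bits in a buffer, MSB first. Long runs
     * are what starve a receiver's clock-recovery circuit of edges. */
    static int longest_run(const uint8_t *buf, int len)
    {
        int best = 0, run = 0, prev = -1;
        for (int i = 0; i < len; i++) {
            for (int b = 7; b >= 0; b--) {
                int bit = (buf[i] >> b) & 1;
                run = (bit == prev) ? run + 1 : 1;
                if (run > best) best = run;
                prev = bit;
            }
        }
        return best;
    }

    int main(void)
    {
        uint8_t raw[4] = { 0x00, 0x00, 0x00, 0x00 }; /* 32 identical bits */
        printf("longest run: %d\n", longest_run(raw, 4));
        /* 8b/10b guarantees a run of at most 5 bits; 64b/66b instead
         * scrambles and guarantees a transition in its 2-bit sync header. */
        return 0;
    }

Send those 32 flat bits raw and the receiver's PLL has nothing to lock onto; encoded, it always sees an edge within a few bit times.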

> Computers can't count past 1

Reminds me of this excellent quote: "Every idiot can count to one" - Bob Widlar


Yeah, that section is a little ropey; the real way to think of high-speed networking systems is not that they send levels but that they send edges. Levels can drift around all over the place, and in a fast system the level at one end is not the same as the level at the other.

This line of thinking makes clearer what signal reflections are and why they are a problem, and what the role of eye diagrams is.


The Art of Electronics[1] has a fantastic section on differential signaling and common mode voltage. That book is a treasure even today despite being published initially in 1980.

[1] https://en.wikipedia.org/wiki/The_Art_of_Electronics


Oddly, nothing has changed in the way God made the universe in the intervening years. Some things are timeless.


The slow start mentioned in the transmission control section has an interesting history.

Back when most internet packets were single characters representing keystrokes over something like Telnet, an algorithm was invented to wait a moment before sending ACKs, in order to possibly piggyback them on the next data packet.

This ended up interacting with TCP slow start and adaptive congestion control for years, and was only resolved in the early 2000s, if I recall correctly.

The inventor of that algorithm posts here frequently if he wants to comment on my post about this again. :)
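
For application programmers, the surviving knob from that era is TCP_NODELAY, which disables Nagle-style coalescing of small writes (the sender-side half of the interaction described above). A minimal POSIX sockets sketch:

    #include <netinet/in.h>   /* IPPROTO_TCP */
    #include <netinet/tcp.h>  /* TCP_NODELAY */
    #include <sys/socket.h>   /* setsockopt */

    /* Disable Nagle's algorithm on a connected TCP socket so small
     * writes go out immediately instead of being coalesced. */
    int disable_nagle(int fd)
    {
        int one = 1;
        return setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof one);
    }

Latency-sensitive protocols set this almost reflexively, which is itself a legacy of that decades-long interplay.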



It is awesome that two of the wiki references are the author's comments here on HN.


> HTTP/2 has header compression because of the RAM limitations of networking devices in the late 1970s.

And this is why engineering is interesting. Your design choices can have lots of unintended side effects.


> An interpacket gap of 96 bits (12 bytes) where the line is left idle. Presumably, this is to let the devices rest because they are tired.


Kind of! They allow systems with slightly different clock rates to adjust. A system nominally sending at 10 Gb/s can actually be running slightly faster than the system receiving it, because their timing bases (crystals) oscillate at slightly different frequencies.


>"They allow systems with slightly different clock rates to adjust."

Adjust to what? Ethernet is not synchronous, it's asynchronous. There is no shared clock.

The interframe gap goes back to the days of CSMA/CD. The interframe gap was the period during which end stations would contend for the shared medium. Without this an end station could continuously stream and monopolize the network.


Back when I worked at Chevron in the early 90s, I noticed that new SGI workstations were way faster NFS servers than anything else, even the Suns. It took me a bit in those days (Ethernet network analyzers were over $50K then, so hard to borrow even in a big, rich company) to figure out why:

It took 6 Ethernet frames to send a single 8K NFS block, and regaining the carrier could be pretty time-consuming if there was other traffic. So SGI implemented a really elegant cheat that technically violated the Ethernet standards, but dramatically improved performance for the entire network, and especially their NFS servers: They simply never let anyone else get a chance to talk until they'd sent the entire block, "hogging" the carrier by going straight from the last bit of the previous frame to the preamble of the next one. It was a very clever way of more-or-less getting jumbo frames years before they became part of the standard!


Interesting. I knew someone who worked for an ISP in the '90s that had an odd SGI workstation amongst their standard Sun boxes for hosting customer sites. The SGI box was regarded as faster than the newer Sun boxes. Eventually all customers were migrated off the SGI box except one customer who refused and threatened to sue. I wonder if the reason for the SGI's performance gains was the same as yours.

I believe 802.11n reduced the interframe gap down to two microseconds for transmissions to the same receiver.


No, that was the preamble.


No, that's incorrect. The preamble is used to denote the start of a frame. It's what the receive circuit syncs to, i.e. it derives its clock from those preamble bits. It's also how the receiver distinguishes between a frame and noise.


Is it also to allow other nodes on the network to communicate, especially if they're on the same collision domain?


No. Wired Ethernet hasn't used collision-based protocols in a long time. It's point-to-point now, not shared.


Actually, even though most connections are switched point-to-point these days, any properly functioning implementation (definitely up through 100Mbps) must still implement CSMA/CD. It's required for proper functioning of Ethernet hubs and some bridges!


In Ethernet (at least at higher speeds; I can't speak offhand for <=1G), an average IPG of 12 bytes is used to let the start-of-frame symbol be 8-byte aligned. So, for example, a series of 65-byte frames would be separated by alternating 11- and 13-byte IPGs.


The network stack is one of the most beautiful inventions in the whole tech space to me. All the involved layers nest so neatly, allowing you to swap out any of them as necessary and simply unwrap at the receiving end. After years of witnessing programmers inventing new dependency injection schemes, I still can't switch from MySQL to Postgres by simply swapping the driver. Compare that to transmitting TCP over avian carriers ;)


I love this article, but I just can't help but pick at this nit I have with the "charging the capacitors" part. I'm not intimately familiar with Ethernet specifically, but I am an electrical engineer. Since the voltage on a capacitor is directly related to the charge it has stored (scaled by a factor called its capacitance :-) ), holding the capacitors at any fixed voltage for any length of time does not change how "charged up" they are at all. I would believe that there is a "line balancing" goal to the transmissions, but I'd be willing to bet it is to avoid driving the isolation transformers into saturation, and has nothing to do with low-pass filter capacitors...


My nitpick would be the example Cisco router:

> As an example, Cisco ASR 9922 routers have a maximum capacity of 160 terabits per second. Assuming full 1,500 byte packets (12,000 bits), that's 13,333,333,333 packets per second in a single 19 inch rack!

The ASR 9922 is 20 slots of 3.2 Tbps per slot. That's a 64 Tbps chassis, so that's 5,333,333,333 packets per second in a single 19-inch rack. Cisco's 160 Tbps number is for their hypothetical multi-chassis setup, which is fun to market but nonsensical to build/purchase.


I'd love to see a similar explanation of the most common routing protocols for programmers.


Why doesn't TCP have an "I lost a packet" message? Is it just that no one thought of it until too late? Hacking around the problem by sending multiple ACKs sounds inefficient.


How would the receiver know a packet was lost? If only one packet was sent, it cannot know, so you always need a retransmit timer at the sender as the ultimate backstop for reliability. Any other trigger of retransmissions is an optimization.

Now, why not a NACK rather than relying on duplicate ACKs? First, the NACK can get lost, so you'd probably want to trigger fast retransmit off of duplicate ACKs anyway for efficiency. But when should the receiver send a NACK? Given that packets can be reordered, likely you want to wait for a few subsequent packets to arrive before you send the NACK. In that time, you've been acking the arriving packets. So your NACK will arrive just after the sender has concluded the packet was lost from the arriving duplicate ACKs. At this point the NACK is serving no purpose.

Even if you wanted to change TCP and assume no packets were reordered, so you could send a NACK as soon as the packet after the missing one arrives, you'd still be ACKing the arriving packet (which, due to TCP's cumulative ACK, appears to the sender as a duplicate ACK). The sender could just as well retransmit after one duplicate ACK and get the same behaviour. In the end, sending a NACK doesn't really add anything.
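
To make the duplicate-ACK mechanism concrete, here's a rough sender-side sketch of fast retransmit, using the conventional threshold of three duplicate ACKs. The struct and function names are made up, and real TCP compares sequence numbers modulo 2^32:

    #include <stdbool.h>
    #include <stdint.h>

    #define DUPACK_THRESHOLD 3  /* conventional trigger, per RFC 5681 */

    /* Made-up sender state for one connection. */
    struct sender {
        uint32_t snd_una;   /* oldest unacknowledged sequence number */
        int      dupacks;   /* consecutive duplicate ACKs seen */
    };

    /* Process a cumulative ACK; returns true when we should retransmit
     * the segment at snd_una without waiting for the timer. */
    bool on_ack(struct sender *s, uint32_t ack)
    {
        if (ack > s->snd_una) {   /* new data acknowledged */
            s->snd_una = ack;
            s->dupacks = 0;
            return false;
        }
        /* Same cumulative ACK again: a later segment arrived while
         * the one at snd_una is still missing (or was reordered). */
        return ++s->dupacks == DUPACK_THRESHOLD;
    }

Note how a single stray reordering only bumps the counter; it takes three duplicates before the sender concludes the segment is really gone.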


Don't know about TCP. In the 90s, I hand-implemented a data/file transfer protocol over trunking radio - a very data-unfriendly medium with pretty bad loss rates.

I thought about an "I lost a packet" message, but if it got lost in transmission, the sender would never know to resend the packet. Instead, the sender had the responsibility of keeping track of which packets it had received an ACK for. If the ACK didn't arrive within a certain interval, it would resend the packet and wait for another ACK.
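
Roughly, the sender side looked like this (a from-memory sketch with invented names, not the actual code):

    #include <stdbool.h>
    #include <stdint.h>
    #include <time.h>

    #define WINDOW  16  /* packets allowed in flight */
    #define RTO_SEC 2   /* resend interval; tuned for the medium */

    /* Per-packet sender state. */
    struct slot {
        bool    in_flight;  /* sent but not yet ACKed */
        time_t  sent_at;    /* when we last transmitted it */
        uint8_t data[64];
        int     len;
    };

    /* Called periodically: retransmit anything whose ACK is overdue.
     * send_packet() stands in for the actual radio transmit routine. */
    void check_timeouts(struct slot win[WINDOW],
                        void (*send_packet)(const uint8_t *, int))
    {
        time_t now = time(NULL);
        for (int i = 0; i < WINDOW; i++) {
            if (win[i].in_flight && now - win[i].sent_at >= RTO_SEC) {
                send_packet(win[i].data, win[i].len);
                win[i].sent_at = now;  /* restart this packet's timer */
            }
        }
    }

All the reliability lives at the sender; the receiver only ever says "got it", never "lost it".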


Thanks to cumulative windowing, it does have a dropped packet signal: https://en.wikipedia.org/wiki/Transmission_Control_Protocol#...


QUIC uses forward error correction, which is pretty neat. QUIC sends an XOR sum of the last 10 or so packets, just in case one gets dropped.

If the receiver is missing a packet, it can just XOR the packets it did receive with the correction packet to recover the missing one.
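
The recovery step works because XOR is its own inverse. A toy sketch with fixed-size packets (real QUIC's experimental FEC framing was more involved, and the scheme can only recover one loss per group):

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    #define PKT_LEN 8
    #define GROUP   4  /* packets per FEC group; QUIC reportedly used ~10 */

    /* Parity packet: XOR of every packet in the group. */
    void make_parity(const uint8_t pkts[GROUP][PKT_LEN],
                     uint8_t parity[PKT_LEN])
    {
        memset(parity, 0, PKT_LEN);
        for (size_t i = 0; i < GROUP; i++)
            for (size_t j = 0; j < PKT_LEN; j++)
                parity[j] ^= pkts[i][j];
    }

    /* Recover the one missing packet (index `lost`) by XORing the
     * parity with every packet that did arrive. */
    void recover(const uint8_t pkts[GROUP][PKT_LEN], size_t lost,
                 const uint8_t parity[PKT_LEN], uint8_t out[PKT_LEN])
    {
        memcpy(out, parity, PKT_LEN);
        for (size_t i = 0; i < GROUP; i++)
            if (i != lost)
                for (size_t j = 0; j < PKT_LEN; j++)
                    out[j] ^= pkts[i][j];
    }

It's the same trick as RAID 5 parity, applied to packets instead of disks.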


I love the ending... just goes to show that anything you build may be around for years longer than you expect.


I think we'd have HTTP header compression regardless of packet size. Headers usually make up the majority of an HTTP request, so it makes sense to compress them.


Though we actually have 9k jumbo frames, which work quite well for bulk data transfer (over fast links).


That's a romantic way to look at technical debt! :-P


It's not technical debt, it's more like technical ex-girlfriends.


Awesome stuff, thanks for sharing (I didn't know it's subscription-based).


Excellent article! Love everything published by DAS.


Such an interesting read. Thanks!


very well written. thank you!!


What a fantastic article. Thanks!



