I eventually learned the subject properly through High Performance Browser Networking. This is the one book I would recommend to any software developer. Available for free here - https://hpbn.co
I've easily had a dozen times where I was able to resolve a complex problem or figure out how to do the thing I wanted to do through examination of this flowchart.
I just wish someone would come up with an associated easy-to-digest guide-for-dummies for a few different real world scenarios.
The most practical thing I had to do with networking came up in a job interview where I paired with someone to work on some low-layer stuff.
Not sure that starting with HTTP requests is a good idea. You need to start at layer 1 and work up.
(BTW, David Clark, one of the inventors of IP, told me over beers at a long ago Interop that the only reason there are 7 layers is because ISO arbitrarily set up 7 subcommittees to study the problem, and when they couldn't agree on where to draw the lines between the layers when they reconvened, they just made the diplomatic compromise of sticking with the subcommittee boundaries. That's why the OSI model so poorly reflects real world protocol implementations. AFAIK, other than the OSI protocols themselves (MAP/TOP, etc.), which unwisely used the model as an implementation guide, the only protocol to cleanly map to the OSI model is X.25. Does anyone actually use any OSI protocols anymore?)
OK, here's my conspiracy theory:
At a certain point, the OSI people knew they were losing. However, they were the people in charge of the big institutions, so they made a push to ensure that history books would be re-written such that, well, of course we all use the OSI Model, of course the Internet people implemented OSI, there was never a debate, don't be silly! Eastasia and Oceania have always been at peace, and that's why, here in Airstrip One, you need seven layers to describe how anything which moves over a network works!
... and the fact the OSI Model was attached to actual protocols is silently forgotten, as is the fact the ARPANet/Internet people were mildly opposed to "layers" as a conceptual model and never took the OSI Seven-Layer Model as a design document!
I think the idea is that we could examine Wireshark frames for the relevant layers.
I'll definitely have to check out the rest of their 'compendium': https://www.destroyallsoftware.com/compendium
I've served the technical writing role a few times.
If it (your product) is hard to describe, you probably did it wrong. At the very least, keep trying until things make sense. Better mental models, metaphors, workflows, whatever.
I was once asked (by a school principal, former writing teacher) why software developers are such terrible writers. I replied that all the good software developers I know are also good writers. That if you can write an essay, you can also code. The problem is that most people are terrible writers, programmers included.
I will admit that writing is harder than programming. Because people are far more interesting, complicated, nuanced than computers.
Reading someone else's code is the closest thing we have to mind reading. More so than prose. IMHO.
Miscommunication and ambiguity are the norm. We all just have to accept that and keep trying.
This was revolutionary back in the days when most protocol specs were proprietary and designed to protect the priesthood (usually of a particular vendor) rather than facilitate interoperability. It's hard to imagine now, but one of the big reasons TCP/IP won was that it actually encouraged interoperability and interworkability. (Jan Stefferud drew a distinction between those two terms in this context...)
The corporate pricing could include a commitment to update some percentage of docs within 12 months of a relevant RFC being published or whatever.
Keeps purchasing happy, is a way for folks in the enterprise to get their employer to fund some resource and still gets it to mostly remain a labor of love.
I attend a conference that works that way. Essentially there are two conferences at the same time, with identical badges (no conference name on the badge itself), one of which costs about $800 and one of which costs about $3000 IIRC, if you buy all the special upgrades. Any company that pays for the "corporate" conference gets listed as a sponsor as well :-).
His explanations focus on the most important bits (which he has skillfully prioritized based on his expert knowledge) and avoid highly domain specific nomenclature i.e. he gives a thorough-yet-concise explanation in a way that a reasonably intelligent lay person can understand.
Does that make sense?
It broke programming down to its most fundamental and important building blocks without all the baggage of machine/operating system/application/dependency/etc specific stuff.
Nowadays, 8b/10b (or more realistically 128b/130b) is critical to enable clock recovery by making sure the signal transitions frequently enough.
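To make the clock-recovery point concrete, here's an illustrative sketch (not a real 8b/10b encoder) comparing raw data, which can go arbitrarily long without a bit transition, against an encoded symbol, where 8b/10b bounds runs of identical bits to 5:

```python
# Illustrative sketch: why line coding matters for clock recovery.
# 8b/10b guarantees at most 5 identical bits in a row, so the receiver's
# clock-recovery circuit always sees a transition soon.

def max_run_length(bits: str) -> int:
    """Longest run of identical consecutive bits."""
    longest = run = 1
    for prev, cur in zip(bits, bits[1:]):
        run = run + 1 if cur == prev else 1
        longest = max(longest, run)
    return longest

raw = "11111111" * 4       # raw data can be all ones: no transitions at all
encoded = "0011111010"     # a real 8b/10b code group (the K28.5 comma symbol)

print(max_run_length(raw))      # 32 -- receiver clock would drift
print(max_run_length(encoded))  # 5  -- bounded runs keep the clock locked
```

The real encoding also tracks running disparity (balancing ones and zeros), which this sketch ignores.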
> Computers can't count past 1
Reminds me of this excellent quote: "Every idiot can count to one" - Bob Widlar
This line of thinking makes clearer what signal reflections are and why they are a problem, and what the role of eye diagrams is.
Back when most internet packets were single characters representing keystrokes over something like Telnet, an algorithm was invented to wait a moment before sending ACKs in order to possibly piggyback them on the next data packet.
This ended up interplaying with TCP slow start and adaptive congestion control for years and was only resolved in the early 2000s, if I recall correctly.
The inventor of that algorithm posts here frequently if he wants to comment on my post about this again. :)
And this is why engineering is interesting. Your design choices can have lots of unintended side effects.
Adjust to what? Ethernet is not synchronous; it's asynchronous. There is no shared clock.
The interframe gap goes back to the days of CSMA/CD. The interframe gap was the period during which end stations would contend for the shared medium. Without this an end station could continuously stream and monopolize the network.
It took 6 Ethernet frames to send a single 8K NFS block, and regaining the carrier could be pretty time-consuming if there was other traffic. So SGI implemented a really elegant cheat that technically violated the Ethernet standards, but dramatically improved performance for the entire network, and especially their NFS servers: They simply never let anyone else get a chance to talk until they'd sent the entire block, "hogging" the carrier by going straight from the last bit of the previous frame to the preamble of the next one. It was a very clever way of more-or-less getting jumbo frames years before they became part of the standard!
I believe 802.11n reduced the interframe gap down to two microseconds for transmissions to the same receiver.
> As an example, Cisco ASR 9922 routers have a maximum capacity of 160 terabits per second. Assuming full 1,500 byte packets (12,000 bits), that's 13,333,333,333 packets per second in a single 19 inch rack!
The ASR 9922 is 20 slots of 3.2Tbps per slot. That's a 64Tbps chassis, so that's 5,333,333,333 packets per second in a single 19 inch rack. Cisco's 160Tbps number is their hypothetical multi-chassis setup. Which is fun to market but nonsensical to build/purchase.
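For anyone who wants to check the arithmetic, here's the per-chassis calculation spelled out (same assumptions as above: full 1,500-byte packets, 20 slots at 3.2 Tbps each):

```python
# Sanity-checking the packets-per-second math for a 64 Tbps chassis,
# assuming full 1,500-byte (12,000-bit) packets.

slots = 20
per_slot_bps = 3.2e12        # 3.2 Tbps per slot
frame_bits = 1500 * 8        # 12,000 bits per full-size packet

chassis_bps = slots * per_slot_bps   # 64 Tbps total
pps = chassis_bps / frame_bits

print(f"{pps:,.0f} packets/sec")  # 5,333,333,333 packets/sec
```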
Now, why not a NACK rather than relying on duplicate ACKs? First, the NACK can get lost, so you'd probably want to trigger fast retransmit off of duplicate ACKs anyway for efficiency. But when should the receiver send a NACK? Given that packets can be reordered, likely you want to wait for a few subsequent packets to arrive before you send the NACK. In that time, you've been acking the arriving packets. So your NACK will arrive just after the sender has concluded the packet was lost from the arriving duplicate ACKs. At this point the NACK is serving no purpose.
Even if you wanted to change TCP and assume no packets were reordered, so send a NACK as soon as the packet after the missing one arrives, you'd still be ACKing the arriving packet (which due to TCP's cumulative ACK appears to the sender as a duplicate ACK). The sender could just as well retransmit after one duplicate ACK and get the same behaviour. In the end, sending a NACK doesn't really add anything.
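Here's a toy model of the argument above (an illustrative sketch, not real TCP): the receiver cumulatively ACKs the next segment it needs, so every out-of-order arrival repeats the same ACK, and the sender treats three duplicates as a loss signal (fast retransmit), with no NACK required:

```python
# Toy sketch of why duplicate ACKs stand in for a NACK.

DUP_ACK_THRESHOLD = 3  # standard fast-retransmit trigger

def receiver_acks(arrivals, expected=0):
    """Cumulative ACKs: always acknowledge the next segment still needed."""
    got = set()
    acks = []
    for seg in arrivals:
        got.add(seg)
        while expected in got:
            expected += 1
        acks.append(expected)   # ACK = next expected segment number
    return acks

def sender_detects_loss(acks):
    """Return the segment to fast-retransmit, or None."""
    dups = {}
    for ack in acks:
        dups[ack] = dups.get(ack, 0) + 1
        if dups[ack] > DUP_ACK_THRESHOLD:   # original ACK + 3 duplicates
            return ack
    return None

# Segment 2 is lost; 3, 4, 5, 6 arrive and each repeats the ACK for 2.
acks = receiver_acks([0, 1, 3, 4, 5, 6])
print(acks)                       # [1, 2, 2, 2, 2, 2]
print(sender_detects_loss(acks))  # 2 -- retransmit segment 2, no NACK needed
```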
I thought about an "I lost a packet" message, but if it got lost in transmission, the sender would never know to resend the packet. Instead, the sender had the responsibility of keeping track of which packets it received an ACK for. If it didn't receive the ACK within a certain interval, it would resend the packet and wait for another ACK.
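That sender-side bookkeeping can be sketched in a few lines (illustrative stop-and-wait style, not real TCP): track every unacknowledged packet and resend anything whose ACK hasn't arrived within the timeout, so even a lost ACK can't stall the transfer:

```python
# Minimal sketch of sender-side reliability: keep unACKed packets and
# retransmit them on timeout. A lost ACK just causes a harmless resend.

import time

class ReliableSender:
    def __init__(self, timeout=0.2):
        self.timeout = timeout
        self.unacked = {}   # seq -> (packet, time last sent)

    def send(self, seq, packet, transmit):
        transmit(seq, packet)
        self.unacked[seq] = (packet, time.monotonic())

    def on_ack(self, seq):
        self.unacked.pop(seq, None)   # ACK received: stop tracking it

    def retransmit_due(self, transmit):
        """Resend every packet whose ACK hasn't arrived within the timeout."""
        now = time.monotonic()
        for seq, (packet, sent_at) in list(self.unacked.items()):
            if now - sent_at >= self.timeout:
                transmit(seq, packet)
                self.unacked[seq] = (packet, now)

wire = []
s = ReliableSender(timeout=0.0)  # zero timeout so the demo retransmits at once
s.send(1, b"hello", lambda q, p: wire.append((q, p)))
s.retransmit_due(lambda q, p: wire.append((q, p)))  # no ACK yet -> resent
s.on_ack(1)
s.retransmit_due(lambda q, p: wire.append((q, p)))  # ACKed -> nothing resent
print(wire)  # [(1, b'hello'), (1, b'hello')]
```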
If the receiver is missing a packet, it can just XOR the last 10 packets plus the correction packet to get the missing packet.
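A quick sketch of that XOR trick (hedged: this is the bare idea, assuming equal-length packets and at most one loss per group): the correction packet is the XOR of the 10 data packets, so XOR-ing the 9 survivors with it reproduces the missing one:

```python
# XOR parity recovery: one correction packet rebuilds any single lost
# packet out of a group, with no retransmission.

def xor_packets(packets):
    """Bytewise XOR of equal-length packets."""
    out = bytearray(len(packets[0]))
    for p in packets:
        for i, b in enumerate(p):
            out[i] ^= b
    return bytes(out)

data = [bytes([i] * 4) for i in range(10)]   # 10 equal-length data packets
parity = xor_packets(data)                   # the correction packet

# Packet 3 is lost in transit; XOR the 9 survivors with the parity packet.
received = data[:3] + data[4:]
recovered = xor_packets(received + [parity])

print(recovered == data[3])  # True -- single loss recovered
```

This only works for one loss per group; lose two packets out of the ten and the XOR can't disentangle them, which is why real FEC schemes use stronger codes.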