I sent an Ethernet packet (github.com/francisrstokes)
426 points by todsacerdoti 28 days ago | 126 comments



> But if I had any kind of point, it would probably be that spending the time to do things like write tools and explore the debugging space is pretty much always worth it.

> Strangely, I've found that there is a not-insignificant number of people who are against this - especially in a professional environment. I think this has something to do with the JIRA-fication of the development process, where all work is divided and sub-divided, and anything that does not directly check a box and produce a deliverable is wasted effort. It's disheartening that there is a whole group of people - developers and managers alike - who do not understand or see the value in exploration as part of the process.

So true: being able to create your own small tools is a superpower, one often at the heart of 10x programmers. It is sadly an art often practiced in the shadows.


I've run into this as well. There's the cog-in-the-wheel developer, and ... they have their place; in many orgs they're the only thing that works.

But if you're not that person (and I like to think I'm not) and you are curious and stop to think about second order effects, future features that are a little obvious, and so on ... it's really painful working with those developers who do not.

I worked with one recently, not a bad person at all, but now I'm cleaning up his project that 100% did exactly what the tickets told him to do. And the project that he worked on that was designed to replace a piece of software that had issues ... has every single issue the old one had for all the same reasons (someone coded it in the easiest possible way + only what the ticket told them to).

So frustrating.

I will say this though: I find the use of the term "10x developer" a sort of buzzword smell for the kind of reckless developer with no oversight who "gets the job done" at great cost ... because they can do what they want, and then everything is siloed and potentially a disaster when anyone else has to touch it, or they bolt when it becomes clear their house of cards is ready to come down.

Not to say I disagree with your statement generally, just that term tends to make me worry.


I'd say that there are two kinds of 10x devs, as per who is doing the determination of '10x':

1. From the perspective of the money folks (managers & up): they get their jobs done very quickly, thus spending minimal time, meaning minimal money.

2. From the perspective of excellent systems design folks: they design an excellent system that addresses subtle problems and foresees and addresses system problems that will occur down the road.

Type #2 folks tend to not add to the org's technical debt; Type #1 folks don't GAF about such considerations -- they meet their quotas, get their perf reviews, and are on their way to their next promotion/position/project.

I'd say there is excellence for excellence's sake, and then there's money-making excellence. Sure, orgs need to make money (or at least break even) to survive, but there's a lot of overlap where being too much one way or the other will be an impediment to long-term viability.


And then at Boeing with #1 the doors fall off…


Be careful about generalizing about Boeing here; it's not really a good example. The door plug incident was entirely caused by managers, not bad engineering.

In this case, the managers knew that if they went back and removed the door plug then it would have to be inspected again. That would cost time, and the plane was already behind schedule. Their bonuses were on the line, so they got together to find a solution. The solution that they found was to skirt the inspection rules using semantics. They decided to have their employees loosen the bolts, shift the door plug a little, do the required work, and then put everything back. This allowed them to claim with a straight face that they hadn't "removed" the door plug, and therefore it didn't need to be inspected.


It was bad engineering full stop.

Yes, the root cause might stem from management, but good engineering would not have the doors flying off... thus, bad engineering. Regardless of everything else, engineers are responsible for their designs at the end of the day. (Yes, even when management only approves cheap, unsafe designs.)

Otherwise you are "just following orders" which is not a viable leg to stand on.


I disagree. First, remember that we're not talking about doors here, but walls. Specifically, a door plug, which is a type of wall segment that can be put into the space where room was left in the airframe for an optional door. If you cannot get that detail correct, maybe your opinion doesn't count for much.

Second, the steps for assembly of an airplane are all very important. If any of them are skipped or left out somehow, the plane will break! You can’t engineer your way out of this problem either: the more ways you add of attaching that door plug to the airframe, the more possibilities there are for mistakes. That’s why the assembly process requires one team to install the plug and another team to verify that installation was completed correctly and according to the specifications.

Any time you have managers using semantics to weasel their way out of the inspections that verify that the plane was assembled correctly, that’s the mistake. Full stop. Fire those idiots.


You absolutely can engineer your way out of those problems. There is always a simpler, easier way to put things together.

https://x.com/SpaceX/status/1819772716339339664

I think what happened is, Boeing codified all these labor-intensive manual processes back when they were riding high. The planes were selling well, they were state of the art. Now, 30 years later, it takes the same amount of effort to put everything together without mistakes, but the relative value of the finished product is less.


Not sure what that has to do with writing software. If I take your perfectly written code and run it on bad hardware that corrupts memory and a CPU that does math wrong (aka bolts not tightened all the way down), it's going to cause problems, regardless of whether it was a #1 or #2 type engineer that designed the system.


> the doors fall off

"That’s not very typical, I’d like to make that point."

https://m.youtube.com/watch?v=8-QNAwUdHUQ


I'm kind of in that boat as a "reckless developer". I come in and write little tools to help with people's workflows, automate something that was a manual process, or write a shell script to get the job done instead of doing it by hand. Some of these scripts can grow into big projects over time, depending on whether I'm the end user or someone else is. No one asks me to make these things; I just see an issue to resolve and I resolve it. I like to call myself a hacker, really, since it makes sense in the old terminology of what I do with the many hats that I wear.


> if you're not that person and you are curious and stop to think about second order effects, future features that are a little obvious, and so on ... it's really painful working with those developers who do not.

Not only that, but it's also really painful working in a system which doesn't really value your artwork because you're not on the product team, and shouldn't be making product decisions.


I understand your concerns around 10x Dev. May I suggest a different term more specific to this discussion? "Tool builder", as in, one who builds tools for themselves, and possibly shares them with others. I have worked with programmers who were not outstanding in terms of pure computer science, but could build a tool or two to get leverage, especially around system transparency/debugging, etc.


> It's disheartening that there is a whole group of people - developers and managers alike - who do not understand or see the value in exploration as part of the process.

To steelman: while Frank is sending Ethernet packets, we're stuck picking up the slack, doing things that are part of the job description, that are needed by the org as a whole. Why doesn't he just innovate on the actual problem we're working on, since there's a near-infinite backlog?

I think, ideally, everyone is given exploration time, but that requires a manager that can strictly defend it, even when upper management is made aware of it and inevitably says "we need <project> done sooner". It's also a problem when other managers become aware of it with "You're allowing that team to hire more to compensate for the "lost time"? We need those people!". It really needs to be an org level thing, which even Google gave up on [1].

"Unethical" solution: Pad your development time to include some of it.

[1] https://hrzone.com/why-did-google-abandon-20-time-for-innova...


> we're stuck picking up the slack

You may be on a team that has decided "developer burnout" is just an inevitable and acceptable cost of business.

> since there's a near-infinite backlog?

Which is a problem in and of itself. In any case, I'm only going to be able to give you like 4 real, solid hours of work a day on that log. The other 4 will be team coordination, managing that log, and maybe stress management while I hate-eat my lunch.

> everyone is given exploration time

Call it "skill investment time." We're in a fast moving industry and keeping heads down for too long is destructive personally and organizationally. It's also the pathway to getting more than the basic level of engagement above.


> You may be on a team that has decided "developer burnout"

No. Burnout is a work life balance thing more than a "doing work at work that is related to the business" thing. Fun can be had outside of work hours, just as everyone else who is employed handles it. You can give people meaningful work they're interested in and still keep it related to the actual business.

> Which is a problem in and of itself.

Absolutely not. If you don't have a near-infinite list, then that means you don't have a roadmap. Why aren't you thinking about the future?

> Call it "skill investment time."

Yes, and this can always be done in the context of the work. If it's not related to the work, or the business, then it's not an investment for the people paying you.


> Burnout is a work life balance thing

It's more than that. Perhaps you haven't been in a position to accept a lot of resignations in your career. Burnout is most definitely driven by the workplace itself, specific work assignments, and general office culture.

> don't have a near-infinite list, then that means you don't have a roadmap

Are you talking only about startups? That would make sense. In the more general case I doubt your assertion here. Does this fragment truly sound rational to you on a second reading?

> and this can always be done in the context of the work.

The very article this thread is attached to clearly shows how false this is.


10x developers (sweeping generalization here) are often not the ones that are beholden to a manager / product owner telling them what has the highest priority though. Whether that's because they won't be told what to do, are the manager themselves, or have no manager at all is not known to me.

That said, writing your own tools if applicable is useful, however I'd argue that a productive developer writes the least code and instead uses what's off the shelf. To use this article as an example, nobody should be writing their own TCP stack, there are perfectly fine existing implementations.

That said that said, writing your own stuff is probably the best way to get a deep understanding, and a deep understanding of things is a constituent part of the mythical 10x developer. I just wouldn't expect any employer to pay you for it.


A good manager recognizes that a good 10x developer needs minimal management. Not that they don't need managing, but that too much management throws a wrench in the mental gears and drags them down, burns them out, and forces them to quit. All they need is a light hand on the steering wheel to keep them pointed at the right problems.

But sibling comment is also right. The 10x developer has more "free" time outside of their Jira tickets. Some choose to focus on the next business problem, others drift and experiment.

Either way the business problem gets solved and you retain a very skilled employee. Their explorations may turn up something valuable, or just add to their own personal dragon hoard of skills, which then usually still benefits you in the end.


The reason a “10x developer” can work on whatever they want is because it only takes 10% of their work hours to complete their job requirements; they are then free to experiment and play with a lot of their time.


I second this notion. In my experience, the most efficient / effective developers get to work on all sorts of interesting problems because they can finish their "regular" work so quickly.


Yep. I've had both polar reactions:

"Wow that's really helpful. Your tool confirmed our model."

And

"Did you ask the PM first? You have to be careful with rabbit holes."

I usually just do it if it'll be under a day's worth of work. It's never gone wrong and never gone to waste. At worst I end up copy/pasting the code into something else later.


Me too. #2: Run away...


Recently at work I developed a small suite of tools (in a mixture of Python and shell, running in WSL) that left my boss impressed when he saw me debugging a customer's system (IoT).

Then he started asking me to make them accessible to non-programmers, and suddenly those tools became a lot more than I bargained for.


I'm not sure it's Jira-fication. I think it's mapping effort back to something that has a direct effect on the bottom line.

Example: I used to work for a company that basically made slot machines. We needed to produce something called par sheets to submit to get our slots certified and to give to the casino. I was given the project, but no manager wanted to give any developers to implement it. No one wanted to give the money to buy a computer with enough power to run simulations to produce the theoretical hold. But this was something we NEEDED to help get our games approved.


You left us hanging: So, what happened?


Since the title is rather vague, this is the start of a series about building a TCP/IP and Ethernet framing stack from scratch for a microcontroller. The author uses a chip (W5100) which can handle TCP/IP itself, but also supports handing it pre-built Ethernet frames, although the chip handles the preamble and CRC calculation. Most of the article is about trying to communicate with the chip itself, and sending a test packet (which I'm guessing is hardcoded, although it's not called out in the article).

(I was hoping it would be about bit-banging Ethernet on some improbable bit of hardware.)
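
For concreteness, the frame handed to the chip in this raw mode looks roughly like this (a sketch in C; per the above, the preamble/SFD and FCS are the chip's job, and the W5100 datasheet has the exact details):

    #include <stdint.h>

    /* Ethernet II frame as handed to a raw-mode MAC like the W5100.
       The chip prepends the preamble/SFD and appends the 4-byte FCS. */
    struct eth2_frame {
        uint8_t  dst[6];      /* destination MAC address */
        uint8_t  src[6];      /* source MAC address */
        uint16_t ethertype;   /* big-endian: 0x0800 IPv4, 0x0806 ARP, ... */
        uint8_t  payload[46]; /* padded up to the 60-byte minimum (sans FCS) */
    } __attribute__((packed));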


The RP2040 can be convinced to bit-bang Ethernet (yes even 100Mb/s) with only a transceiver. Part of the trick is to modify the common PHY + MagJack breakout boards to let the RP2040 generate the clock signal so that the signals are in sync and you don't have to oversample the RMII.

Just 10Mb/s can be done even dirtier. I'm still waiting for someone to combine an RP2040 with DVI/HDMI video output and Ethernet into a "modern" glass terminal (just telnet, SSH is probably asking too much).


VGA is much easier to produce, and the RP2040 can do 1280x1024@60 no problem. The official examples use several clock cycles per pixel (2 IIRC, but it might be even more), but you don't have to.

I made half a terminal with that output, aiming for something that ran the original VT100 ROM. I never finished the emulator part, but I did write video output (including scan line emulation) for VT100 memory -> VGA. Adding SSH once the rest works should be perfectly possible. (Not bit banging ethernet at the same time, but using some external chip.)

I should probably put that online somewhere. Or finish it and then put it online.


SSH is not asking too much! In the world of irony: HDMI/DP will be far harder than SSH.


But of course it has been done: https://github.com/Wren6991/picodvi (OK, it's DVI rather than HDMI - at low resolutions HDMI isn't more difficult, just more legally contentious.)


That's incredible - I missed this when it was first released!


Someone did that with an ATTiny85 years ago:

https://hackaday.com/2014/08/29/bit-banging-ethernet-on-an-a...


I took a strange career jump recently into FPGA engineering with a focus on Ethernet. It's been a fun journey, which culminated in me finally designing my own Hard MAC IP and sending a packet over some custom PHY IP. I'd highly recommend it for those looking to try a "Hard Mode" version of this challenge. I feel like networking is very abstracted from users, so understanding how Ethernet cards, modems and switches put together and pull apart sections of packets, and how the PHY/PCS recovers signals over a link was really valuable.


A while ago (about a decade, oh no) I was involved in a project which did direct TCP from an FPGA. As part of debugging and verification, I made it so you could hook up the simulated version from Verilator to the Linux TUN/TAP device, enabling you to connect directly into it from the developer machine without tying up the physical hardware (which we only had one of). Fun project.


Renode offers a similar approach; one could even route all simulated packets into Wireshark. It's really powerful when it works.


Sounds fun! Were you already familiar with networking/Ethernet when you started? If not, which resources - if any - did you use to get a broad overview of everything involved? My knowledge of networking ends at a very basic understanding of the OSI model, and I am very interested in taking a deep dive into the networking world.


I'll tell you how I learned: I got thrown head-first into a network switch design. I knew almost nothing about networking.

The most useful resource while I was learning was RFC1812 "Requirements for IPv4 Routers" [1]

It's an ancient document, written the same year I was born, detailing how future routers should be built on this relatively new thing called the internet. The language is highly approachable and detailed, often explaining WHY things are done. It is an awesome read.

To be honest you don't need to finish it. I only read the first few chapters, but I googled EVERYTHING I did not understand. The first few paragraphs took several hours. Talk to LLMs if you need a concept explained. Take notes. In a few days you'll have a very solid grasp.

[1] https://datatracker.ietf.org/doc/html/rfc1812


Wireshark, plus a book about using Wireshark. I have "Wireshark Network Analysis", which looks to be shockingly expensive right now - to be fair, it's worth whatever price, but maybe not while you're speculatively getting your toes wet - and there are others out there. I am nowhere near an expert (I've only worked through about 1/3 of that book - just enough to solve the problem I had at the time!), but that's what's taught me what I know, and I know it can take me further.


I wasn't at all, to be honest. I joined a multidisciplinary team and they were short an Ethernet SME. I said I'd be happy to jump in feet first and learn, as I was desperate to get into anything and everything FPGAs at the time, having become bored by the CPU world I resided in before. Mostly I learnt everything through building stuff using our user guides. Everything else was just reading: the Ethernet spec, random Cisco stuff online, Wikipedia, etc. I just searched for any word or acronym I didn't understand, and read up on it (meaning of 64/66b encoding, PCS vs PHY vs MAC, PTP, OTN, FlexE, FEC, ANLT, and so on). Once you get past the TLAs, it's actually pretty simple (except for PTP - that's the bane of my working life at the moment).


Are you me? This is exactly my job at the moment. Send me an email if you'd like to compare notes


First time seeing the retcons for MOSI/MISO: main out/subordinate in instead of master out/slave in. I'll use that so I can still refer to the pins as MOSI/MISO. The COPI/CIPO alternative (controller out/peripheral in) never worked its way into my brain stem properly.


Master/sub would be funnier for getting a reaction out of prudish Americans.


CAN bus almost got this right but someone scribbled "recessive" over "submissive". Or maybe they were thinking about strong and weak genes all along.


Gosh, thanks for translating that into terms I actually knew.


I'm baffled as to why the author is using an STM32F401 with a W5100 Ethernet shield when they could just as easily use an STM32F407 board, which includes a built-in Ethernet MAC, coupled with a cheap (2 for ~$12) Ethernet PHY board. Lots of example Ethernet projects are available, and it's just as easy to develop for as the STM32F401.

Also,

> Due to the complexity of the signalling involved with ethernet, a dedicated ASIC is generally used

In the context of microcontrollers, I think this is generally not true. In most cases Ethernet functionality is incorporated as an embedded peripheral within the microcontroller itself, as it is with the STM32F407 and the ESP32 (both of which happen to use the same Ethernet peripheral sourced from an external provider).


Author here - the reason is pretty underwhelming: these are the parts I had on hand when I decided to start on the project. Using a chip with a built-in Ethernet peripheral would definitely make more sense (though I'd be trading the complexity of configuring the W5100 for the complexity of configuring ST's peripheral). The networking code already abstracts the actual chip itself into a driver interface (think read/write/ioctl), so porting the code would be pretty straightforward.
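
For illustration, that interface is shaped something like this (a simplified sketch with made-up names, not the actual code from the repo):

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical chip-agnostic driver interface in the read/write/ioctl
       style described above; the names are illustrative only. */
    struct eth_driver {
        int (*init)(void *ctx);                                  /* bring up the chip */
        int (*read)(void *ctx, uint8_t *buf, size_t len);        /* receive one frame */
        int (*write)(void *ctx, const uint8_t *buf, size_t len); /* transmit one frame */
        int (*ioctl)(void *ctx, int request, void *arg);         /* link status, MAC, ... */
        void *ctx;                                               /* chip-specific state */
    };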

I'll look into the STM32F407 for the main series. Thanks


> though I'd be trading any complexity of configuring the W5100 for the complexity of configuring STs peripheral

It's not too bad as far as such things go. The documentation on how the DMA system works leaves something to be desired, but it's not bad (and it's a heck of a lot faster than spitting packets over SPI).


for some people the fun is just in learning, not necessarily doing the state of the art things.


> for some people the fun is just in learning, not necessarily doing the state of the art things.

Over a decade ago, when I was just learning Linux, I set out on a quest to turn a CentOS box into a router that could perform NAT between two different networks. I spent an entire weekend researching and following every suggestion I could find until it finally worked. I was so proud when my pings reached their respective destinations.

I took it apart the next day and never did anything more with it, but the journey and the reward were the fun of it.


Totally get and applaud that. Indeed, I built a high-performance Ethernet driver for the STM32F4 chips just because I wanted to say I've done it.

But his stated goal is to build a TCP/IP stack, not futz around with SPI and the particulars of an idiosyncratic network chip. There will be plenty of work (and learning) to do once he starts climbing up the network stack.


You must be new here. It's a law of nature around HN that whatever you post that you've done, there's some twit who will condescendingly lecture you on how you should have done it.


I'm not new here at all (this account was created in 2015). I know HN and its "idiosyncrasies" fairly well, but I like to call them out from time to time.

Be the change you want to see in the world.


Yep, what I've seen today from checking ESP32 boards with Ethernet is that most (or lots) of them use it as a serial device.


Ethernet deals with frames, not packets.

Packets are an IP concept :)


RFC 791, which defines IP, talks about both packets and datagrams. IP sends datagrams. Each IP datagram is fragmented into one or more packets depending on the underlying L2 network.

But RFC 791 dates from when the underlying L2 network was most likely ARPAnet, which had 128 byte packets. Today the distinction is (or should be) moot - if you're relying on IP fragmentation to send your large IP datagram over multiple L2 packets, you're doing something wrong. IPv6 doesn't even support in-network fragmentation.

In practice, the distinction has largely disappeared over time. Everyone I know talks about packets and very rarely bothers to distinguish between frames, datagrams and packets, because in practice one Ethernet frame generally carries one IP datagram, and everyone calls it a packet.
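
As a worked example of the (now mostly historical) fragmentation arithmetic: a 4000-byte IPv4 datagram sent over a 1500-byte-MTU Ethernet link carries 3980 payload bytes after its 20-byte header. Each fragment holds at most 1480 payload bytes (1500 minus a 20-byte header of its own), and fragment offsets are counted in 8-byte units, so you get:

    fragment 1: 1480 payload bytes, offset 0,   MF (more fragments) set
    fragment 2: 1480 payload bytes, offset 185, MF set
    fragment 3: 1020 payload bytes, offset 370, MF clear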


Only on HN could someone implement networking from scratch, only to be dismissed by someone in the comments implying they've no idea what they're doing.


I think the comment was implying that they got the word wrong, not that they had "no idea what they were doing". Bit overly dramatic.


The comment stated they got the word wrong; the implication is that it was out of ignorance.

Yet the article uses frame throughout where technically correct, and what was important to the author wasn't that any frame was transmitted, but that they got their packet to show up correctly.


I certainly disagree that the implication you read into it was there. I read the statement as a friendly bit of banter, not an implied accusation of ignorance.


Only on HN could someone implement networking from scratch, only to be dismissed by friendly banter in the comments implying they've no idea what they're doing, only for that to be misinterpreted as actual criticism, only for an analysis to be done on this comment thread :-) only for dang to appear and ask us to stop being so meta. Also, in TFA the font is 3 pixels too far to the left and it made it literally unreadable for me


Talking to a chip over SPI is not what I'd call scratch, lol


For Ethernet, packets are a physical layer concept. They encapsulate frames which are a data link layer concept.

https://en.wikipedia.org/wiki/Ethernet_frame


Packets are a TCP concept. IP sends datagrams :)


The TCP unit is called segment


No it's called a slice.


I thought UDP sent datagrams?


Both IP and UDP call their packets "datagrams".


Smartass. Did you understand the title anyways?


If you've got a hankering for wired ethernet on a microcontroller, several of the larger STM32 Nucleo boards [1] have 100Mbps ethernet built in - and at ~$25 they're pretty affordable.

Their 'STM32Cube' software gets mixed reviews, but it will spit out a working example of ethernet communication.

[1] https://www.st.com/en/evaluation-tools/nucleo-f439zi.html


The WT32-ETH01 is about $7 on AliExpress if you want something ESP32-based. As easy or easier to get started with than the STM parts, in my opinion. Comes with MQTT, HTTP clients & servers, etc.

When I last used Cube-MX, it was a very unpleasant experience throughout. I'd use stm32-hal or libopencm3 out of preference if I was using the parts again. The tool itself and the code it spat out had all sorts of nasty bugs and edge cases that cost days of debugging. Maybe it's improved since.

https://github.com/egnor/wt32-eth01


> WiFi is internal to the ESP32 and works "out of the box", but wired Ethernet takes a bit of configuration for the WT32-ETH01.

I recently discovered the ESP32 and my only complaint so far is that Ethernet (and with it PoE) is not a first class citizen of the platform.


I'm intrigued by this RISC-V ESP32-P4 board with Ethernet that should be around $20 ...

https://liliputing.com/waveshare-esp32-p4-nano-is-a-tiny-ris...

Edit: I just saw there are way cheaper modules, although probably not as performant, like the WT32-ETH01


>Their 'STM32Cube' software gets mixed reviews, but it will spit out a working example of ethernet communication.

Sometimes. On some hardware it's broken.


> Their 'STM32Cube' software gets mixed reviews

Thankfully there's more than one CMake project on GitHub as an alternative.


I think the middle ground of generating code using MX and taking it from there using cmake is a really good place to start.


There's a distinct "nineties" feel to STM32Cube.


Cube is built on Eclipse which isn’t too bad. If you want 90s, ARM and others have you covered. I still sling Dynamic C 9 from time to time.


Note that if you want to play with writing your own network stack on Linux, you can use `socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL))` and operate at a not-too-dissimilar level of abstraction to this.

       SOCK_RAW packets are passed to and from the device driver without
       any changes in the packet data.  When receiving a packet, the
       address is still parsed and passed in a standard sockaddr_ll
       address structure.  When transmitting a packet, the user-supplied
       buffer should contain the physical-layer header.  That packet is
       then queued unmodified to the network driver of the interface
       defined by the destination address.
https://man7.org/linux/man-pages/man7/packet.7.html
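
A minimal sketch of sending one hand-built frame that way (needs root or CAP_NET_RAW; the interface name and EtherType below are just examples):

    #include <arpa/inet.h>
    #include <linux/if_ether.h>
    #include <linux/if_packet.h>
    #include <net/if.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void) {
        /* raw packet socket: we supply the entire Ethernet header ourselves */
        int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
        if (fd < 0) return 1;

        struct sockaddr_ll addr = {0};
        addr.sll_family   = AF_PACKET;
        addr.sll_protocol = htons(ETH_P_ALL);
        addr.sll_ifindex  = if_nametoindex("eth0");  /* example interface */
        addr.sll_halen    = ETH_ALEN;
        memset(addr.sll_addr, 0xff, ETH_ALEN);       /* broadcast */

        unsigned char frame[60] = {                  /* 60 = minimum size sans FCS */
            0xff,0xff,0xff,0xff,0xff,0xff,           /* dst MAC: broadcast */
            0x02,0x00,0x00,0x00,0x00,0x01,           /* src MAC: locally administered */
            0x88,0xb5,                               /* EtherType: local experimental */
        };
        memcpy(frame + 14, "hello", 5);              /* payload; check sendto's
                                                        return value in real code */
        sendto(fd, frame, sizeof frame, 0,
               (struct sockaddr *)&addr, sizeof addr);
        close(fd);
        return 0;
    }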


And to do it from the other direction, you can open a tun (virtual IP) or tap (virtual Ethernet) interface just as easily. This adds a virtual interface to the network stack (e.g. a VPN interface), while a packet socket does the opposite and lets you communicate directly through a real interface.
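
Opening a tap device is only a few lines, too. A sketch, with error handling mostly trimmed:

    #include <fcntl.h>
    #include <linux/if.h>
    #include <linux/if_tun.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    /* Returns an fd on which read()/write() move whole Ethernet frames,
       e.g. tap_open("tap0"); bring the interface up separately. */
    int tap_open(const char *name) {
        int fd = open("/dev/net/tun", O_RDWR);
        if (fd < 0) return -1;
        struct ifreq ifr;
        memset(&ifr, 0, sizeof ifr);
        ifr.ifr_flags = IFF_TAP | IFF_NO_PI;  /* raw frames, no packet-info header */
        strncpy(ifr.ifr_name, name, IFNAMSIZ - 1);
        if (ioctl(fd, TUNSETIFF, &ifr) < 0) { close(fd); return -1; }
        return fd;
    }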


Achtually, I think you meant Ethernet frame. pushes glasses up


we should call them parcels at all layers and then everyone will be equally annoyed.


Technically correct is the best kind of correct.


Nicely done! I just spent my last 16h of work time implementing the reverse: parsing Ethernet II (with VLANs!), IPv4+6 & UDP up to an automotive IP protocol. The use case is to understand a proprietary bus-capture stream that forwards Ethernet frames (among other things) nested in Ethernet frames, which I receive on a raw socket. Fun stuff! Wireshark & ChatGPT are invaluable for this kind of task.
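
The VLAN handling is mostly just walking tag headers until you hit a real EtherType - something like this sketch (illustrative, not my actual code):

    #include <arpa/inet.h>  /* ntohs */
    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    /* Skip the Ethernet II header plus any stacked 802.1Q/802.1ad tags;
       writes the final EtherType and returns the payload offset, or -1. */
    ptrdiff_t eth_payload(const uint8_t *frame, size_t len, uint16_t *type) {
        size_t off = 12;  /* dst MAC (6) + src MAC (6) */
        uint16_t t;
        for (;;) {
            if (off + 2 > len) return -1;
            memcpy(&t, frame + off, 2);
            t = ntohs(t);
            if (t != 0x8100 && t != 0x88A8) break;  /* not a VLAN/QinQ tag */
            off += 4;  /* skip TPID (2) + TCI (2) to the next type field */
        }
        *type = t;  /* 0x0800 IPv4, 0x86DD IPv6, ... */
        return (ptrdiff_t)(off + 2);
    }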


A short story in the data analysis space, a place where clever and underpaid people are discouraged from materializing logic into tables.

It took me a while to realize the data consumers (those who need to look at and understand data) didn't have any orchestration tools, except a creaky cron-type UI scheduler.

For my work I refused to compromise and spend all my days clicking in a UI, so I searched around for a good tool and decided on Prefect. It served me well, but I didn't realize going in that Prefect/Airflow/Argo etc. are really orchestration engines, and you still need to write the client software around them to serve your own (or the team's) productivity.

For example: connections to sources, methods to extract from a source with a SQL file, where to put the data afterwards, injecting it into publication channels, etc. I gradually ended up writing functions to support all of this for my own personal IC work. I sank a ton of time into learning, had to learn how to develop Python packages (I chose Poetry), and on and on. And yet despite all that time spent grinding away on indirectly productive learning, I have still been more productive than my peers.

It was a mystery, so I spent time learning about my peers' workflows, which people are so cagey about at my company. Anyway, everybody just crams 100s of lines of business logic into Tableau custom SQL sources to avoid scrutiny of data artifacts like tables they might create on the data lake. I guess these are Tableau-flavored views, but it's so hard to read and understand all this logic in the UI -- oh, and the calculated fields too.

I guess to sum up: if I can keep up with peers who use the latest expensive enterprise "data storytelling" service, while I self-educate on everything from basic data engineering to stats presentation plots, Spark, Julia, Python, etc., then I have to conclude that Tableau should be considered harmful.


It should also be noted that you can write your own network stack on Linux with the help of AF_XDP [1].

[1] https://www.kernel.org/doc/html/v6.9/networking/af_xdp.html


How does this compare to a packet socket?


I really like digging into the "magic" like this. If anyone interested I have a notebook that goes from here to being able to send pings across the internet (which actually are packets): https://github.com/georgek/notebooks/blob/master/internet.ip...

I was going to implement TCP, but got bored (it suddenly becomes way more difficult)...

It's just so amazing to me to behold the entire network stack and the fact it actually works (even if IPv4 is clinging on at this stage).
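
For anyone following along: the fiddliest part of crafting the echo request is the RFC 1071 internet checksum, which fits in a few lines (a C sketch of the standard algorithm):

    #include <stddef.h>
    #include <stdint.h>

    /* RFC 1071 internet checksum, as used in the ICMP echo (ping) header. */
    uint16_t inet_checksum(const uint8_t *p, size_t len) {
        uint32_t sum = 0;
        for (; len > 1; p += 2, len -= 2)
            sum += ((uint32_t)p[0] << 8) | p[1];  /* 16-bit big-endian words */
        if (len)
            sum += (uint32_t)p[0] << 8;           /* odd trailing byte */
        while (sum >> 16)
            sum = (sum & 0xffff) + (sum >> 16);   /* fold carries back in */
        return (uint16_t)~sum;                    /* one's complement of the sum */
    }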


As a hobbyist I wish the electronics gods would invent a way to bit-bang real-world protocols. It is possible with simple protocols, but when we move to the MHz range, it becomes impossible. Some kind of co-processor which would work at, say, 100 GHz with a very limited instruction set, so I could implement some simple protocol parsing and buffering for more complex processing at lower speeds. Probably not possible, but who knows...


The RP2040 and its newer sibling can be clocked north of 100 MHz and have special hardware called PIO (programmable input/output) that runs at one instruction per cycle and can do multiple IO operations per instruction.

That's a microcontroller. I wonder if more powerful chips can do more.


The main issue with that is that the PIO does not have decent support for external clock inputs, let alone any kind of clock recovery. It can easily send a 100MHz signal, but it can't reliably receive one.

The PIO allows for single-cycle or dual-cycle data output, which essentially just pulls bits from a FIFO and puts them on some pins. The "side-set pin" makes it really easy to generate a clock signal as well. If the target device samples on the rising edge, it becomes essentially "output data, clock low; nop, clock high; implicit repeat" or even "output data; implicit repeat" if the target can do its own clock recovery or uses DDR. This is great, because you can output a lot of data at quite high speeds.

But the opposite is a lot harder. The PIO has no native clock support, so you're left simulating it yourself with a multi-instruction operation. If you need even as little as two instructions to receive a bit, that 133MHz MCU clock already limits you to a 66MHz signal! In theory you could use an external clock input - but those pins are limited to 50MHz. You could also feed the external clock into the main oscillator input - but that breaks a bunch of other functionality, especially if the external clock isn't reliable. Oversampling isn't an option either: you'd have to read the pins at 4x the signal frequency (limiting you to 33MHz) and dedicate a lot of computing resources to that.

In other words, high-speed input is only really an option if you don't care about synchronization, such as a free-running logic analyzer. But bitbanging 100BASE-T Ethernet? Don't count on it. Even the 2-bit-wide 50MHz RMII interface has proven to be quite challenging.


Thanks for the insights. I've only written one PIO program, and it's for a DHT22 sensor. I divided the clock down to 1 MHz so I could count microseconds. Really, CPU bit-banging would have worked fine at those speeds.

Now that I think about it more, you're right. Best case scenario for reading a bit would be to wait for the pin to go high/low, then push the corresponding bit into the input shift register, and have that automatically flush to DMA every 32 bits, say. Can't do better than some fraction of the clock speed because of the multiple instructions needed.


You're basically talking about FPGAs. Sure, for >500 MHz (depending on the FPGA) you'll need to use the integrated transceivers, but they're flexible and support the physical layer of many protocols.


This is basically what FPGAs offer, although above 1GHz the transceiver design becomes difficult and expensive. 1GHz should be possible on hobbyist-priced parts.

100GHz is .. well, can anyone find any part with that capability? Seems to top out around 10GHz for ethernet transceivers.


I think he meant 100MHz. In case he actually meant 100GHz, I would gently remind him that light only travels ~3mm in 10ps


Raspberry RP2040/2350 microcontrollers are more than capable of bitbanging up to 100-300 MHz range by using PIO and HSTX. Higher end with some overclocking.



What use cases are you imagining where you need arbitrary data output and processing at 100GHz speeds? It's my understanding that even 100GbE is running at a fraction of those frequencies.


To be able to spend multiple cycles processing a bit. Or process multiple bits arriving at the same time. It might also be necessary to measure the signal multiple times. Maybe 100 GHz is too much... For example, I wanted to bit-bang FM radio by measuring the antenna signal; that's around 100 MHz, so I need to probe around 200-300M times per second and perform at least minimal processing, I guess.


I realize FM radio is strictly an example, but would you not rely on bandpass sampling? Where you sample at some multiple of your bandwidth and rely on the spectral replication effect to get your waveform.

Ref: https://en.wikipedia.org/wiki/Undersampling (funny enough, this article explicitly calls out the FM radio use case).
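
For reference, the usual bandpass-sampling constraint from that article, for a band spanning f_L to f_H and a positive integer n:

    2*f_H / n  <=  f_s  <=  2*f_L / (n - 1),   1 <= n <= floor(f_H / (f_H - f_L))

Taking the whole FM broadcast band (f_L = 87.5 MHz, f_H = 108 MHz), the largest allowed n is 5, which gives f_s between 43.2 and 43.75 MHz - far below the 200-300M samples per second estimated above.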


You need a very stable clock for that, which was also called out as a thing. With some PLLs you could lower the needed frequency. I think you're really looking for a small FPGA though.


There's also XMOS (effectively, software-defined hardware). But it ain't going to do those speeds.


I used the W5100 before with Arduino but I am not sure about "gold standard" quality. I had a lot of issues with random disconnects several times a day with this board. Maybe it's the board, or maybe it's the arduino code or the connection. Or maybe it was my code. I don't know. Eventually I switched to a pi and never had any connection problems anymore.

When it comes to TCP connections, Linux seems more robust.


How quickly can you get from tinkering with Ethernet on an Arduino to, e.g., creating a 10Gbit router with an FPGA?

I don't want to sound bad, but it's a bit like writing BASIC programs on a C64 today? Fun, entertaining, but is it actually useful for developing skills in this domain?


If there is ambition to work at high speeds and feeds I would jump straight to FPGA and bypass anything like this WIZnet ASIC which simultaneously does too much and too little to be relevant to that pursuit (but it is great if you have the exact use case of wanting to add Ethernet to an existing MCU).

Doing Ethernet at the logic design level is a lot of hard work but it isn't exactly a mystery at this point unless you are working on next generation speeds and feeds. A book on digital design, a book on HDL of choice, and a willingness to read specifications (IEEE 802, SFP MSA) and datasheets (FPGA MAC, third party PHY chips) is what you need to be successful.

NetFPGA collates a lot of teaching materials and example code for this but the major FPGA vendors also have relevant works. Ignore any suggestions that you need formal education in the space, it's simply not true.


At a minimum, 4 years in college. The things required to build the 10Gbit router are things you will never run into, learn, or be challenged by while bit-banging SPI to an existing Ethernet peripheral.

Pretty much by definition, the engineering required for "high performance" anything is completely divorced from the knowledge required to implement basic systems.


>Fun, entertaining, but is it actually useful for developing skills in this domain?

Yes. Yes, always. You always learn, no matter what you do. Why build an 8-bit CPU when we have very complex 64-bit ones nowadays? Because the fundamentals are mostly the same.


The physical layer is different. The data transferred across it is exactly the same.


Back in the late 90s, when there was very little hand-holding, I built a little web server board using an 8051. It was fun to learn the internals of a TCP/IP stack, and magical when it served its first web page. Dang was it slow though!


I'll plug my own attempt at doing this a few years ago, https://www.youtube.com/watch?v=H8w0eFXaXjI

The moment I received my first packet on a cut-up wired headset I used as a makeshift transceiver, it felt like something clicked and I just began to understand more about how the universe works. I recommend projects of this type and wish more folks did them.


Nice. I like those Ethernet chips. These days, you can find MCUs with built-in Ethernet hardware, but I still enjoy working with simple AVRs, so the w5500 (basically an upgraded w5100) is my go-to.

I wrote a little framework (bootloader, utilities, etc.) to allow for updating firmware and tunneling stdout on AVRs over pure Ethernet.

https://github.com/jakemoroni/Micro-Protocol-SDK


I was recently bringing up a custom board with a W6100 chip and it’s funny how closely my experience paralleled this one.

I also had a (different) issue with SPI communication (the W6100 uses a slightly different protocol).

I still couldn’t get any Ethernet packets to send/receive with a real machine, so I did basically the same debugging step: bought an off the shelf W6100 dev board and planned to use their arduino library to compare against. That’s where I left that project.


You may also want to consider the ENC28J60 'Stand-Alone Ethernet Controller with SPI Interface', which lacks a hardware TCP/IP stack and is purely an Ethernet MAC and PHY accessible over SPI. This simplicity means it should be easier to get started with sending an Ethernet packet. But of course, if you want TCP/IP, you will need to supply your own software implementation instead.


Interesting to see the Law of Leaky Abstractions at work: https://www.joelonsoftware.com/2002/11/11/the-law-of-leaky-a...

I admire this kind of low-level, first principles exploration which I personally don't do enough of.


Is there already a "networking from scratch" type of content somewhere -- not focused on implementing it in hardware -- but maybe a tool for interactively playing with bits, bytes, packets, frames, etc. to simulate each layer of the stack and see how 2 or more devices on a network, including switches and routers, might communicate with each other?


I remember many many years ago, learning about networking, I got very fixated towards constructing and sending my own packet over the wire. Good times.

I'll never forget Packetfu (https://rubygems.org/gems/packetfu)


Incidentally, once the author gets his Ethernet interface working, he could use this to test it: https://github.com/jaylogue/tiny-echo-server


I've done a similar thing with the exact same microcontroller! However it was on the "Black pill" board.

It was really fun and surprisingly helped me get ahead in the networking class I'm taking.


This guy's YouTube channel (Low Byte Productions) is an absolute gem if you like low-level stuff.


This is great, I love reading these low-level debugging war stories.


As an exercise in self-learning, I thought this was wonderful.




> Due to the complexity of the signalling involved with ethernet, a dedicated ASIC is generally used, which takes in data at the frame level, and takes care of wiggling the electrical lines in the right way over a cable (though there are of course exceptions).

Like others in this thread I was hoping I’d see an RJ45 breakout board directly hooked into an STM32 and doing onerous bit banging and keeping it as low level and basic as possible.


Author here. CNLohr already did it better than I could 10+ years ago: https://www.youtube.com/watch?v=mwcvElQS-hM




