If you have to debug at that level, and you're not designing hardware, things are really bad.
Some years back, Wes Irish at Xerox PARC tracked down one of the great mysteries of early coax Ethernet - why was throughput so much lower than theory predicted? For this, he got both ends of a building-length coax with many machines on it connected to one office, so he could plug both ends into a storage scope. If the waveforms disagreed, somebody was transmitting when they shouldn't. Storage scopes with large storage were rare then. It was an expensive LeCroy unit.
After the end of each Ethernet packet on coax, there is a brief "quiet time", and then the next packet can be sent, beginning with a sync pattern. The hardware detects if what it is sending does not match what it is receiving, which indicates a collision. Both senders stop, wait a random time so they don't collide again, and retry. This is how "carrier sense multiple access - collision detection", or CSMA-CD, works at the hardware level.
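For readers who haven't seen it spelled out: the retry logic is the classic truncated binary exponential backoff. A rough Python sketch of the idea (send_bits/jam/wait_quiet/sleep are hypothetical stand-ins for what the interface hardware actually does, passed in just so the sketch is self-contained):

    import random

    SLOT_TIME = 51.2e-6   # 10 Mbit/s Ethernet slot time: 512 bit times

    def transmit(frame, send_bits, jam, wait_quiet, sleep):
        # Truncated binary exponential backoff, roughly as the hardware does it.
        for attempt in range(1, 17):             # give up after 16 attempts
            wait_quiet()                         # carrier sense + interframe gap
            if send_bits(frame):                 # False means a collision was detected
                return True
            jam()                                # make sure everyone saw the collision
            k = min(attempt, 10)                 # backoff exponent capped at 10
            sleep(random.randrange(2 ** k) * SLOT_TIME)   # wait 0 .. 2^k - 1 slots
        return False                             # "excessive collisions"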
This setup revealed that something on the cable was transmitting a single spike after the end of each packet, during the "quiet time". That reset the "quiet time" timer in the network interface, which inhibited the transition to "look for sync" mode. So the next packet would be ignored.
The quiet time timer was at a very low level - software did not see this event.
What came out of looking at the waveforms was the surprising result that the spike during the quiet time was not coming from either the data source or the destination, but from something elsewhere on the cable. The spike was not synchronized to the packet just sent.
With the waveforms for both ends of the cable visible, speed of light lag revealed both that this was happening and where it was coming from, as distance along the cable.
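To make the locating trick concrete: if the spike reaches the two ends of the cable at slightly different times, the time difference gives its position directly. A back-of-the-envelope sketch in Python (the cable length, velocity factor, and measured delay are made-up numbers, not from the actual investigation):

    C  = 3.0e8     # speed of light, m/s
    VF = 0.77      # assumed velocity factor for thick coax
    v  = VF * C    # propagation speed on the cable

    L  = 300.0     # assumed cable length, metres
    dt = 400e-9    # spike seen 400 ns earlier at end A than at end B (assumed)

    # With the source x metres from end A:  t_A = x/v,  t_B = (L - x)/v,
    # so  dt = t_B - t_A = (L - 2x)/v,  giving  x = (L - v*dt) / 2.
    x = (L - v * dt) / 2
    print(f"offending transmitter is ~{x:.0f} m from end A")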
It turned out that several brands of network interface used a part which contained the quiet time timer, the sync recognizer, and the transmitter power controller. When the timer ran out, the device did a state machine transition, and during that transition, for a nanosecond or so, the transmitter turned on. It wasn't supposed to do that. This generated a spike on the cable, resetting every quiet time timer and causing the next packet to be silently ignored by all stations.
The network interface didn't need to be active to do this. Being powered on was sufficient. One device with that part could halve the data rate on a coax Ethernet. Thousands of network interfaces had to be scrapped to fix this.
As an electronics guy, sometimes it is really reassuring to look at what happens on the wire, because it bisects your problem space.
E.g. I had a student once who was frantically trying everything on the Raspberry Pi to fix some motor control script he had made. I suggested he just write minimal working code first and then check whether there was a signal on the wire. Turns out the wire was not connected.
Especially with hairy problems, looking at the actual signals can be useful. Granted, doing so for UDP/Ethernet frames might be overkill, but it is nice to see it can be done.
Wow -- that's an incredible story! (Was this ever formally written up somewhere? Definitely feels like it deserves it!) I am especially curious about using the lag; it sounds like both ends of the cable were plugged into the same scope? That must have felt exhilarating to finally find!
Also, some things never change: the scope that Matt was using for this is (still) an expensive LeCroy unit...
Yes, both ends of the cable were plugged into the same scope. I got to see the scope with the waveform at PARC, because I was asked to take a look at the result.
There was quiet faxing of waveform pictures to certain IC and board manufacturers. Not much publicity. This was in the 1980s. Ethernet was a niche product.
Huh - I believe Wes gave me a flying lesson this summer. That or there's another ex-Xerox PARC guy training out of Palo Alto who looks like him! Cool story :)
I know a little bit about Ethernet, having used it since the early 10Base-2 days as well as having done a bit of "hacking" in an effort to decode 1000Base-T.
This article is focused on the QSGMII PHY chip, without which it would be practically impossible to use an oscilloscope for any meaningful troubleshooting.
1000Base-T is the most commonly used Ethernet variant today. It uses four balanced differential pairs for signalling, and each pair transmits and receives simultaneously. This works because 1000Base-T is a point-to-point connection and the PHY contains a hybrid that subtracts the voltage being transmitted from what is received. This makes the problem of "tapping" the interface harder, because a third-party observer sees a jumble of data and cannot easily determine what data is coming from either of the two connected devices.

Each pair can send/receive five possible symbols, represented by voltages of +3.5, +2, 0, -2, -3.5 volts. In addition, the PHY "scrambles" the data being sent with an LFSR (https://en.wikipedia.org/wiki/Linear-feedback_shift_register) seeded with either a "master" or "slave" value. (The master/slave identity is negotiated between the two peers when they detect each other.)

It gets even more complex with Hamming codes and an abstraction layer to separate the control/data planes. There's also a complicated DSP for line equalization.
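To make the LFSR part concrete, here's a minimal Python sketch of a 33-bit side-stream scrambler. The tap positions are the clause 40 master/slave generator polynomials as I remember them (x^33 + x^13 + 1 and x^33 + x^20 + 1), so treat the exact taps, seed, and bit ordering as assumptions:

    def scrambler_stream(seed, nbits, taps=(12, 32)):
        # 33-bit Fibonacci LFSR. taps (12, 32) ~ x^33 + x^13 + 1 (master);
        # the slave side would use (19, 32) ~ x^33 + x^20 + 1.
        state = [(seed >> i) & 1 for i in range(33)]   # shift register bits
        out = []
        for _ in range(nbits):
            out.append(state[32])                      # scrambler output bit
            fb = state[taps[0]] ^ state[taps[1]]       # feedback
            state = [fb] + state[:-1]                  # shift everything down
        return out

    # Data bits get XORed with this stream before the PAM-5 mapping; the link
    # partner runs the same LFSR (seeded per its negotiated master/slave role)
    # to descramble.
    keystream = scrambler_stream(seed=0x1F2E3D4C5, nbits=8)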
Good question – it's actually sampling at 50 GS/s (the scope's maximum sample rate), then upsampling by interpolation.
The higher sample rate is useful, because it effectively gives higher precision when finding the zero crossings; however, you could reduce the .wfm size by sampling at 50 GS/s on the scope then upsampling on the computer.
That would be better. Upsampling on a scope is dangerous, it's just too easy to fool yourself about what you're really seeing. Much better to use a slightly more clever algorithm on the PC than just "nearest crossing".
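One "slightly more clever" option is plain linear interpolation between the two raw samples that bracket each crossing; that recovers sub-sample edge timing from the 50 GS/s record without ever storing an upsampled waveform. A rough numpy sketch (not the article's actual code):

    import numpy as np

    def zero_crossing_times(samples, fs):
        # Sub-sample zero-crossing times by linear interpolation between
        # the two raw samples that straddle each sign change.
        s = np.asarray(samples, dtype=float)
        i = np.where(np.signbit(s[:-1]) != np.signbit(s[1:]))[0]  # bracketing indices
        frac = s[i] / (s[i] - s[i + 1])     # fractional position of the crossing
        return (i + frac) / fs              # crossing times in seconds

    # e.g. times = zero_crossing_times(raw_adc_samples, fs=50e9)
    # (raw_adc_samples being the hypothetical array loaded from the .wfm file)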
Also just that the interpolation and curve fitting algorithms on scopes are often utter garbage.
We tested the 5 Series predecessor to your 6 Series and I was amazed to see 10 or so volts on a 3.3V logic signal... until I remembered to check the interpolation setting. There was, in fact, no measurement above 3.3V (OK, within the usual tolerances), but the dumb shit machine had made things up and displayed them as if they were real. Even though they would have meant there was a serious hardware fault (not impossible - it was a new design going through bringup and there were in fact higher voltages on the board!), nope, it was the fault-finding tool that was faulty.
I've had zero interest in the 5 Series since that demo week. Terrifying machine. (And it was supposed to have been "debugged" by that time.)
You can use the 8b/10b decoding directly on your Tek MSO scope to do most of the preprocessing here. You can even trigger on errors. It would be easier than analyzing the analog stream directly.
I'm not sure I follow. I believe you are describing what is often called equivalent-time sampling. That, however, obviously works only on strictly periodic signals. UDP packets would be a fairly large waveform, requiring a correspondingly large buffer, and they would have to be repeated with very precise timing. Further, I don't see the need for such an absurdly high sampling rate for signals transmitted at a mere 5 Gbps (50 GS/s - itself a pretty high sampling rate - should be plenty). I sense some confusion.
The Signal Path has a teardown of the Keysight UXR oscilloscope, which has a BW of 110GHz and samples 4 channels at 256 GS/s per channel, with 10-bit resolution: https://www.youtube.com/watch?v=DXYje2B04xE.
In general the slow part is getting the data out of the ADCs, so by interleaving multiple ADCs you can increase the speed. To reduce costs, if you have a repetitive signal, you can trigger the ADCs at different points in the waveform to build up a better idea of what is going on.
For scopes that can do these high sample rates, a common demo of how fast they are is to use two lasers and have the scope capture the beat between them.
That's a trick called equivalent-time sampling. It basically shifts the sample clock around a little bit while looking at the same waveform. The catch is that it only works on repetitive waves, and you are still limited by the analog front end's rise time.
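A toy numpy illustration of the idea, with made-up numbers: each acquisition runs at the real-time rate, but the trigger point is shifted by a fraction of a sample each pass, and interleaving the passes reconstructs the repetitive waveform at a much higher effective rate:

    import numpy as np

    f_sig = 5e9                      # repetitive test signal, Hz
    fs    = 1e9                      # real-time sample rate of one pass, Hz
    n_acq = 16                       # passes, each with a shifted trigger delay
    n_pts = 64                       # samples per pass

    signal = lambda t: np.sin(2 * np.pi * f_sig * t)

    # Each pass samples the same waveform, delayed by k/(n_acq*fs).
    passes = [signal(np.arange(n_pts) / fs + k / (n_acq * fs)) for k in range(n_acq)]

    # Interleaving gives an effective rate of n_acq*fs = 16 GS/s, but only
    # because the signal repeats identically on every trigger.
    waveform = np.stack(passes, axis=1).reshape(-1)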
The fastest scope I know of does 256 GS/s with interleaved ADCs. Analog bandwidth is 100 GHz. Here's a recent submission on the bad boy:
There are so many levels of mind-boggling engineering between the analog electrical signal going into the PSU and the packets travelling across the wire/air that I take for granted. This post really helps me visualize how complex even the simplest of network processing truly is.
For oscilloscope/logic analyzer work I've encountered Sigrok as the go-to tool (though I have not used it myself yet, lacking the hardware at the moment). They have suites of decoders for nearly every protocol, which can also be stacked on top of each other.
You can also write your own sigrok decoders pretty easily. I wrote one for decoding IR remote control codes once as part of debugging a device that converts codes from one remote into different codes for a different device.
It wasn’t the employability that attracted me to software (back then, it was still a nascent industry, so the employment was still spotty); it was that software doesn’t really have limits.
The physical realm is full of limits. Much of electronic design is about working within hard, physical boundaries (I was an RF engineer, so there were a lot of boundaries). For example, you could see the “ringing” in the scope traces (those little “bumps” near the corners of the voltage level transitions). Ringing can have serious consequences in real life, and things like track length, or even solder burrs, can affect it. Then, you have attack and decay, which can be affected by things like cable length. Basically, there’s no such thing as a perfect square wave.
Also, when you are running at GHz frequencies (not really an issue, back then), every solder burr is a microwave transmitter. That’s fun.
Software didn’t have these “fences.” I could go pretty much wherever my imagination took me.
I’ve always been a cantankerous bastard. I don’t like being told where I can’t go.
Knowing the physical realm helps a lot, when it comes to understanding software, though. I’m glad for it.
Speaking of oscilloscopes, is there any analyzer you'd recommend that is versatile and budget-friendly? Mainly for reading voltages and capturing data packets. Under $100 USD.
Can those scopes transmit the data in real time somehow?
It would be neat because then you could wrap the decoding bits in a library that implements the libpcap interface. Then any existing tool like tcpdump/tshark/zeek would just work, and it would effectively be a software-defined network tap.
Considering that the data rate of the configuration specified at the scope is ~14Tbps, not really. Even the data rate that the scope is interpolating from is ~745Gbps, while 400Gbps Ethernet is still fairly rare and mostly limited to backbones.
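Offline, though, the libpcap half of the idea is easy: once frames have been recovered you can write them into a standard pcap file and point tcpdump/tshark/zeek at that. A minimal sketch, where decoded_frames is a hypothetical list of (timestamp, raw Ethernet frame bytes) pairs:

    import struct, time

    def write_pcap(path, frames):
        # Classic pcap: global header, then (record header + frame bytes) per packet.
        with open(path, "wb") as f:
            f.write(struct.pack("<IHHiIII",
                                0xA1B2C3D4,  # magic (microsecond timestamps)
                                2, 4,        # format version 2.4
                                0, 0,        # thiszone, sigfigs
                                65535,       # snaplen
                                1))          # linktype 1 = Ethernet
            for ts, frame in frames:
                sec, usec = int(ts), int((ts % 1) * 1_000_000)
                f.write(struct.pack("<IIII", sec, usec, len(frame), len(frame)))
                f.write(frame)

    # decoded_frames is hypothetical: [(timestamp_seconds, frame_bytes), ...]
    decoded_frames = [(time.time(), b"\x00" * 60)]
    write_pcap("capture.pcap", decoded_frames)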
> ... this means we should catch 1-3 UDP packets.
> After hunting down a USB key, I ended up with a 191M .wfm file to process.
Assuming 3 packets, that's about 64 MB of raw data per Ethernet frame. The tshark output shows an 82-byte frame. That works out to roughly 818,400:1 overhead - close to 1 MB of .wfm data per byte of packet data.
I suppose you could do some processing or compression closer to the input, but then you're just building a NIC at that point.
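(The ratio above is just the straight division, e.g.:)

    per_frame_wfm = 64 * 2**20          # ~64 MB of .wfm per captured frame (191 MB / 3)
    frame_len     = 82                  # bytes in the decoded Ethernet frame, per tshark
    print(per_frame_wfm // frame_len)   # -> 818400, i.e. roughly 818,400:1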
We are actually doing that now at https://fmad.io for SDR instead of raw oscilloscope ADC. SDR is about an order of magnitude less data, e.g. 6 GS/s instead of 60 GS/s, with Matlab as the backend instead of Python, as it's a bit faster when mining terabytes of data for that ah-ha WTF bit.
Surprisingly cool going from raw IQ/analog data to Ethernet frames, much like the OP has written.