A Chip to Bridge the USB 2 – USB 3 Divide (hackaday.com)
106 points by zdw on March 7, 2022 | 58 comments



What goes unstated is that the core problem here (lack of USB2 on a USB3 connector) is _NOT_ standard. This is generally only a problem on the plethora of shit ARM/etc SoCs using garbage IP, where the vendor is only interested in checking a box rather than actually providing a working/compliant implementation of the specification.

Part two of this problem is the USB2 implementations that don't provide USB1 Transaction Translators (TTs), so keyboards/etc won't work.

And as a third note, the Pi4 actually has good USB3 because it's implemented by a third-party XHCI controller like one might find on a random PCIe card. Combined with a USB3 hub with multiple TTs, it should be possible to attach a number of SDRs etc. at reasonable rates. Good luck figuring this out from the USB3 hub's details unless it lists the chip it's using.


> the core problem here (lack of USB2 on a USB3 connector)

Not sure if that's the core issue here? It's one of the issues, but there are more uses for that chip, like what the introduction hints at: sharing USB3 bandwidth among multiple USB2 devices.


Yeah, this.

A person not familiar with this situation would think that if a USB3 hub had USB2 ports on it, it would do packet level buffering and switching so that the 5-10Gb/s of the USB3 upstream would be shared among the 480Mb/s USB2 ports. It should be reasonable to get 8-10 USB2 ports all operating at their maximum bandwidth off of 1 USB3 port. After all, that's how Ethernet works.
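
For a rough sense of the arithmetic, a back-of-the-envelope sketch in Python (the 80% efficiency figure is an assumption for illustration, not a spec number):

    # How many saturated USB 2 links could a USB 3 uplink carry, in theory?
    USB2_MBPS = 480                                  # USB 2.0 high-speed signaling rate
    USB3_RATES = {"Gen 1": 5_000, "Gen 2": 10_000}   # SuperSpeed signaling rates

    EFFICIENCY = 0.8  # assumed usable fraction after encoding/protocol overhead

    for name, rate in USB3_RATES.items():
        ports = rate * EFFICIENCY / USB2_MBPS
        print(f"{name}: ~{ports:.0f} full-rate USB 2 ports")  # ~8 and ~17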


And the reason that doesn't and can't work with USB2 is that, unlike Ethernet, USB is extremely closely tied to its physical layer, and you cannot actually buffer and switch USB packets at all (without cooperation from at least one side). All communications are host-driven and stateful, the timeouts for replies are on the order of single-digit microseconds, and the receiving end has no obligation to accept data. So there is no way to safely buffer data and forward it at a later time, because that time might never come, and the mere act of accepting data from one end desyncs both ends' ideas of what data was successfully transferred, which can have higher-level protocol implications. USB is a clusterfuck of a standard.


XHCI is extremely buggy on Linux with recent stable kernel versions. For example, if I plug a USB3 DAC into the front panel, all my USB devices shut off. XHCI spews error messages in dmesg while this happens.


TBH that sounds like your hardware is bad. I've had issues like this, and when I captured the USB traffic with tcpdump (using usbmon), the hardware was always malfunctioning in response to correct messages sent by the kernel.
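
For anyone who wants to try the same thing, a minimal sketch of reading usbmon's text interface directly (requires root and `modprobe usbmon`; the debugfs path is the one documented in the kernel's usbmon docs):

    # Tail raw USB transactions for all buses via usbmon's text interface.
    USBMON = "/sys/kernel/debug/usb/usbmon/0u"  # "0u" aggregates every bus

    with open(USBMON) as mon:
        for line in mon:
            # Fields: URB tag, timestamp (us), event (S=submit, C=complete,
            # E=error), type:bus:device:endpoint, status, length, data...
            print(line.rstrip())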


You cannot diagnose/debug host controller hardware/driver issues with usbmon. It does not work at a low enough layer. It's only good for debugging issues with downstream devices. You also cannot tell whether certain problems are the host controller's fault or the downstream device's fault with it. You need a physical USB protocol analyzer that works at the packet level for that (usbmon works at the transaction level).


That's exactly my point. The OP claimed that the kernel's XHCI support is faulty; usbmon will rule that out.


It won't, because you can't know if the misbehavior is caused by a device or the host controller when you're looking behind the host controller. Everything you see in usbmon is influenced by both. And besides, a malfunctioning device can't cause your entire USB controller to bork; that indicates a bug in the USB controller or its driver by definition, making usbmon inherently unreliable in that scenario.


Whether it's the device or the host controller is irrelevant here. The OP claimed the fault was in the kernel's XHCI implementation. Usbmon is enough to rule out the kernel in the vast majority of instances. That is the subject of this thread. No one ever claimed usbmon can differentiate between errors in the host controller and the device. "Hardware issue" includes both and does not include the kernel.


I don't know how you plan to tell apart kernel xHCI driver bugs from xHCI controller bugs with usbmon, which has nothing to do with the xHCI layer. It's at a higher layer. It doesn't show you anything related to xHCI internals. You can't know if any given behavior came from the HC or the kernel, what happened really, or anything else.

Source: I wrote the virtual xHCI implementation in QEMU (and worked around broken Windows drivers in it), I have experience with this.


I’d be willing to bet my life that the kernel’s more than a decade old and globally deployed XHCI implementation is not the cause of his USB issues.

Is it technically possible that usbmon will say that it sent proper data to the device when in fact it didn’t due to a bug in the kernel XHCI stack? Yes it is. Is it likely? very much not.

Usbmon was invented for this exact use case: debugging usb issues under the assumption that there is no fault in the kernel’s usb stack. It should be used before even considering a USB protocol analyzer. Basic intuition tells us it’s a hardware issue, either controller, hub, or device.


> I’d be willing to bet my life that the kernel’s more than a decade old and globally deployed XHCI implementation is not the cause of his USB issues.

I wouldn't, because I've found and fixed bugs in the kernel's xHCI implementation, and I still regularly panic or oops my kernel with dodgy USB devices, which is by definition a kernel bug (and a security one at that). USB is an overcomplicated standard and extremely difficult to implement properly.

> Is it technically possible that usbmon will say that it sent proper data to the device when in fact it didn’t due to a bug in the kernel XHCI stack? Yes it is. Is it likely? very much not.

As someone who works with USB regularly, I very much disagree with your assessment.

> Usbmon was invented for this exact use case: debugging usb issues under the assumption that there is no fault in the kernel’s usb stack.

Usbmon was invented for debugging USB issues with a single device, under the assumption that there is no fault with the kernel's USB stack and the controller. Once you're having controller-global issues that could be caused by either of those, usbmon is not useful because that assumption no longer holds.

> Basic intuition tells us it’s a hardware issue, either controller, hub, or device.

We clearly have very different intuitions here. I would absolutely split my bets on it being a controller or kernel issue.


> I still regularly panic or oops my kernel with dodgy USB devices

All intuition differences aside, if your intuition about whether a USB bug is in the kernel is based on your experience dealing with dodgy devices, then shouldn't your intuition agree with mine? The root cause tends to be dodgy hardware.


The device is buggy and the kernel is buggy. The device is buggy because it did something stupid; the kernel is buggy because it crashed in response. That's two independent bugs. Sometimes it isn't even buggy devices, just devices disconnecting at the wrong time. I've crashed my kernel with things as simple as devices rapidly reconnecting or simply ceasing to respond/timing out.

Most of the kernel bugs around this are state/race issues. Device disappears in the middle of kernel code that doesn't have proper error recovery, boom. That's a kernel bug that applies in normal circumstances too, not just under an adversarial device model, because USB is designed to be hotpluggable at any time. Doesn't matter if the device disappeared because it crashed or because the user yanked the cable; it's just easier to reproduce with a crashy device. And the error recovery logic is notoriously hard to get right, which is why I'm not surprised the kernel is still buggy after all these years.


> which is why I'm not surprised the kernel is still buggy after all these years.

allegedly still buggy :)


Definitely still buggy; I know for a fact it still oopses when certain USB things go wrong, which is by definition a bug.


What? Why don't you report this bug or submit a patch? If you have, can you link me to your report or patch?


>Good luck figuring this out from the USB3 hub's details unless it lists the chip it's using.

Any recommended hubs with the right chip?


>What goes unstated is that the core problem here (lack of USB2 on a USB3 connector) is _NOT_ standard.

It does say this at the end of the article.


Semi-related: Is it possible to monitor USB hub bandwidth in a meaningful way?

The only time I've ever noticed bottlenecks is when things stopped working or worked intermittently, and sometimes the kernel reports (via log messages) that USB is overloaded... but it strikes me as odd that we have top/iftop/radeontop/intel_gpu_top/etc. but I'm not aware of anything for monitoring USB hubs.

EDIT: well crap, I guess I never looked; there exists USBTop: https://github.com/aguinet/usbtop -- leaving my stupidity so others may learn.
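
In the same spirit as usbtop, a rough sketch that tallies per-device throughput from usbmon's text interface (field positions per the kernel's usbmon documentation; requires root and `modprobe usbmon`):

    import time
    from collections import defaultdict

    USBMON = "/sys/kernel/debug/usb/usbmon/0u"  # all buses, text format
    totals = defaultdict(int)
    start = time.time()

    with open(USBMON) as mon:
        for line in mon:
            fields = line.split()
            # Count only completed URBs with a parseable byte count.
            if len(fields) < 6 or fields[2] != "C" or not fields[5].isdigit():
                continue
            _, bus, dev, _ = fields[3].split(":")  # e.g. "Bi:2:003:1"
            totals[f"bus {bus} dev {dev}"] += int(fields[5])
            elapsed = time.time() - start
            if elapsed >= 1.0:                     # one-second snapshots
                for key, nbytes in sorted(totals.items()):
                    print(f"{key}: {nbytes * 8 / elapsed / 1e6:6.1f} Mbit/s")
                totals.clear()
                start = time.time()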


Wireshark can read USB communication.


I love me some packet-sniffing fun!


You might get more useful things from /sys/kernel/debug as well, but you'll need to get familiar with the spec. It can show you how a device is doing something wrong and how things could be made better.
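
For example, /sys/kernel/debug/usb/devices (the successor to the old /proc/bus/usb/devices) has the whole device tree; a minimal sketch to pull out the topology and descriptor lines:

    # Print topology (T:) and descriptor (D:, P:) lines from debugfs.
    # Requires root; debugfs must be mounted at /sys/kernel/debug.
    with open("/sys/kernel/debug/usb/devices") as f:
        for line in f:
            if line.startswith(("T:", "D:", "P:")):
                print(line.rstrip())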


Someone needs to make this into a product. It's silly that there isn't a single USB 3 hub on the market that can use USB 3 bandwidth to connect a large number of USB 2 devices at their full bandwidth. I've really wished for such a device in the past and I think it would sell pretty well.


It is impossible to implement that. The USB standard for some inexplicable reason refused to specify USB 3 to USB 2 transaction translation. That's why it doesn't exist. There is no way to implement it that works as you would expect. It's just not possible. I will never forgive the USB-IF for this gaffe, it is ridiculous.

There are two nonstandard ways to do it. One is what this chip does, which is pretend all USB 2 devices behind it are USB 3. It literally virtualizes and rewrites descriptors and does a whole pile of horrible hacks to get it to work. It works most of the time, but not always, and it is actually impossible to make it 100% reliable due to peculiarities of the standard. USB is impossible to tunnel or masquerade transparently at the link level due to its design, and since this device tries to do that to bridge incompatible link types, it can never be 100% spec compliant; there will always be corner cases where it breaks down. It's a hack.

The other way, which is implemented by some other niche hub chip whose name I forget, is to use a proprietary protocol to encapsulate USB2 in USB3, effectively making a USB2 host controller that just happens to connect to the host CPU via USB3. That can be done in a 100% spec-compliant and reliable way as far as downstream devices go, but then of course the upstream side is not standard, so you need special drivers for it.

There will never be a USB 2 to USB 3 translating hub that works as you'd expect because it's just not possible to build one, because the USB-IF are notoriously bad at designing standards. Seriously, so much about USB is a giant clusterfuck, from USB2's differential-but-not-really physical layer to the monster that is USB-PD.


There are two other ways. One is to use a USB 3 to PCI Express bridge and then a functional PCIe USB controller chipset. I'm not aware of any such bridge silicon existing.

The second way is what you said: to skip the bridge and create a USB host controller that connects to the host via USB. Again, I'm not aware of the existence of any such silicon.

Both could be implemented without any standards fuckery. Both would require new silicon and new host OS drivers.

You could make your own such device by gluing together USB host and device controllers with an ARM chip or somesuch. But again, you'd need a host OS driver.


Both of those ways are what I said, done in different ways. PCI-in-USB3 is not a standard protocol. No matter what you do, there is no way to make it work within standard class protocols only.

> There are two other ways. One is to use a USB 3 to PCI Express bridge and then a functional PCIe USB controller chipset. I'm not aware of any such bridge silicon existing.

Not USB3, but this is what Thunderbolt docks/hubs do: use embedded PCI USB host controllers for the downstream ports. Which is great! But not for long: the USB-IF took over Thunderbolt and, surprise surprise, USB4 includes a new USB3 tunneling mechanism but still not USB2. Yes, really. USB4 docks are expected to still have a ridiculous sidecar USB2 hub just handling USB2 duties and squeezing all that traffic into the USB2 lane all the way to the host. Because that makes sense. Of course, since PCI tunneling is still a thing and standard in USB4, at least you can ignore that nonsense and keep doing it the Thunderbolt way, but I think we're going to see fewer and fewer solutions that do it this way, and USB4 docks which do USB3 tunneling and USB2 passthrough will be a regression over TB3 docks that include full host controllers. You can thank the USB-IF for this.

> The second way is what you said: to skip the bridge and create a USB host controller that connects to the host via USB. Again, I'm not aware of the existence of any such silicon.

Such silicon exists, I just can't find it right now. It's another very niche thing in a similar vein to the VL67x from another manufacturer.


usb4 still preserves a legacy usb2 connection pair.

on the one hand it's kind of absurd. on the other, a long-distance-capable, supported-everywhere wire protocol is great to be able to assume. displayport's aux channel is a great example of why keeping legacy connectivity around on a second track pays off: it makes low-grade kvm-like connectivity ultra easy & low cost. it's, ultimately, a pretty good thing that the 480mbit/s pipe is available for use by cheap & common devices. i don't see what else we could do.

(aside from mandating that each usb4 hub integrate a usb2-over-usb4 adapter like this on each port, which probably makes sense, but also probably would be uncertifiable, and would be likely to cause some random device misbehaviors... whitequark will probably have found a dozen random constraints on this chip by the end of the week.)


How wild is it that 480 Mbit/sec is low speed and low cost?

Like, it makes sense on one level, but it's just crazy to me how fast that still is compared with PS/2 (16kbps), RS-232 (115kbps), CAN bus (1Mbps), USB 1.1 (12Mbps), 10/100 Ethernet (100Mbps), FireWire (400Mbps).

Obviously totally inadequate once you're talking to solid state storage or if there's video/display data in the mix. But still... 480Mbps is fast.
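
To put rough numbers on that, a quick back-of-the-envelope in Python using the rates listed above:

    # Ratio of USB 2.0's 480 Mbit/s to each of the legacy rates above.
    legacy_bps = {
        "PS/2": 16e3, "RS-232": 115.2e3, "CAN bus": 1e6,
        "USB 1.1": 12e6, "10/100 Ethernet": 100e6, "FireWire": 400e6,
    }
    for name, bps in legacy_bps.items():
        print(f"USB 2 is ~{480e6 / bps:,.0f}x {name}")  # e.g. ~30,000x PS/2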


What USB-IF should have done is specify USB3 to USB2 and USB4 to USB2 transaction translation, like they did for USB2->USB1. It doesn't mean they need to get rid of the USB2 wires, but it means utilizing the 5Gbps+ USB3 bandwidth to host multiple downstream USB2 devices without bottlenecks would be at least possible, and you can always fall back to USB2 hub mode if USB3 is not available.

By refusing to make that part of the spec, they have doomed us to always being limited to a total of 480Mbps per host port for USB2 devices, no matter what you do or how many you have sharing it, whether that host port is USB3 or USB4 (unless you use PCI mode). That is ridiculous.


Hello dear masterworker of systems. I reply humbly here, as a neophyte to a master, when I suggest that maybe, possibly, perhaps, it could be really really hard to specify & guarantee exactly how to ensure that usb2 capability would be entirely, 100%, assuredly unaffected by a usb translation layer. I also think it eases certain system design constraints.

I'm not an expert. The evidence that USB2 was able to translate & carry usb1 is a good counterexample to my premise: the work was done, it worked out fine, and usb1 carries fine over usb2. For a while, though, I believe the USB transaction translator[1] was not totally commonplace: I remember articles specifically highlighting which usb2 hubs had this feature. I don't remember this feature ever causing compatibility problems, but usb1 had such a small feature profile, such limited concerns, that this task feels like it was more manageable. Once we start adding isochronous profiles & perhaps various other usb2 gimmicks, I imagine the transaction translation problem becoming much more thorny; I imagine trying to ensure that usb1, usb2, and usb3 devices would all translate successfully would be much harder. I'm sympathetic in the extreme to the notion that there should be a spec, that we should have tried to define something (and perhaps a specification without a mandate would have been great, as usb2 did). But I also still feel like it would/could have been a risk, could have become thorny. That doubt doesn't convince me, but I want to acknowledge it.

I also think there are some modest technical advantages. Making a hub that passes through usb4 but also exposes a variety of simple peripherals is much easier without mandating the transaction translator. It's super convenient to make a hub that passes through usb3/4 but offers a variety of audio, keyboard, mouse, et-cetera connectivity options at a reduced speed: one can just drop some cheap-as-dirt usb2 hubs into the solution & glue in those worlds of peripherals. Demanding all connectivity happen over usb4 & usb4 alone would have mandated much, much more expensive solutions. That said, I'm not convinced the savings are worth the cost. If we had simply mandated that we do good & build better, I suspect over time we would have found the cost of doing better to be negligible.

[1] https://en.wikipedia.org/wiki/USB_hub#Transaction_translator


> For a while, though, I believe the USB transaction translator[1] was not totally commonplace: I remember articles specifically highlighting which usb2 hubs had this feature.

You are misremembering. All 2.0 hubs have TTs. The question is whether they have one (same problem as USB3, all ports share a single 12Mbps pipe in 1.0 mode) or multiple (can aggregate multiple 12Mbps devices without bottlenecks). What USB3/4 have done is doom us to a world equivalent to multi-TT hubs not existing. Except it's worse because with single TT hubs, at least you can still chain multiple to a parent hub and aggregate without bottlenecks at that level.
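
If you want to check what a given hub advertises, here's a sketch using pyusb (assuming the pyusb package is installed and you have permission to enumerate devices); for USB 2.0 hubs, bDeviceProtocol in the device descriptor distinguishes single-TT from multi-TT:

    import usb.core

    TT = {0: "full-speed hub (no TT)", 1: "single TT", 2: "multiple TTs"}

    for dev in usb.core.find(find_all=True, bDeviceClass=9):  # class 9 = hub
        # SuperSpeed hubs report protocol 3; .get() falls through for those.
        label = TT.get(dev.bDeviceProtocol, f"protocol {dev.bDeviceProtocol}")
        print(f"bus {dev.bus} device {dev.address}: {label}")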

> Once we start adding isochronous profiles & perhaps various other usb2 gimmicks, I imagine the transaction translation problem becoming much more thorny

Thorny, perhaps. Impossible, no. This is trivial to prove: sticking a PCI USB2 host controller behind a Thunderbolt/USB4 bridge works fine, and that is for all intents and purposes a USB2 to TB/PCIe "transaction translator", and PCIe itself is more like Ethernet and can be tunnelled over almost anything. The USB host transaction layer (which is a pretty 1:1 map to xHCI) is tunnelable just fine (this is how USB-over-network protocols work too, with host support). It's the wire layer that isn't, because once you drop to that layer you can't reconstruct or control the original transaction states properly. What USB2 did for TTs is extend the protocol in such a way that the missing information/cases could be handled properly, with cooperation from the host. There's no reason why they couldn't have done this for USB2 to 3 too.

> Making a hub that passes through usb4 but also exposes a variety of simple peripherals is much easier without mandating the transaction translator.

As I said, they could still keep the USB 2 lines. Having TT support in the standard doesn't mean hubs have to implement it. We'd still have the differentiation between TT-ful and TT-less hubs if you want. The problem is, right now, TT-ful USB3 hubs aren't possible. I agree that having the 2.0 lane in Type C makes a ton of sense. Just let me tunnel it over the SS lanes too!

Source: I've written USB support for projects way too often, including a bare-bones OHCI driver once. I also authored the virtual xHCI implementation in qemu.


> displayport's aux channel is a great example of why keeping legacy connectivity around on a second track pays off: it makes low-grade kvm-like connectivity ultra easy & low cost

On this topic, I wish it was used a bit more.

What do you mean by that? Do you actually tunnel keyboard and mouse data through that aux channel? I am not aware of it being supported by any OS. Or are you only referring to DDC data, using it to switch inputs? (Which you can also do on HDMI and VGA screens.)

If there is any alternative to HDMI CEC on the DisplayPort side, I'm all ears. That and standardized audio support are features where HDMI currently stands out, IMO.


That kind of mess, but worse, is what thunderbolt used to do. It would cut off USB signals (2 and 3) when the port switched modes, and downstream USB support depended on pcie->usb chips.

https://www.reddit.com/r/UsbCHardware/comments/mjz2pu/usb4_a...


That's better, because with the PCIe layer and a real host controller on the dock/hub, you finally get 480Mbps per port for USB downstream devices. Thunderbolt hubs are the only standards-compliant way to do this, today.

USB4 with native USB3 tunneling but no native USB2 tunneling is a regression over TB3, and as silicon switches from the TB3 way to the USB4 way with native tunneling, we're now bringing back this problem, all thanks to the USB-IF's incompetence at designing standards. There is absolutely no reason for USB3 and USB4 not to support USB2 encapsulation/transaction translation the same way USB2 did with USB1.


> That's better, because with the PCIe layer and a real host controller on the dock/hub, you finally get 480Mbps per port for USB downstream devices.

The hypothetical I was comparing against when I said 'worse' was also delivering 480Mbps per port.

The reason I say worse is because this chip converts USB to USB, while those hubs did PCIe to USB, which is much more complicated and a security hazard and doesn't work on many devices. It wasn't about the merits of tunneling as a general idea.

Also do thunderbolt hubs have a controller per port? Not just one per hub?


A single USB xHCI controller is not bottlenecked to 480Mbps globally, since the multiple ports aren't actually sharing a 2.0 physical layer. The "root hub" isn't like a normal hub in that sense. So you don't need a controller per port, as long as the controller is designed to support this.

Quoting the xHCI spec:

> In past generations of USB host controller implementations, there was a 1:1 correspondence between a host controller interface and USB bandwidth. The xHCI diverges from this model in that it enables vendors to tailor the bandwidth available through its root hub ports to the needs of the vendor’s target application space. The xHCI can support the legacy model where the bandwidth of a single USB is shared across all its root hub ports, a “bus per port” model where the full bandwidth of a USB is available on every root hub port, or any combination in between.

I believe most real 3.0 implementations do it per port, or at least have a bunch of instances, because it doesn't really make any sense to do otherwise. You're not really saving much money by somehow having a global bottleneck equal to the PHY bandwidth. Of course, it's possible for the controller to be bottlenecked upstream in some other way (e.g. by DMA bandwidth), but it would be very weird to find an xHCI controller with a single 480Mbps domain for all 2.0 ports IMO.


This kind of translation is already done to turn 1.5 and 12Mbps packets into 480Mbps ones. USB 1.1 didn't do that, and 1.5Mbps data effectively stole bus time from higher-speed traffic.


> if you have a USB 3 hub plugged into a USB 3 port, multiple USB 2 devices plugged into it still cannot break through the USB 2 uplink limit of 480 MBps.

I had a weird project where I ran into this. Or rather, I was actually aware of it and tried to work around it. I wanted to attach a large number of optical drives with SATA-to-USB converters to a single port, basically a hot-attachable ripping tower kind of thing. To ensure a single port would provide enough bandwidth, I needed USB 3.0 throughout. But the SATA to USB 3.0 converters that exist seem to have...shaky support for optical drives, as they're only really meant for SSDs/HDDs. My project ended in at least temporary failure due to that unreliability.

I remember reading about this device and not finding it a reasonable solution. In addition to not being easily available, I'd need one device per port to break the uplink limit. That wouldn't save me any money and it would be a cabling nightmare. If the chips were a reasonable price and easily available I might have looked into it more seriously anyway though.


> USB 2 uplink limit of 480 MBps.

That should be "Mbps"; USB 2.0 maxes out at 60 megabytes per second. Odd that even hackaday gets this wrong from time to time.
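
For reference, the conversions (the Gen 1 number assumes 8b/10b line coding, before protocol overhead):

    print(480 / 8)          # USB 2.0: 480 Mbit/s = 60 MB/s
    print(5_000 * 0.8 / 8)  # USB 3.0 Gen 1: 5 Gbit/s * 8b/10b = ~500 MB/s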


> Odd that even hackaday gets this wrong from time to time.

Is it? Isn't 'hackaday' just various unvetted user contributions?


I haven't frequented them in a while, but it used to be a mix of guest articles and articles written by staff, commonly about user submitted stuff.


Oh OK, you're probably right (I've ended up there occasionally, e.g. via HN, but never frequented it); I just misunderstood.


There's a bunch of weirdness. Lack of USB2 to USB3 translation is one of them, but there's also a lack of USB3 and USB2 concurrency.

E.g., if I put a USB 2 device on a bus with a USB 3 flash drive, I would find constant stalling due to the USB2 bus activity. It even went so far that the output of `lsusb -t` would show the speed downgraded to 480Mbps at the bus level and not just the port. This has since been changed, but I still notice stalling and bad performance. This is supposed to be fixed in USB 3.1 and 3.2, but I'm not sure we're going to see it in practice. The original spec does not say the two are supposed to run at the same time, but the spec authors have made it clear that they were supposed to operate concurrently.
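
One way to spot that kind of downgrade without parsing `lsusb -t` is to read the negotiated speed straight out of sysfs; a sketch (no root needed):

    import glob, os

    # Each connected device exposes its negotiated speed in Mbit/s:
    # 1.5, 12, 480, 5000, 10000, ...
    for dev in sorted(glob.glob("/sys/bus/usb/devices/*")):
        speed_path = os.path.join(dev, "speed")
        if not os.path.exists(speed_path):
            continue  # interface nodes lack a speed attribute
        with open(speed_path) as f:
            speed = f.read().strip()
        name = "?"
        product_path = os.path.join(dev, "product")
        if os.path.exists(product_path):
            with open(product_path) as f:
                name = f.read().strip()
        print(f"{os.path.basename(dev)}: {speed} Mbit/s ({name})")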


My pet peeve are webcams. You can only plug in a few before you run out of bandwidth.


Maybe you'd have more luck using USB 3 drives directly, unless you actually require a 3.5-inch form factor? Another option might be to use IDE drives...


I'm using optical drives in my case, so 5.25". But yes, I believe I'm going to end up removing USB from the equation entirely and just go with a straight SATA connection, possibly with a port multiplier solution.


I ran into a similar problem when trying to extend a Mac Pro 4.1 with a USB-C/Thunderbolt card and a USB-C to 4x USB-A hub on my desk. High-speed USB devices like an HDD worked just fine, but my mouse and keyboard did not. The reason took me a while to track down: these "add Thunderbolt"-style cards carry only a USB3/TB PHY and pass USB2 D+/D- right through to a connector which you are supposed to plug into your motherboard's onboard USB connector.

I was just thinking "wtf, who invented this madness... are you serious?"... Also, how much money would it have cost Intel to put a USB2 PHY into the TB chipset? A couple of cents, maybe... that's just bonkers.


Reading the comments, apparently the Valve Index uses one of these chips. Supply is so bad that nobody is selling this chip.


I posted this in the Hackaday comment thread as well: If I am not mistaken, Valve instigated the creation of this part as an element of the VirtualLink standard. This standard achieved four lanes of DisplayPort and USB 3.0 over a USB-C connector by reusing the USB 2.0 pins and using a USB 2 to USB 3 bridge.

I was on the Oculus side of VirtualLink. It made sense as a standard and overcame some of the shortcomings of USB-C, but never really got off the ground.


You sound like an expert, so can I ask: at what point does it make more sense to create a custom connector rather than a custom chip?

Not creating a custom connector is how we ended up with the Nintendo Switch dock debacle, and to this day legions of Switch owners refuse to charge their Switches with any random USB-C cable (despite it being perfectly OK).


Was there any actual problem besides 15V not being available on many chargers?


Certain aftermarket docks would destroy Switches in certain circumstances. The Switch had lower voltage requirements than a particular protocol specified.


In my biased opinion USB 2 fallback is a misfeature. Most devices that use USB 3 for a good reason do terrible things when run at USB 2 speed, and it requires expertise to figure out why things aren't going the way they're supposed to.

On the other hand a mouse and keyboard will work pretty much the same...


The main problem I’ve observed with fallback is that 3.0 devices plugged into 3.0 ports sometimes get detected as 2.0 devices. Sometimes there’s a spontaneous port disconnect and the reenumeration causes it to be detected as USB2.0.

If I recall correctly, this was primarily happening with a non-standard optical repeater cable, so it could be that the cable was doing something unexpected (e.g. 3.0 enumeration was taking longer than some timeout and it was falling into 2.0 mode).

In general though users preferred functionality, even if performance was reduced.
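
For what it's worth, a sketch of how you might flag that failure mode with pyusb (the VID/PID pair is a hypothetical placeholder for whatever SuperSpeed device you care about):

    import usb.core, usb.util

    WATCH = {(0x1234, 0x5678)}  # hypothetical VID/PID of a known USB 3.0 device

    for dev in usb.core.find(find_all=True):
        # dev.speed may be None on some backends; anything below SPEED_SUPER
        # means the device fell back from its expected SuperSpeed link.
        if (dev.idVendor, dev.idProduct) in WATCH and dev.speed != usb.util.SPEED_SUPER:
            print(f"{dev.idVendor:04x}:{dev.idProduct:04x} enumerated below "
                  f"SuperSpeed (speed code {dev.speed}); try re-plugging")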


that's not what this chip is for/does. from the second paragraph:

> If you have a USB 2.0 device and a host with only USB 3.0 signals available, this chip is for you.


A USB2 to USB3 divide? Curse you, social media.

If only there were a chip to bridge other divides.



