USB3: Why it's a bit harder than USB2 (ktemkin.com)
237 points by panic on Oct 7, 2020 | 154 comments



I recently learned that USB3 is not only badly designed, flaky and unreliable, it is also an EMI/RFI nightmare. This is woefully understated in the article when it says:

"It's hard not to generate harmful interference."

We built a prototype sensor payload that included a USB3 external hard drive. Started suffering broad spectrum interference that stomped all over L-band (Iridium and GPS) reception. Made the entire system unusable!

Some references on USB3 noise:

USB 3.0 Radio Frequency Interference Impact on 2.4 GHz Wireless Devices (by Intel) https://www.usb.org/sites/default/files/327216.pdf

see especially Figure 2-2: "the data spectrum is very broadband, ranging from DC to 5 GHz"

USB 3.0 Interference - Cradlepoint Knowledgebase https://customer.cradlepoint.com/s/article/NCOS-USB-3-0-Inte...

"USB 3.0, or SuperSpeed USB, uses broadband signaling that can interfere with cellular and 2.4GHz WIFI signaling. This interference can significantly degrade cellular and 2.4GHz WIFI performance. Customers using cellular networks or 2.4GHz WIFI networks near USB 3.0 devices should take measures to reduce the impact of these devices on their network connectivity. Please note that interference is generated by both the actual USB 3.0 device as well as its cable."


From my own experience, it is very hard to engineer a device that can run WiFi at full speed alongside USB3, especially something like a smartphone or tablet, where the USB3 pins come straight out of the SoC.

In fact, we tested existing laptops on the market for WiFi/USB3 coexistence, and only one laptop ever passed flawlessly. It was a fairly old Sony Vaio, where the USB3 lanes were physically routed under an RF shield from the controller to the port.


Is this a uniquely USB3.0 problem? I've been in the consumer electronics space for 10 years & from the very beginning I would hear reports about problems with 2.4GHz WiFi any time you were transferring data on USB (& usually EEs try to tackle this with shielding if USB + WiFi coex is important). I have not heard of this as being unique to USB3.

I have never heard of interference with Iridium & GPS from USB2/3, and I worked on a team that was responsible for GPS at Apple, but it's entirely possible there's shielding needed to account for this & I just wasn't closely involved in it. If this were actually an unsolvable problem though, I would expect CarPlay & Android Auto to have a problem when you use your phone for navigation & start playing audio through USB. Maybe that's not enough traffic frequently enough to generate the noise needed to make it unusable, maybe the EE problem you were having was different, or maybe there's just good shielding in phones to avoid this as a problem.


> Is this a uniquely USB3.0 problem?

https://usb.org/sites/default/files/327216.pdf

Paper by Intel about USB 3.0 interfering with 2.4 GHz WiFi. The fact that the USB-IF hosts this paper on their own site seems to be a strong endorsement of their conclusions too.

It is worth noting that the RF noise generated by USB 3.0 covers a fairly wide spectrum from basically zero up to the mid-4 GHz range, but it gets discussed mostly in the context of interference with 2.4 GHz because most USB 3.0 capable devices don't have any other radios in that range to interfere with.


I didn't say USB3 doesn't cause this interference. I was saying I don't recall this being a uniquely USB3 problem (unfortunately all I have to go on here is my memory that we had desense issues with USB + 2.4GHz when I worked on the Palm Pre & Jawbone Jambox).

Given the report is 8 years old, I find it hard to believe this isn't something that's actively designed around. Hell, you can buy USB3 WiFi dongles that do 2.4 GHz. If this were an unsolvable critical flaw, such a device wouldn't be possible to build. Even Table 4-2 indicates different mouse models have different performance characteristics, indicating not all 2.4 GHz devices are similarly sensitive to the problem.


My Lenovo 2.4 GHz mouse and MS Sculpt keyboard suffered from interference when the receiver dongle was plugged directly into a USB 3 hub with other USB 3 devices, and this is a well-known issue. I fixed it by buying a shielded USB cable to move the receiver away from the interfering USB 3 hub and closer to the keyboard/mouse themselves.


Same issue with a Logitech mouse, an MBP, and a USB3 hub dongle. Buying an old-school 4-port USB2 hub and running the cable away from the dongle solved the problem. Gotta love the future.


It's not just USB 3, Thunderbolt 3 also has an interference problem. I had to get a 1 foot extension cable for my Logitech Unifying receiver plugged into a CalDigit Thunderbolt 3 dock to prevent my mouse from freezing. My wife had to do the same with her Unifying receiver plugged into a LG 5k Thunderbolt monitor.


RF coex will always be an issue, and it's especially bad when you bury a dongle with a tiny inefficient antenna in a mess of shielded high speed data cables (thunderbolt, displayport, ethernet, USB, etc.)


I had to do the same with my MS sculpt keyboard and Lenovo wireless mouse.


For years now I've used a Logitech wireless dongle paired to a mouse + keyboard.

During all those years the only way I could avoid random drops was by connecting the dongle to USB 2.0 ports.

I've had this issue both on an old Dell L502x laptop and now on my current ASRock X370 motherboard, which doesn't even have native USB 2.0 ports.

Sad to know this is actually a design issue, and thus may repeat on any computer I acquire in the future.


Your dongle wouldn't use the USB 3.0 SS lines; they would be inactive, so it is not active interference that is the problem here.


I knew it! I had a very comfortably shaped 'gaming' mouse, but it used a specific 2.4GHz adapter. Any time I did a large file transfer over USB 3.0 (any flash drive replicated the issue), the mouse would become jumpy and unusable, plus WiFi speeds dropped to maybe 10% for the duration of the transfer. I ended up giving the mouse to a friend and going back to my first gen MX Master.


Also, if the mentioned "key" used on each end is well-known, or the XOR approach is not cryptographic in quality, then anyone could listen to this EM output and compromise your "hard wired" data which you'd (forgivably) expect to be secure, a la TEMPEST. Depending on the strength of the signal, this could be a serious problem.


Way back when, I thought I remembered that usb3 or thunderbolt was going to be an optical interface, but I guess that was too tough in practice. Would have been nice though to avoid EM interference issues.


I think it was Corning who put out a 200m thunderbolt 2 fiber cable


They exist but are fairly expensive and are passive.


Experienced this first-hand. I had a crappy WiFi connection at home, started to investigate, and found out that signal quality degraded a lot when my Seagate SSD was connected over USB3.


My desktop has really flaky WiFi. Its WiFi adapter and a few peripherals are connected using USB3 ports. I just thought I was too far from the router (it's a room away through two walls). Is it possible switching them to the 2.0 ports could make my signal better?


Maybe. Although switching the WiFi router into something like 802.11b-only or a-only mode and selecting a fixed, least-busy channel might be a better long-term solution, as USB3 isn't the only source of EMI.


Why exactly do you recommend archaic WiFi standards that are long gone?


It's a simple way I know that significantly improves reliability of wifi in most circumstances.


I doubt that 802.11b is more reliable than 802.11n, which offers MIMO. But even if that were true, the speeds of 802.11b are abysmal by today's standards.


I'm not aware of USB interfering with anything 2.4GHz. If you're on 5GHz I've not seen USB2/3 coex be a problem. Unless your dongle vendor did a bad job, USB dongles should be designed to minimize the impact of this interference because all they do is WiFi + USB (i.e. it shouldn't matter if you are on 5 or 2.4). That being said, it's a simple thing to try, so why not?

Does "flaky" here mean connection going in & out vs inconsistent & bad speed?

Assuming we're talking about speed rather than the connection itself, two walls can be a challenge if you're on 5GHz, depending on the quality of your router & dongle. Do you have any old devices on your network? Your WiFi router will run at the speed of the slowest client. Can't recall if camping is a problem - I'm less clear about the details here, but it should be. If your 802.11ac dongle is on the same frequency as 802.11n devices, performance will be degraded off the bat, so consider moving those devices onto a different frequency if you have a dual-band router (e.g. wireless printers or older wireless TVs can be causes).

If the connection itself is flaky, check if the RSSI is really bad. Anything >= -70 dBm is good & anything <= -80 dBm is pretty bad. Note the negative sign: -90 is worse than -80. If you're on 2.4GHz you could also be having issues from poorly shielded electronics (e.g. old microwaves) or cordless phones.

Hope this helps!


> I'm not aware of USB interfering with anything 2.4GHz.

https://usb.org/sites/default/files/327216.pdf "USB 3.0 Radio Frequency Interference Impact on 2.4 GHz Wireless Devices"


Please give a generous reading of what I wrote. Off-hand, I was referring to any WiFi dongles he would have likely bought, not an 8-year-old report of problems encountered at the time, which doesn't even explore self-desense as a problem.

Yes, USB 2 or 3 can cause issues for 2.4 GHz if you don't shield your components properly. Are you saying that WiFi dongles, whose only job is to connect to these networks, aren't shielded properly?


I had to wrap a USB 3.0 cable in tin foil once when I realized it was causing the issues I was having with wifi


We are using Intel Realsense cameras in our product. Sometimes the USB connection fails completely, sometimes it is USB2 only and the camera cannot work. Sometimes a reboot helps, but sometimes power cycling is required. This is without re-plugging the cable; re-plugging adds its own unreliability. From reading the product support forums, we are not alone.

As a SW engineer my interpretation has always been that USB3 speeds are just too high to work really reliably with consumer grade hardware. This article gives technical details that tell me I have been correct.


> As a SW engineer my interpretation has always been that USB3 speeds are just too high to work really reliably with consumer grade hardware. This article gives technical details that tell me I have been correct.

The last section of the article is the most relevant: the hardware, tools, and documentation for USB3 are of mediocre quality. The whole history of moving bits over cables is one of adding and adapting tricks to get more and more data through. We've always been at the point where poorly implemented solutions cause problems, and we're several orders of magnitude beyond what a novice could implement as a hardware and software solution from scratch (bit-banging a serial interface on a microcontroller GPIO is something a reasonable person can do in a week, to achieve less than a megabit connection). Super-fast data rates are everywhere and most of them are very reliable; USB isn't up to the same quality. You can make any speed reliable and fool proof, you just have to do it.
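To make the bit-banging aside concrete, here's a rough sketch of a software-only UART transmitter, the kind of thing that fits in that week. The write_pin and delay callables are placeholders for whatever GPIO and busy-wait primitives your microcontroller environment gives you (they're assumptions, not a real API):

    # Sketch of a bit-banged 8N1 UART transmitter. write_pin(level) and delay(seconds)
    # are hypothetical stand-ins for a platform's GPIO write and busy-wait calls.

    def bitbang_uart_tx_byte(byte, write_pin, delay, baud=9600):
        bit_time = 1.0 / baud
        write_pin(0)                       # start bit: pull the line low
        delay(bit_time)
        for i in range(8):                 # 8 data bits, least significant bit first
            write_pin((byte >> i) & 1)
            delay(bit_time)
        write_pin(1)                       # stop bit: line returns to idle-high
        delay(bit_time)

    def bitbang_uart_tx(data, write_pin, delay, baud=9600):
        write_pin(1)                       # idle state is high
        for b in data:
            bitbang_uart_tx_byte(b, write_pin, delay, baud)

    # bitbang_uart_tx(b"hello\n", my_gpio_write, busy_wait)   # wiring these up is the hardware part

At 9600 baud that's under 10 kbit/s of payload; the gap between that and 5 Gb/s USB3 is the "several orders of magnitude" above.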


+1. It's true that designing a USB 3.0 PHY/controller is a serious challenge, and bad signal integrity in PCB design can cause many problems, but almost nobody implements their own USB transceiver or controller from scratch. They're all standard off-the-shelf chips. If the PCB design is careful and a good cable is used, Layer 1 should be relatively stable.

A large number of problems appear to be bad hardware compliance, documentation, firmware, or drivers. For example, I have a PC with an early Renesas USB 3.0 controller. Once in a while, the controller would become completely unresponsive. The reason was a firmware bug in the controller, and you can actually get a reverse-engineered firmware update tool on GitHub to fix it.


Bad hardware compliance is only part of the problem. With higher speeds, all the user-facing stuff - cables, connectors, clean power - starts to matter too. You know how 100 Mbit Ethernet works well even with crappy connectors and crappy wires (you literally don't even need a twisted pair for it), but pushing gigabit ethernet over that doesn't work reliably anymore or even at all. It's very much a universal problem with higher speeds. And the notion that "you can make any speed reliable and fool proof" is just absurd denying physics.


Well, you can use fiber optics. For their VR headsets, Oculus sells a 5 meter USB cable that does exactly that internally, with optical<->electrical converters inside the connectors.


> But pushing gigabit ethernet over that doesn't work reliably anymore or even at all. It's very much a universal problem with higher speeds. And the notion that "you can make any speed reliable and fool proof" is just absurd denying physics.

I'm not denying physics. Funny that you use Gigabit Ethernet as an example... I'm still debugging a hobby Gigabit Ethernet project. For some complicated reasons, it has to use a standard 1000BASE-T PHY, but instead of communicating over a CAT-6 cable, it has to communicate over the circuit board backplane. Previous prototypes couldn't even reliably auto-negotiate. Apparently, in Ethernet, not only can a long cable and heavy attenuation be a problem; an extremely short link with low attenuation can also confuse the "Feed-Forward Equalization" DSP algorithm in the PHY and create problems. I totally understand the inherent challenges of high-speed design and have some hands-on experience.

But I have to say, yes, you can make any speed reliable, given a controlled environment and reasonable technical constraints. It's called engineering. Howard Johnson (the author of the famed engineering handbook High-Speed Digital Design: A Handbook of Black Magic), who was heavily involved in Gigabit Ethernet's standardization, discussed its enormous challenges in a 1997 article [0]...

> Key ingredients in any Gigabit Ethernet design will include:

> Careful control of clock skew on the 125-MHz 10-bit bus.

> Attention to crosstalk between massive, 125 MHz TTL-level parallel buses and critical high-speed serial circuits.

> EMI considerations on the serial cables (if you are not an expert in this area, I suggest choose fiber)

> Huge pin counts, especially on complex switching or routing ASICS, some of which will be using 512-bit bus interconnections

> More than usual emphasis on time-to-market; everybody will want their products ASAP

> If you thought Fast Ethernet 100BASE-T layouts were tricky, think again. Gigabit Ethernet is going to be faster, with more parallel signals, and tighter layout constraints.

However, he also said,

> It will work, and it will work reliably, but you will have to follow the rules.

You can make any speed reliable within reasonable engineering constraints and in a controlled setup. Using off-the-shelf PHYs and controllers, doing hardware compliance testing, and installing good cables are all supposed to create this controlled setup. Failure to do so is what we call "hardware compliance problems". I agree that it's definitely not foolproof. Saying it is would be denying physics, but I didn't say that. EMI/RFI is a serious problem for USB 3.0. I'm only saying that a lot of USB 3.0 problems in practice are bad hardware compliance; true electrical problems may be fewer.

[0] http://www.sigcon.com/Pubs/straight/gigabit.htm


I've had nearly the same experience! I was working on a project for a robotics class and my team could not figure out why our object tracking went from 30fps to ~5fps seemingly randomly. It turns out the USB3 cable wasn't fully plugged in and couldn't make full contact so it was running on USB2 and didn't have the bandwidth to send the video at 30fps.

It took a while to debug because I was absolutely convinced the slowdown was a software or GPU issue. After all, wouldn't you expect most digital ports to be binary: either be plugged in or not? It turns out with USB3 there's a third state.


> After all, wouldn't you expect most digital ports to be binary: either be plugged in or not?

No. USB-A plugs for 2.x have 4 contacts that extend close to the outer edge. USB-A plugs for 3.x have additional 5 contacts close to the inner edge. If you don't insert the plug fully, you physically just get a USB 2.x connection. Nothing magic there.

https://www.techspot.com/guides/235-everything-about-usb-30/

There are more subtle problems with 3.0.


> it was running on USB2 and didn't have the bandwidth to send the video at 30fps.

Mmm, I am surprised about that because USB 2.0 is fully capable of 1080p60 without issues. I have Logitech C910 and it does 1080p60 flawlessly. I wonder if it is not actually 2.0 but USB 1.1, which would make sense since the transfer rate is abysmally low (1.1 does up to 12 Mbps, 2.0 up to 480 Mbps).

But again, computers do have their quirks and can do weird & strange things that seem impossible but evidently are possible.


> Mmm, I am surprised about that because USB 2.0 is fully capable of 1080p60 without issues. I have Logitech C910 and it does 1080p60 flawlessly. I wonder if it is not actually 2.0 but USB 1.1, which would make sense since the transfer rate is abysmally low (1.1 does up to 12 Mbps, 2.0 up to 480 Mbps).

I am not who you were responding to, but I can confirm that at 480 Mbps our Intel Realsense does not work; 5000 Mbps is mandatory. We use an infrared stereo mode at 30 fps. I am not at my desk now and don't remember the exact details by heart.


>USB 2.0 is fully capable of 1080p60 without issues. I have Logitech C910 and it does 1080p60 flawlessly

mjpeg compressed

https://telestreamforum.forumbee.com/t/6363z0/psa-windows-10...


> After all, wouldn't you expect most digital ports to be binary: either be plugged in or not?

It's always a good idea to have a known-good extra cable or two and a laptop that you know has good ports. All the time and frustration you will save is definitely worth the hassle.


Full HD raw image data nets you about 5.something FPS on ~480 Mbit/s USB2, which already hits physical limits. Higher standards basically just increase the number of data lanes, but we still call that serial.
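A quick back-of-the-envelope check of that figure (the ~280 Mbit/s number for real-world USB2 bulk throughput is an assumption; the line rate is 480 Mbit/s):

    # Rough arithmetic behind the "5.something FPS" claim for uncompressed Full HD over USB2.
    frame_bits = 1920 * 1080 * 24      # one 24-bit RGB frame: ~49.8 Mbit
    usb2_payload = 280e6               # assumed practical USB 2.0 throughput, well under the 480 Mbit/s line rate
    print(usb2_payload / frame_bits)   # ~5.6 frames per second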


Apple/Intel Thunderbolt does not have this problem in my experience. USB has always been the cheapest alternative (remember Firewire?). Add to that the sheer incompetence of the USB committee (see the USB PD spec, or USB Audio) and it's a miracle anything works at all.


> Add to that the sheer incompetence of the USB committee

In contrast to the sheer idiocy of demanding a "pocket change" fee of $1 per port, which made FireWire an instant turnoff for most device makers not targeting Western markets, and singlehandedly killed FireWire as a standard.

FireWire was possibly better engineered, but USB was better marketed.


> FireWire was possibly better engineered, but USB was better marketed.

Yes, perhaps, but it had a big drawback. You could not build a Firewire device with a single die because of the high voltages. The added cost to the device (memory stick, etc.) meant it would never be price competitive with USB. I was a member of a USB working group and was amazed when a member from a large Japanese company explained that to me. Up to that point, I had not been exposed to the low-end (as in price) of the USB market and didn't realize the significance of this issue. These manufacturers would bring the 5V straight into their IC and drop the voltage on-chip. That is actually fairly difficult in the age of such small chip geometries, but basically impossible for Firewire, which is 8V to 30V. The USB market is driven by the low-end, so Firewire never had a chance because of an engineering specification.


This isn't brought up enough when people talk about FireWire and "good engineering". Good engineering means practical engineering. It means things like choosing a sane, common voltage that is well suited to the already-existing real world. Picking 8V-30V is the kind of decision that sounds beautiful to a utopian engineer working high in an office tower, and I'm sure it would be lovely if we could have instantly warped into a world of full adoption, but here in the real world your standard is going to have to be useful today, and it's going to have to be useful with more than just brand-new top-of-the-line freshly designed kit. Any engineer who would willingly forgo access to the entire world of already-existing 5V ICs is not a good engineer in my books.

To be fair to them though, it's very easy to get caught up in the beauty of those theoretical designs and forget that the best-selling car is a Toyota Corolla, not a Lamborghini.


To nit-pick your analogy: is a Lamborghini really “engineered better” than a Corolla, in systems engineering terms? (I.e. the sort of terms a NASA program is budgeted in, when you have to consider the costs of potentially propping up your suppliers in the face of them collapsing, training your future employees to pass on institutional knowledge, etc.)

Or, to put that another way: is the total profit per Lamborghini sold really higher than the total profit per Corolla sold, after the CapEx and OpEx of any maintenance + spare-parts supply logistics + recall costs + training + supply contracts + etc. are taken into account? (I would assume, if it were, Lamborghini would have a higher market cap.)


I think that actually strengthens my analogy (and was really my point).

A Lamborghini is full of flashy, pie-in-the-sky engineering that often ignores practicality in favour of flash and performance. On paper, it's a beautiful machine, but much like in the case of FireWire, it doesn't meet eye to eye with the real world around it.

The Toyota is less ambitious, more constrained by reality, and while it certainly doesn't match the on-paper specs of the Lamborghini, it is in every way more practical and viable as a system.

I would consider the Toyota to be the better engineered of the two when looking at them as cars and not as toys, even though it lacks the technical wow factor.


I always understood Firewire to be a standard designed for complementary use-cases to USB. Firewire isn't an attempt at "USB but better" (though devices could theoretically be made to use it that way); it's an attempt at "powered SCSI, but smaller and more ruggedized." It's a professional-equipment port, designed—on the one hand—for devices like laptops that can't fit the older, full-scale professional-equipment port (like SAS or mini-XLR); and on the other hand, for manufacturers of high-BOM high-margin professional equipment, as an extra port they can slap onto their device to allow these portable devices to connect to them.

In that sense, Firewire is less analogous to a Lambo ("a better car"), and more akin to a truck. But a very tiny truck, designed for both off-roading and portability. Which is to say: an ATV.

An ATV isn't better at being a car than a car is, but it is better at being a truck than a car is (though worse at being a truck than a truck is.) But sometimes, especially in constrained environments (e.g. search-and-rescue in the Alaskan wilderness), a truck won't fit, and a car won't work. So you need an ATV. It's well-engineered for that particular set of constraints.

Which is also the deal with Thunderbolt: it's PCI-e, but smaller and ruggedized, for places PCI-e itself wouldn't work (like a laptop), but where you nevertheless still need PCI-e -alike performance and bus semantics.


FireWire actually predates USB, and was certainly seen as a competitor during the design process for USB. It was originally drawn up as a general-purpose serial bus, but with a particular eye at replacing external SCSI interfaces. Certainly it's designed to interface with all sorts of professional equipment, but even in this space that particular decision is a poor one because of the universality of 5V components. Early FireWire-equipped laptops varied wildly in actual voltage delivery capabilities, and even Apple's generally capped out at around 9V, and the standard is extremely awkward and expensive to implement on the device side for many classes of devices. The main issue is that devices have to be capable of handling unregulated voltage between 8V-30V. For high-end professional equipment this is obnoxious and expensive, but manageable, but it's a complete nonstarter for nearly all other equipment, and that meant that even as the price of implementing FireWire signaling went down, it couldn't expand into a bunch of spaces it was perfectly capable of working in (including lower-end versions of said professional equipment).

Even including an optional 5V power pin would have improved this situation, and could have put FireWire in a similar situation to Thunderbolt 3 now, capable of functioning with both high-end professional equipment and ordinary consumer devices.


Joke's on FireWire in the end though, USB-C ended up with the abomination that is USB-PD plus a literal shitload of incompatible other power negotiation schemes (Qualcomm and Apple being the most notorious).


So you've given the technical reasoning behind my long-standing intuitive claim: USB3 is too fast for consumer-grade hardware to be really reliable.

With engineering solutions like Firewire it could have been as fast and more reliable. But more expensive, i.e. not mass market or consumer friendly.


Yes, they were delusional. Economics trumps engineering, always.


definitely. Accounting will almost always win the fight unless what they want completely renders the product useless. Apple is one of the few companies I've worked for where that wasn't always the case.


Yeah, we remember FireWire. And all the burned out FW ports due to poor power regulation on cameras and devices. I don't think I've ever seen a connection protocol more likely to damage a device on either end of the connection. USB is a massive improvement here.


I remember powering two 3.5" 7200rpm drives from a single FireWire port for years. Even a third disk daisy-chained from a single port didn't require an external power supply. I miss the user experience made possible by the high bus voltage. USB PD is worse.


FireWire is a much better design than USB in many ways. I still use a FireWire audio interface purchased not that long ago (though now it has to go through a Thunderbolt adapter).

The problem with USB 1 and 2 is that they were designed by an insane committee. This resulted in stuff like those versions not using true differential signaling (like USB3 does), even though that was common technology at the time. Instead they have end-of-packet delimited by single-ended conditions. That, coupled with a single ground wire for power and data, makes for a world of ground loop issues.

And then they made the frame rate 1kHz, which is right in the peak of human hearing sensitivity. And so, USB's terrible design is single-handedly responsible for a huge amount of, even most, noise and interference issues in audio systems involving it. Especially in low cost consumer hardware, but it has made its way into many professional productions. Any time you hear a constant 1kHz beep of interference, mixed with some fuzzy crackling, nine times out of ten that's a ground loop with USB and audio involved. It's everywhere once you listen closely. Headphone output on your laptop or phone is noisy when you access a USB drive, or has a 1k beep? USB. USB audio widget has background fuzz and a beep? USB. Slight 1k beep in the background of a news cast? Someone had the bright idea of using USB somewhere along the line.

It didn't have to be this way. Ethernet, SATA, plenty of other protocols use proper differential signaling and don't have this issue. How to do this properly was well established long before USB existed.

And to add insult to injury, while USB 2 specifies a transaction translator, so you can run USB 1 devices on a USB 2 bus, USB 3 does not. And so, while USB 3 devices don't have this problem any more, you can't plug in a USB 2 device into a USB 3 bus or a USB 3 hub. And nobody uses USB 3 for low channel count audio interfaces, so this problem will remain forever. Oh sure, you can plug it into a USB 3 port, which is actually a USB 3 port and a USB 2 port in one. And you can plug it into a hub labeled "USB 3" which actually has a whole USB 2 hub along side it in parallel. The communication from USB 2 devices remains USB 2 all the way to the host. And so, not only do USB 2 devices only get 480Mbps of total bandwidth on a USB "3" hub (minus a ton of overhead, because the protocol is terrible), it doesn't fix this issue! And did I mention USB 2 is impossible to galvanically isolate in any kind of cost-effective way?

I could go on and on about how the polled architecture causes performance bottlenecks, about how it's impossible to tunnel without unfixable corner cases, about how its latency requirements preclude lots of innovative ideas, about how the descriptor and device inquiry protocols are practically designed to guarantee that implementations are vulnerable and have exploitable bugs (yes, your USB stack is vulnerable. They all are. This is how the PS3 got hacked. And the Switch. And tons of other devices. USB developers like me know misbehaving USB devices crash hosts all the time).

I don't like USB.


Someone actually hacked together a TT for USB3. It's unobtainium for mortals but it exists https://www.via-labs.com/product_show.php?id=96


I'm aware; as you say though, sadly, it's unobtainium (and non compliant, so it's guaranteed to have compatibility problems with some devices and drivers).

Real transaction translators in USB 2 work with host cooperation and specific support for USB 1 devices in the protocol, while this is trying to do transparent protocol conversion. Sadly, due to USB's design, this runs into the same corner case problems as any other kind of tunneling.

The main problem is the polled architecture. Host asks device for data; translator has to reply immediately, so it has to say there is no data yet and ask the device for data. Translator buffers data and delivers it to the host the next time it asks. Except the host is allowed to never ask again. Now the translator has data it has to drop on the floor. Not good. And this all requires heuristics to decide what to do. It's a mess.
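A toy model of that polling problem, just to make the failure mode concrete (this is an illustration of the argument, not how a real USB3-to-USB2 translator or host controller is implemented):

    # The translator must answer the host's poll immediately, so it NAKs, fetches from
    # the device in the background, buffers the result, and hopes the host polls again.

    class ToyTranslator:
        def __init__(self, read_from_device):
            self.read_from_device = read_from_device   # callable returning bytes or None
            self.buffered = None
            self.pending_fetch = False

        def host_poll(self):
            """Called when the host asks for data; must answer immediately."""
            if self.buffered is not None:
                data, self.buffered = self.buffered, None
                return data                  # deliver what we buffered earlier
            self.pending_fetch = True        # remember to go ask the device afterwards
            return None                      # NAK: "no data yet"

        def background_work(self):
            """Runs between host polls: actually fetch from the downstream device."""
            if self.pending_fetch and self.buffered is None:
                self.buffered = self.read_from_device()
                self.pending_fetch = False

If the host never polls that endpoint again after the NAK, whatever landed in self.buffered is stuck and eventually has to be dropped - exactly the corner case above.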


As a public service:

TT = Transaction Translator


These factors don't come close to negating the incredible convenience that USB has afforded me over the last couple of decades. So, the committee must have done something right.


Just because a technology is better than nothing or became popular doesn't mean it's good. There is plenty of bad engineering in the world that might be better than no engineering, but that doesn't mean we wouldn't have all been better off with a good design from the start.


But it's not just better than nothing. It is good, even great. It does exactly what I need without issue and has worked reliably for me for decades. It may not be perfectly elegant and beautiful in every technical aspect of its design but that doesn't really matter to me.

You might say Firewire could have provided the same for me, but Firewire wasn't widely deployed or adopted. Therefore it could not have replaced USB for my use cases. That may not be a technical issue, but I don't really care as a user if the failure of Firewire is technical or not. Technical superiority is only a small piece of the puzzle. Plus, I think you are underestimating the technical advantages which led to USB's success (e.g. low cost).


> It is good, even great. It does exactly what I need without issue and has worked reliably

USB1 and USB2 still work fine despite being less-than-ideal engineering. Tolerances are big enough at lower speeds. USB3 can be unreliable (e.g. for cameras) because the limits set by the laws of physics get too close for cheap implementations. That's no longer good. Especially if we are forced to use it in professional set-ups, where a couple of cents or even euros/dollars would not hurt. The problem is that you cannot put that tiny bit of money on the table and get a reliable product. As the discussion showed, even certified and known brands' USB3 products can be unreliable.


Some of these merits depend on DMA support in IEEE 1394, but it's actually a security risk without separation by an IOMMU, which has been implemented on CPUs since around 2012.


And the iPhone too... (checkm8 is... a USB stack bug)


The lack of isolation options is so brutal


> FireWire --> burned out ports

Sounds like it delivered what it said on the tin!


And USB being the cheapest option is the main reason why we finally have a connector that almost all hardware can agree on. Remember that even the fact that the original USB plug is not symmetrical is due to cost savings - if they added four extra wires for symmetry, we might still not have USB everywhere.


> we finally have a connector that almost all hardware can agree on.

Do we?


Yup. It's going to be a bit before it percolates through to lower price point devices, but I'm a very, very happy usb-c/thunderbolt user.

My phone, headphones, drawing tablet, personal laptop, work laptop, and monitors all plug in using a single port.

Not just for connectivity, but also for power.

It's incredibly liberating. Everything charges from the same charger, everything plugs together.

I vividly remember the days of trying to find the right wall wart for the right device. I can't tell you how happy I am that my work laptop and personal laptop can both share chargers. That alone would make me ecstatic. That my phone/headphones/tablet also all work with just that charger is candy on top. That my monitor can provide that power while also using the same cable for display in? Incredible!

There are absolutely still issues, and it requires more research than it should, but once it works, it's SO DAMN NICE that I simply can't imagine going back.


Compared to what we had at the dawn of USB - yes, absolutely.


Any external Thunderbolt device also gains direct memory access to your system, resulting in a list of related security issues. Of course speed over security is classic Intel.


A few months ago, another HN user raised an objection to FireWire due to DMA attacks in another thread. I wrote a 1000-word response on why it's a valid but unfair criticism [0]. The same response is also applicable to Thunderbolt - DMA attacks are a symptom, not the problem (which is the lack of IOMMU).

[0] https://news.ycombinator.com/item?id=23991732


And even then, you might not want to activate the IOMMU, as the performance impact can be prohibitive in certain scenarios.

https://dl.acm.org/doi/pdf/10.1145/3230543.3230560 "For 64B DMA Reads it drops by almost 70% and even for 256B DMAs the drop is still a significant 30%".


At least a choice exists here. For an external port on a laptop, degraded performance is better than a disabled/unusable port, or a hacked computer. And for an internal PCI-E device, it can be turned off if risk is acceptable.


Right. I think for most people, if it cannot be remotely exploited, it's a non-issue and a deactivated IOMMU is the right choice. Once you really have to worry about an attack vector where your adversary has physical access, it's a whole different game anyway and just flipping on the IOMMU won't save you.


> Once you really have to worry about an attack vector where your adversary has physical access

There is crack-the-PC-open physical access, and there is just-connect-a-device-to-a-readily-available-connection access. With the latter, the adversary doesn't even need physical access himself; he just needs an unwary victim.

Smartphones have been plagued by something as simple as a public charging station getting compromised, back when Android was configured to just grant access to the file system and debug interface without requiring user confirmation. Before that, Windows stopped auto-playing software because it was being used to infect systems from compromised devices.


Those DMA attacks can be blocked by the OS properly configuring an IOMMU. Intel stopped using IOMMU support on CPUs for product segmentation by the time Thunderbolt 3 arrived, and even before that it was difficult to find a Thunderbolt-equipped system that didn't have IOMMU capability.


> Those DMA attacks can be blocked by the OS properly configuring an IOMMU

Wikipedia seems to state no OS has managed to do so (as of 2019). Thunderbolt had been around since 2011.

> Intel stopped using IOMMU support on CPUs for product segmentation by the time Thunderbolt 3 arrived

So it couldn't possibly be made secure before 2015 and still won't be secure until $UNIVERSE_HEATDEATHDATE.


> So it couldn't possibly be made secure before 2015

You're wildly misunderstanding things. Up to 2015, it was possible to build a new PC that lacked the necessary hardware features, primarily by choosing one of Intel's enthusiast-oriented overclockable CPU models (which used to have the IOMMU disabled for product segmentation reasons). But it was still very much possible to get all the necessary hardware features by selecting the next slower speed grade of the same chip, or by buying almost any pre-built system equipped with Thunderbolt ports. And since then, it's been almost impossible to build a PC that lacks the necessary hardware security features, because pretty much the only Intel CPUs still in production that lack IOMMU capabilities are Atom parts which aren't socketed and probably aren't used in any Thunderbolt-capable machines.

> Wikipedia seems to state no OS has managed to do so (as of 2019). Thunderbolt had been around since 2011.

You should read more carefully, and preferably follow the link to the source paper. That paper points out how Mac OS X started using the IOMMU to protect against DMA attacks in 2012, Linux and FreeBSD likewise have such capabilities, Windows pretty much doesn't (no longer true).

The point of that paper cited on Wikipedia was to show that when operating systems poke holes in the IOMMU protection to allow certain devices to perform DMA, it's possible to open up a more exploitable hole than you might expect. In particular, data structures may need to be rearranged to align with the 4kB page protection granularity, or else information that should remain inaccessible to peripherals may be exposed alongside legitimate DMA payload. And it's also possible for a malicious device to spoof the device IDs of a trusted device type and be granted access by default, but that's a class of vulnerability that exists for plain old USB, too.

So as I originally stated, the IOMMU is the hardware feature needed to block DMA attacks. But it's only effective when used properly by the OS. That's not trivial—but also not impossible.
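To illustrate the page-granularity point with a toy example (4096-byte pages assumed; real IOMMU mapping interfaces differ): if the OS can only open DMA windows in whole pages, everything that shares a page with the DMA buffer becomes visible to the device.

    PAGE = 4096  # assumed IOMMU page size

    def dma_window(buf_offset, buf_len):
        """The page-aligned range a device is actually granted for this buffer."""
        start = (buf_offset // PAGE) * PAGE
        end = ((buf_offset + buf_len + PAGE - 1) // PAGE) * PAGE
        return start, end

    # A 256-byte buffer that straddles a page boundary exposes two full pages:
    print([hex(x) for x in dma_window(0x1F80, 256)])   # ['0x1000', '0x3000']

Anything the kernel happens to keep in the rest of those pages rides along for free, which is why the data-structure rearrangement mentioned above matters.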


> Intel's enthusiast-oriented overclockable CPU models (which used to have the IOMMU disabled for product segmentation reasons).

That doesn't seem obvious to me. Is there evidence it wasn't just binning?


When parts of a chip are turned off for binning reasons, it's because those features take up a large area and those large units containing a defect can be disabled wholesale and still leave you with a usable chip. This usually manifests as disabling either whole cores, or substantial portions of last level cache memory.

Features like HyperThreading or IOMMU are integral to the design of functional units that aren't optional. Those features aren't implemented as separate physical regions of silicon that take up non-trivial space, so it's very unlikely that a defect could break HyperThreading without breaking the entire CPU core, or that a defect could break IOMMU functionality without breaking the entire PCIe root complex. Disabling features like this does not meaningfully change the fraction of dies that pass quality control.

The IOMMU in particular is probably not going to be affected by overclocking, because it's probably in the same clock domain as the rest of the PCIe IO stuff, and that clock domain doesn't usually get overclocked when the CPU or DRAM is overclocked. So there's no reason to expect IOMMU functionality to affect what CPU clock speed a given die can reach.


DMA stands for Direct Memory Access, and it is called that because the peripheral device directly reads from or writes to the memory bus while the CPU is not accessing it. So CPU internal states or protection are in theory irrelevant.

In the most rudimentary implementation, a device doing DMA is physically wired to the main memory in parallel with the CPU. The device will set something like `data, addr, CS, WR = 0xCAFE, 0x1000, TRUE, TRUE` and send a clock pulse. Then when the CPU comes back to check address `0x1000`, the data `0xCAFE` is magically "there", so programs on the CPU don't have to waste cycles polling the device to get the value. Naturally, some arbitration is necessary because trying to change bus states from both sides will mess things up.

The above was probably the case until the 1980s or 90s; DMA in modern days instead goes through bus masters and memory controllers. Since it's not going directly to physical RAM anymore, such controllers can be configured in ways that prevent unwanted accesses.

But it’s more of “can be used to protect against”, not “won’t be possible if done correctly”.


> So CPU internal states or protection are in theory irrelevant.

Go look up what IOMMU means. We're not talking about a feature inside an individual CPU core, but in the PCIe Root Complex that implements DMA capabilities in the first place; it's the thing that sits between peripheral devices and the DRAM controller. That functionality is on the CPU die these days.


I use Thunderbolt gizmos for a few things and would rather have the speed, which I immensely enjoy. I can't live my life worrying about every step I make.


This has also been a problem with USB2 cameras. The signal rates reach physical limits. A 5m cable is probably too long for high-speed applications; most camera vendors say that they support only 3m. But if you have a camera in any rugged device, even a bit of EM (electromagnetism) can ruin your connection.


What does EM mean in this context?


Based on the context I would guess it stands for electromagnetic interference


Electromagnetism. On the other hand, obviously any electronic circuit, including USB, is electromagnetic. A more accurate choice of word would be EMI (i.e. electromagnetic interference), or EMC problems (i.e. electromagnetic compatibility problems).


Probably short for electromagnetism, or for EMF, which is short for electromagnetic field.


If it starts working again after a reboot, it's a problem that can be solved in software. It's just hard, so consumers have instead been trained to accept unreliability.


Consumers drive this by buying the cheapest device that just about works.


Manufacturers aid this by not making their devices apples-to-apples comparable and by using brand-specific marketing terms. The worry of buying something from a better brand that is the same thing but marked up is real.


> We are using Intel Realsense cameras in our product.

That's an interesting domain. What are you working on? Any reason for choosing Realsense over Azure Kinect or other sensors?


As someone who was involved in a RealSense project I might be able to answer...

Firstly, RealSense cameras are really affordable. I think the cameras we were using were around $80 a piece. Cost was an important consideration for us because we were building a multi camera device and to do something similar with Azure Kinect would have cost us several thousand dollars.

Another thing we liked was how great the Intel RealSense SDK is. They have official wrappers for multiple languages along with sample code to get you started.

The RealSense ecosystem is quite healthy too – at least for a niche technology. RealSense cameras have a proven track record in various commercial devices and the development forums are fairly active. We could find answers to most of the common questions and problems we had.

Another nice thing about RealSense is that they have a great range of cameras, so they have something perfectly suited for every application. For example, some of their cameras use stereo vision while others use LIDAR or structured light. They also use the same SDK for every camera, which makes swapping out cameras fairly easy. We were using structured light cameras, but we were able to swap them out for stereo and LIDAR cameras with no problems at all.

Azure Kinect looks interesting. I've wanted to play around with one for a while, although given its price and feature set it wouldn't have been the right camera for us. It's probably a good all-rounder for general robotics / vision projects though.


I can always rely on our Intel Realsense cameras -- to magically transform themselves into USB 2.1 devices after a couple hours' use.


IMO: It's not the speed that's the problem; lots of consumer hardware has very fast networks/buses, like Ethernet. It's the huge number of states and the communication required to negotiate them.


I wish the industry would bite the bullet and come up with 'USB-hypothetical' based around a strictly enforced standard which used active optical cables with additional conductors for power.

An awful lot of USB issues would go away by getting rid of the need to push an electrical protocol too close to physical limits, enforcing sane minimum standards so end users have a clear idea of what a given port can provide (none of the current mess where 'USB-C' represents a connector that could support any number of protocols), and making layer 1 a dumb pipe (no special wires needed to support optional feature X) so functionality could be entirely software-defined.

USB is just too complicated for its own good, and has too many optional features, to be friendly to end users.


USB certification already exists and consistently fails, regardless of how hard and arduous it is to pass. The only option that seems likely to make it actually work is to make it even more strict, which itself will be a turnoff to hardware makers and, more importantly, chipmakers.

Thunderbolt chips already cost an arm and a leg.


USB is a good example of why testing should be hard and should be wholly dependent on the problem and not at all on the feelings of the manufacturer. You don’t see 100G Ethernet links dropping out because of a crappy cable.


Optical cables are not robust enough for use as everyday cables.


Today's singlemode is quite abuse-proof. I say this while having one cable in the office which has been ridden over by carts, stepped on, and stumbled over for years. I can't imagine UTP5/6, or especially 7, surviving this.

I'd say a purpose-made, abuse-proof optical cable is better than high-frequency copper cabling.


How abuse-proof are the optical connectors on those? Can you carry devices and cables in dusty bags all day and not worry about getting dirt in the optical path?


Yes, I think connectors are now the weakest part of an optical cable.

Singlemode's biggest weaknesses are alignment, optics, and a big chance for a single dust particle to cause a double-digit decibel drop.

Either terminations will have to be done with some abuse-proofing ferrule with a built-in lens, or every optical cable will have to be active, with a transmitter glued to it.

The latter, in comparison to the former, solves the issue in its entirety, and is not that much of a crazy proposition today.

Even the fanciest laser chips, like EAM- and MZI-modulated ones, can be made for cents if competition makes vendors drop prices.


>The latter, in comparison to the former, solves the issue in its entirety, and is not that much of a crazy proposition today.

Right - you can buy these right now: https://www.corning.com/optical-cables-by-corning/worldwide/...


And how much more do they cost in comparison with hooking a few copper wires to a simple copper plug manufactured in china?


$30-$40 wholesale in China, but...

1. Single-mode fibre already costs less than high-frequency cabling.

2. A standard written from scratch can move all smart electronics, and even some passives from the transceiver into the device.

3. The moment a transceiver turns into a standardised single chip device, economies of scale turn enormous.

4. Optics can be simplified for direct attach devices.

5. The cheapest transceiver sets + cabling for single-mode 10G Ethernet already cost less than $10 wholesale.


Obligatory xkcd: https://xkcd.com/927/


> To reduce the amount of harmful interference generated, USB3 links use a technique called scrambling, in which data is XOR'd with a fixed pattern before transmission.

So I can take that pattern, repeat it a few million times, and then if someone transmits it (as simple as saving it to disk), it turns their USB devices into transmitters.
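For anyone curious, the "fixed pattern" is typically the output of a small LFSR that gets reset at well-defined points, so the whole sequence is predictable. A minimal sketch, assuming the commonly cited 16-bit polynomial x^16 + x^5 + x^4 + x^3 + 1 seeded with 0xFFFF (the real spec's bit ordering and reset rules differ, so treat this purely as an illustration):

    # Toy LFSR-based scrambler: XOR the payload with a predictable pseudo-random stream.
    # Polynomial/seed are the commonly cited PCIe-gen1/2-style values; not spec-accurate.

    def scrambler_stream(nbytes, state=0xFFFF):
        out = bytearray()
        for _ in range(nbytes):
            byte = 0
            for i in range(8):
                bit = (state >> 15) & 1                    # output the register's high bit
                byte |= bit << i
                fb = bit ^ ((state >> 4) & 1) ^ ((state >> 3) & 1) ^ ((state >> 2) & 1)
                state = ((state << 1) & 0xFFFF) | fb       # taps for x^16 + x^5 + x^4 + x^3 + 1
            out.append(byte)
        return bytes(out)

    def scramble(data):
        return bytes(d ^ k for d, k in zip(data, scrambler_stream(len(data))))

    payload = b"some bulk data"
    assert scramble(scramble(payload)) == payload     # XOR twice gives the data back
    evil = scrambler_stream(64)                       # a payload chosen to equal the keystream...
    assert scramble(evil) == bytes(64)                # ...puts a constant all-zero pattern on the wire

The second assert is exactly the trick being described: a payload that matches the keystream defeats the scrambling, which is why the replies below point to packet-size limits and the difficulty of staying aligned with the scrambler as the practical defence.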


Correct, and these attacks exist and will cause a link to go down. It's a problem that was considered in the design of both 10G Ethernet and PCIe, which both use similar mechanisms.

Generally a key defence is the maximum packet size (you can't keep your evil stream aligned with the scrambler) along with the huge pattern length (for PCIe gen1/2 and USB3; 10GE and PCIe gen3 work differently), which means that the amount of data you'd have to send is just too large.


Probably too much work to do in that context but QR codes solve that by having multiple possible XOR patterns and choosing the one that generates the best output.


Actually not too much work. Some protocols (I can’t recall which, none of the above) work by sending either the data or its inverse along with an extra bit saying whether it was inverted. Very easy in hardware but has nasty error amplification (an error in that bit causes many other bits to flip), which may or may not be an issue.


Not sure how that really helps but it does mean you have to look at all of the data before sending it.


You split your data into chunks, say 8-bits. The main thing is that you can guarantee there will be a transition every N bits and you can guarantee that the average number of ones and zeros is the same (you can't have a DC component in your signal due to AC coupling).

For example, let's assume that the block size is 4 bits. If somebody wants to send 0x0000, rather than sending this:

0000_0000_0000_0000

You'll instead send this:

10000_01111_10000_01111
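A toy encoder for that scheme: for each block, send either the nibble or its inverse, prefixed by a flag bit, choosing whichever keeps the running 1s/0s balance closest to zero (block size and flag convention here are guesses for illustration, not any real line code):

    def _disparity(sym):
        return sym.count("1") - sym.count("0")

    def dc_balance_encode(nibbles):
        running = 0                                     # running (ones - zeros) count on the wire
        symbols = []
        for n in nibbles:
            raw = "1" + format(n & 0xF, "04b")          # flag 1: nibble sent as-is
            inv = "0" + format((n ^ 0xF) & 0xF, "04b")  # flag 0: nibble inverted
            # pick whichever symbol keeps the running balance closer to zero
            chosen = inv if abs(running + _disparity(inv)) < abs(running + _disparity(raw)) else raw
            running += _disparity(chosen)
            symbols.append(chosen)
        return symbols

    print(dc_balance_encode([0x0, 0x0, 0x0, 0x0]))
    # ['10000', '01111', '10000', '01111']  -- the sequence above; the decoder just strips
    # the flag bit and un-inverts when it is 0.

This keeps the line DC-balanced and bounds the length of runs without transitions, at the cost of 25% overhead for this block size.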


Only if certain other conditions are met.

The PCB routing and cable have to not be tightly coupled differential pairs (this is almost never seen in practice).

The data you’re trying to transmit falls within the line code. Keep in mind only 1/4 of all possible bit sequences are even available in 8b/10b.


Sounds sort of like Gray codes in a different domain.


I keep ending up with USB3 equipment connected through a USB2 port because it won't work in USB3 ports. It's not often, but common enough to be annoying.

An interesting question is whether the USB2/micro-USB connector era was peak USB and whether we're going to see more varied and diverging implementations in the future.


Getting an Oculus CV1 VR headset working via my mobo's USB was fraught with issues (4 things trying to run at USB3 speeds). Eventually I gave up and bought a known-good PCI card.

Though I think this was more to do with controller bandwidth than the physical layer.


I connected a popular Blue Pill board to USB3 on my computer. The effect: the device works, but it is damaged in such a way that it draws an additional 200mA (even without a USB connection) and uses it for heating. Fixing it requires soldering on a new chip.


I gave up trying to get my HTC Vive Pro camera to work over USB3.


A lot of these issues remind me of twisted pair Ethernet. It's interesting that Ethernet is usually coupled with transformers, but USB3 is typically done with capacitors. Can someone more knowledgeable comment on the relative difficulties and problem points of 10GbE vs. USB3?


10GBASE-T has a different story to USB3.

10GBASE-T consumes too much power (2-5 Watts!) and generates too much heat for a single port (a transceiver can go up to 90 Celsius!), so it is not suitable for most areas, including home NICs (it requires space for big coolers) and enterprise switches (ports cannot be densely placed [1]). But I haven't heard of the reliability concerns that USB3 has. Well, it is designed to work up to 100 meters, so it should be reliable. Maybe it's all because of the transformers you mentioned?

Therefore it's very common to use alternatives like optics and simple DAC copper cables for 10GbE. I also think the home networking industry will eventually give up on 10GBASE-T.

[1] https://wiki.mikrotik.com/wiki/S%2BRJ10_general_guidance#Gen...


Ethernet also has 802.3az, so the link speed can be dropped when the speed isn't needed. I'd wager that most home and SME applications are going to be perfectly happy just being able to burst to 10G for a few minutes at a time, and won't need to sustain that speed.


I've been using Infiniband at home (cheap second hand equipment on eBay from universities and such) since before 10G was affordable, and loving it.


I recently picked up a pair of ConnectX-3 cards plus a cable for $30 total, and it's been absolutely awesome so far.

Infiniband for homelabs is a really underrated choice, if you can source cheap cables.


I recently bought a ConnectX-3 too, but it hasn't arrived yet. I have been thinking of using Infiniband, but I'm afraid to use it since I heard that it works completely differently from Ethernet, and I have zero knowledge there.

How's your experience with Infiniband? Do NFS/Samba work well without significant work?


If you use IPoIB, well, of course NFS will work; it's just IP. But most Mellanox cards work in Ethernet mode just fine, so why bother with IB?


Then what are the use cases of IB at home? I mean, the comments talk about IB at home, so I am really curious.


It's primarily for homelab rather than pure home use, for people who are doing professional computing at home. The underlying use case is the same as in a professional environment e.g. SAN fabrics, clusters, etc.


>>10GBASE-T consumes too much power (2-5 Watts!) and generates too much heat for a single port

I mean, I have an Asus 10GbE PCIe card in my workstation that works at the advertised 10 gigabit speed using a 10GbE switch, Cat6 cables, and another workstation with the same card - what am I missing?


Of course, they are available for tech enthusiasts like you. But look at your Asus 10GbE card with its passive cooler [1], and I bet your 10GbE switch gets pretty hot, or you are using a big switch. They have thermal/space limitations - they are not good for average consumer devices.

[1] https://www.asus.com/us/Networking/XG-C100C/


I mean we have it at work because we very frequently copy 100GB+ builds across machines - and yes, the switch that we have is only 8 ports but full 1U size, and it is actively cooled.


The original 10GbE copper standard is 10GBASE-CX4, but it apparently lost out.


10GBASE-T uses four pairs full-duplex for 10 GBit/s total BW per direction. USB 3.0 uses one pair per direction for 5 GBit/s. The analog bandwidth of USB 3.0 extends to 3+ GHz. 10GBASE-T only goes up to ~400 MHz or so.

Running at much lower frequencies makes it reach much farther, makes it more immune to EMI and makes it more reliable (because the margins are much fatter when you are only using 10 % of the possible minimum cable length).


How is 10GBASE-T able to get so much data into 400 MHz, when USB 3 had to go to 3 GHz to achieve the same thing?

Other than the fact 10GBASE-T buyers are ready to spend a lot more per port :)


USB 3 is a pretty plain 8b10b scheme. 10GBASE-T uses a far more complex modulation scheme which uses forward error correction, then applies a PAM-16 modulation and then packs two PAM-16 symbols into a 2D constellation of 128 symbols called DSQ128 (curiously two PAM-16 symbols would encode 256 values, but DSQ128 only contains 128).

Four pairs vs one pair already quarters the bandwidth requirements on top of that.


Is there a reason that USB isn't adopting those techniques? Too much compute required? Four pair cabling is prohibitively more expensive in this niche?


Capacitors are cheaper and smaller, and you don't have runs of USB long enough to really need proper galvanic isolation.


Yes, pretty much it.

Miniature magnetics are a thing now and possibly provide better signal integrity, but they are still purpose-made parts, and they were not around at the time of the first USB3 drafting.


USB 3 and the idiotic USB C standard can die in a fire.

Replace it with a stripped-down version of Ethernet with a simple, sane protocol that can encapsulate Ethernet frames, DisplayPort frames, and PCIe frames. Something like FireWire, where we can memory-map a device or perform bulk transfers or byte-oriented transfers.

Maybe a "converged" controller with some brains can allow it to handle Ethernet frames directly so an Ethernet port or dongle isn't much more than an external MAC. Then we can use existing Ethernet interface hardware, MAC's, jacks and so on. A new connector would be nice too. One that doesn't have the ability to inject 20V into a 5V device. Real-time can be handled using TSN and PTP can also be used.

Finally, combine that with single-pair Ethernet for local desktop device networking at up to 1Gb/s with PoE over a single pair of wires. Why do we need USB 3 and C again?


> Why do we need USB 3 and C again?

Typical copper ethernet is 1Gb/s, while USB 3 is 5 Gb/s, going on 10/20/40 Gb/s with USB 3.2, USB 4 .. thunderbolt.

As others have said, 10G Ethernet is too power-hungry, and even 1G Ethernet is inherently more expensive. I resent active cables and do prefer Ethernet, but it's not a realistic proposition for most use cases.


> Typical copper ethernet is 1Gb/s, while USB 3 is 5 Gb/s, going on 10/20/40 Gb/s with USB 3.2, USB 4 .. thunderbolt.

Ethernet also covers all the bit rates you listed. And as for copper - at the speeds you listed, Thunderbolt and USB3/C are limited to severely short cables, a measly 0.5 meters for 40Gb, though 20Gb can do 1 or 2m, 2m being the longest Thunderbolt cable you can use.

As for power, the devil is in the interface details. I see no reason why multi-gigabit Ethernet can't be pushed over a similarly designed low-power differential copper serial transceiver.

As far as I'm concerned, Ethernet is more mature than USB could ever dream of being.


Ethernet is more mature; I like it. But to compete in most USB use cases, it would have to be re-engineered so much that it wouldn't really be Ethernet any more.


I don't think we need to change Ethernet so much as augment it with a co-processor that handles the "bus" aspects, such as detecting when devices are plugged/unplugged and handling interrupts. Ethernet is just another serial port with higher-level mechanisms that handle framing of the data. We just need a little more brains to handle the bus aspect and a simple protocol to talk to devices with as little overhead as possible.


Not just "bit" harder, but much, much.

I constantly see "certified" USB3 gear failing.


Keep needing to explain this one to clients... sure, I think I can get it to work... it'll be a lot trickier.

HDMI though, the routing examples for that one are stunning.


Doesn't HDMI also have dedicated pins for Ethernet?


Not literally, no, it's carried along with other data.


And the way that is accomplished is, depending on your view, either a brilliant hack or one giant ugly kludge. There is a pair of wires in the cable onto which the two 100BASE-T Ethernet pairs and digital audio are combined using a hybrid transformer and phantom signaling.


Will USB4/Thunderbolt change the story? I heard their signaling is quite different from USB3's.


Yes it will, by making it even more complicated.

Signal integrity minimums for Thunderbolt are stricter than for USB3, and it is a bigger EMI issue, but at least the new standard mandates shielding.


Do these problems also apply to USB C?


USB-C is the shape of the connector; USB 3 is the protocol.

So yes, you can have USB 3 over USB-C or over older shapes of USB connectors.


What's harder is getting mobile phones to work as a damn USB webcam without paying money for it.


I bet USB 4 is like calculus compared to these.



