USB-C Explorer – A development board to get started working with USB Type-C (reclaimerlabs.com)
311 points by walterbell 6 months ago | 72 comments

Additional resources:

USB-C for Engineers, https://www.reclaimerlabs.com/blog/2017/2/1/usb-c-for-engine...

An example of USB power delivery, https://www.reclaimerlabs.com/blog/2017/5/16/example-usb-pow...

Twitter thread which observes “ethernet-like” protocol over control channel of USB power delivery, https://twitter.com/whitequark/status/1035764804886126592

You know what I'd love? A tool that lets me identify what USB-C features a cable supports. As I understand it there are multiple cable variations, but it's not possible to identify visually.

It would be even cooler to have a device that you could plug into a port to identify what USB-C features the host supports, but I don't know if that's possible.

In general I like USB-C, but having such a large mix of capabilities without an easy way to identify what's supported is a big pain point.

Between USB-C and mSATA/mPCIe/M.2, it seems like in just a few short years the state of hardware UX reverted from "match the shape to the hole" to "buy pre-matched / pre-assembled and don't you dare lose or mix/match anything." What happened? Psychoactive substances in the water? Hostile takeover of the standards bodies by marketroids? Anyone here have an inside perspective?

Simpler: Hanlon's Razor.

When engineers see 5 different physical ports, they want to consolidate, because there's no good reason for them to be different, and there are real benefits to be gained by having them all the same.

E.g., Stuart Cheshire (Chairman, ZEROCONF Working Group), 2002: "My hope is that in the future — distant future perhaps — your computer will only need one wired communication technology. It will provide power on the connector like USB and FireWire, so it can power small peripheral devices. It will use IP packets like Ethernet, so it provides your wide-area communications for things like email and Web browsing, but it will also use Zeroconf IP so that connecting local devices is as easy as USB or FireWire is today. People ask me if I'm seriously suggesting that your keyboard and mouse should use the same connector as your Internet connection, and I am."

It's a great vision, but unfortunately, on the path from here to there, features get cut. If your product needs to support 5 different protocols, and you have to cut 1 to ship on time, that's still pretty good as far as your own product is concerned (80%!). If every manufacturer does this, though, you end up with a mess of cables and connectors that aren't quite compatible with each other -- and users are more miserable than before this whole exercise started.

I don't think anyone wants their USB-C device or cable to be intentionally incompatible with another USB-C device or cable. They just don't make 100% compatibility a hard requirement.

When engineers see 5 different physical ports, they want to consolidate, because there's no good reason for them to be different, and there are real benefits to be gained by having them all the same.

Only certain engineers. Don't group the ones who love gratuitous complexity and "value engineering" with those who think complex standards like USB-C are a horrible idea and would rather have separate and simple interfaces.

There's a reason RS-232/485 (along with good old D connectors) are still extremely prevalent in non-consumer equipment.

I think that's a very developer-heavy perspective. As a user who doesn't know anything about hardware communication standards (lucky me, perhaps?) I think the fewer connector types there are, the better.

With a single USB OTG cable I can connect almost any peripheral to my phone, including my Xbox 360 controller and my endoscopic camera. Why not a monitor as well? Why are there all these other weird non-USB connectors like HDMI and DVI? I'm sure there's a good answer from a developer's perspective, but from a user's perspective it makes perfect sense for there to be a single connector to rule them all.

Edit: another example is that sometimes I connect more than one mouse or keyboard to a single computer. Imagine if there were still dedicated ports for each type of peripheral – would I need to buy a special motherboard with two mice slots? Or a mouse port expansion card? What a nightmare.

Completely interchangeable cords for everything is an excellent goal that everyone agrees would be a beautiful thing. That idea/direction is not what we're criticizing. It's the implementation that we're criticizing. In particular, the implementation that still requires separate cords for separate tasks (because each USB-C cable supports an unknown subset of the capabilities) but removes the labels and visual / physical cues that you could use to identify which tasks it supports. Not only does this fail to achieve the dream, it fails to even live up to the previous generation.

Eventually we'll reach the dream and it will be great, but there was no need to jump in the pit of spikes on our way there. It wasn't blocking the road, it wasn't camouflaged, it was out there in the open, and the USB-C guys decided it would be fun to jump in. Why?

And to think that USB was designed to kill FireWire and require a PC in the loop because Intel was scared of peripherals that could talk to each other without an intermediary!

Now we have those smart peripherals and 20x the complexity. Thanks Intel. And thanks Apple for making Firewire too expensive.

In my alternate universe my home AV equipment networks with GPIB and 1Ge PoE with SCTP.

"People ask me if I'm seriously suggesting that your keyboard and mouse should use the same connector as your Internet connection, and I am."

We already had the technology to do that decades ago, or more. If I had the right power and influence, I'd take the Ethernet connector and magnetics, make them smaller, add default PoE support, then release the smallest possible IP stack so that it can be implemented on small microcontrollers too. Very small devices would talk IP only; bigger ones could add upper protocols such as TCP, UDP, etc. All that while gradually phasing out USB. This way we would have open protocols on open media with some peculiar advantages over USB, openness aside: near-realtime operation; stackability (add a layer and you go to the Internet); intrinsic security (USB devices are getting more complex every day, yet there's no such thing as a firewall against malicious ones, while Internet protocols support filtering by default); galvanic signal isolation (excellent for instrumentation or audio stuff); and cables that can safely extend a lot farther than USB ones. Where low power consumption is vital, magnetic transformers could be swapped for optocouplers.

Except for cost or being "too open to be profitable" (royalties) I don't see a single reason why a beefed up Ethernet shouldn't replace USB entirely.
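The "very small devices talk IP only" idea can be made concrete with a toy sketch: a peripheral announcing itself with a single UDP broadcast datagram on power-up, mDNS-style but far simpler. The port number and message format here are invented for illustration, not any real protocol.

```python
import socket

ANNOUNCE_PORT = 50505  # hypothetical port, not registered anywhere

def announce(name: str) -> bytes:
    """Build the announcement datagram a tiny device might send on power-up."""
    return f"HELLO {name}".encode("ascii")

def send_announce(name: str) -> None:
    """Broadcast the announcement on the local network segment."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(announce(name), ("255.255.255.255", ANNOUNCE_PORT))

print(announce("keyboard-01"))  # b'HELLO keyboard-01'
```

A device this simple needs only a minimal UDP/IP layer underneath, which is the appeal of the proposal; discovery, security, and congestion handling are where the real complexity would creep back in.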

If it worked, it would be great.

It wouldn't, though. Instead, it would collapse under the weight of IP's management capabilities and worst-in-class feature creep. Despite decades of effort by people like Stuart Cheshire, getting two computers to talk over Ethernet is still orders of magnitude more difficult than getting them to talk over a thumb drive. Why would I want that for my mouse, keyboard, and display?

Don't get me wrong, I've read Cheshire's zeroconf book and I'm a big fan of zeroconf IP and mDNS, but technological solutions to these problems are a dime a dozen. Political coordination is the difficult problem and from that perspective mDNS hasn't succeeded (Windows). The network guys can't even coordinate well enough to get name resolution working and I should trust them with doing application-layer coordination competitive with USB and PCIe? Please.

I urge you to look at IP (v6) stacks that are small enough to be feasible on small microcontrollers, or even better, their spec, e.g. 6LoWPAN, and all the trade-offs and shortcuts they have to make.

Then, re-evaluate if the increased complexity and energy consumption is worth it, e.g. just for a mouse or keyboard.

Then go ahead and specify how you want this spec to have comparable compatibility modes to USB-C, e.g. if you are going to run displays and external hard drives, as well as HIDs and power...

Considering the alternatives, USB-C is not too shabby for the intended uses.

I didn't have mice or keyboards in mind, but yeah, a full stack would surely be overkill for their minuscule uCs; this would call for more intelligent devices, peers rather than slaves, which for the smaller ones would be a problem right now. I'm far from an expert, and you're 100% right about the complexity involved, but I still dream of a standard for devices and systems where just adding a software layer can extend board-to-board communication to a geographic level, and Ethernet seems very close to that.

The temptation to "standardize" 20 different indistinguishably incompatible variations of anything is not new. The last several HW generations did a pretty good job resisting it, though, and this one emphatically does not. Something changed. I'm wondering what that "something" was.

I think that "something" is the expectation of wireless takeover for nearly all _consumer_ use cases and the ensuing competitive landgrab over future protocols above the PHY layer. It's an unguided attempt to see what sticks in the interim as part of this peculiar technological purgatory.

Specifically, it was motivated by certain interoperability and regulatory floodgates being opened worldwide: high-bandwidth line-of-sight spectrum becoming unlicensed; vast-area, indoor-penetrating national digital dividends from band compaction as TV broadcasts transition away from wasteful allocations; and (at least in the US) militaries becoming frustrated enough with industry prioritizing the supply chain for hyper-specialized, useless-for-DOD-R&D consumer circuitry over general-purpose hardware that there is essentially an unspoken nod to defense contractors: fulfill the requirements of reconfigurability and you'll land the political backing needed to avoid the business trap of previous administrations' regulatory bodies shutting down your tech and revenue-generating capacity.

Then of course there's the growing consumer trend toward wireless technologies, such as forgoing traditional ISPs for mobile broadband LTE. After the backend transition to an all-IP core network, LTE today is fairly on par with (or even lower-latency than) many wired WAN connections, with far improved reliability, robustness, mobility, shareability, and flexibility in choosing MVNO bandwidth resellers, which will now offer low-cost, reasonably rate-limited wireless pipes to the Internet without the bullshit of yesteryear.

Anyone who isn't susceptible to advertising gimmicks knows that with improved latency, today's cheap rate-limited "unlimited" bandwidth is perceptually infinite for the amount of data a human brain can even process. As long as you choose your 'data providers' carefully, eliminate unreasonable people from your social circle (such as 'audiophiles' or 'videophiles' who insist on conspicuous data consumption for snob appeal), and completely block all the malvertising, tracking junk, and generally clueless companies with anti-hero images or insane JS dependency webs, you can have a far better time on the 'Net than anyone else.

Then add in all the technical improvements of the past few years: wireless PAN peripherals for a better connection UX; compact, high-density rechargeable batteries; industry adoption of inductive wireless charging outside of niche cases like water-immersible toothbrushes; low-energy transmission developments, such as building on mature RFID back-scattering techniques as a legitimate form of transmission when coupled with processing-intensive error-correction and noise-immunity coding that is now feasible even for lowly uCs; and software-defined radios becoming commonplace thanks to cheap, fast reprogrammable logic and common open-source low-level networking stack components available entirely in software, such as Linux's softmac or GNU Radio blocks.

Finally, mix in politics and business realities: Chinese workarounds to any current and future DMCA-enforced IO DRM; the lessening grip of FCC authority against technical workarounds to government spectrum mismanagement; public awareness of things like TEMPEST surveillance, which removes one of the last selling points of wired devices as privacy enhancements and lets consumers finally jump ship into the deep crypto waters; and the drive to miniaturize cables and connectors, which is physically at odds with realizable tensile strength, power dissipation, heat tolerance, and shielding requirements. There's also the humorous scenario of IO pads on a silicon die eventually consuming far more real estate than the processing circuitry itself, while logic transistors switching at GHz rates are far more effective radiators of high bandwidth (and potentially high goodput) once you factor in the signal loss from the impedance mismatch between small logic and huge external pins on the wired route, assuming competent software engineers who can modulate those stray emissions instead of requiring another pass by electrical engineers to filter out that capability for regulatory compliance.

The original designers thought people would just make uber cables with full support. Manufacturers noticed they could shave off a few cents removing support for features "nobody uses".

Did they really expect every cable to have an active chip in it just to support 5 amps instead of 3?

And it should have been obvious that charging-focused cables with no superspeed lanes would exist.

When it comes to different qualities of wire, and Superspeed vs. Superspeed+, I'm ready to blame the manufacturers. But is that actually a problem in practice? Are there 1 meter cables that have been tested to not support Superspeed+?

And the mess of what-supports-what with thunderbolt speeds is definitely Intel's fault.

The less you twist the cable the less wire you use - but at the cost of noise. Manufacturers have no problems making the cheapest cable that just barely negotiates at the lowest speed level.

Like with SD card speeds, it's a hard sell on the store shelf to get the more expensive one. People don't know how Superspeed vs Superspeed+ is going to affect them. Plus, given the markup at Best Buy, people can barely afford the cable as it is.

I always found it funny Fry's has a separate cable section near where they sell soldering irons for people who know better.

This is how market competition works. When choosing which product to buy, a few people do in-depth research, but most people just look at what is immediately obvious. If a difference in features or quality isn't readily apparent at this stage (and/or the buyer is not educated enough on the subject to understand/appreciate it), then it doesn't provide a competitive advantage.

Put another way, people will not buy a product just because it is better. It only matters if it is better in a way that is hard to miss in the short time (think 15 seconds or something) that someone might spend deciding which product to buy.

This is one reason competition isn't a silver bullet for ensuring the best products win.

That's assuming there is no iteration. The people who buy products without doing the research get burned and so next time they do the research.

There is no easy way to avoid this. If all somebody knows is that they want a "video cable" they're going to come home with an RCA cable. At best you have someone you trust who can answer your questions, at worst you pay the cost of learning for yourself.

I think it's just a grotesque proliferation of complexity and "feature-driven standards development".

You're probably right, but the temptation to sacrifice HW UX for features isn't a new phenomenon yet the equilibrium seems to have shifted noticeably in the latest HW generation.

> What happened?

It became very cheap to put inside a chip. That's it.

Now we get a decade or two of learning why that chip firmware needs to have open-source mechanisms that enforce separately defined policy.

For $34, that looks pretty good (though I hate that we need this). I'd pay much more for something that could do signal integrity measurements on the cable as well.

That seems like a logical way forward for guys like YZXStudio: adding a simple vector analyzer function to their 'USB testers'.

Probably offtopic, but do you know if there's any relationship between RDTech and YZXStudio? They've got very similar designs, but RDTech doesn't seem to be a "simple" cloner.

They copy parts of each other's designs; that's a different culture of innovation in China. Bunnie has some explanations of that phenomenon tagged 'gongkai': https://www.bunniestudios.com/blog/?cat=20

I figured it was something like this. I wonder how you distinguish gongkai from straight-up cloning, though. For the USB-C board that I posted in this thread, the copies have taken the text but are missing half the features - not an improvement.

> I'd pay much more for something that could do signal integrity measurements on the cable as well.

And how do you propose performing such measurements for less than the six-digit entry cost of acquiring the calibrated instruments necessary for any test that could be considered meaningful and proper?

This is exactly what I'm looking for at the moment! This + passthrough would be amazing!

And that would also check if the cable complies with the specification.

Just purchased a TS-80 soldering iron and have been incredibly frustrated by the power requirements. While it is USB-C, the device requires 2A at 9V. Out of four Type-C PD chargers, I don't have anything capable of this combo. Shopping around for something other than a dedicated QC3.0 charger, it's been interesting to see how few of the manufacturers publish a convenient and complete set of charging specs. Something like this will certainly come in handy.

It's unfortunate, but as you've noted, the TS-80 is QC3.0 only, so you're going to need a dedicated charger (though I thought the TS-80 shipped with one?). It doesn't support negotiating with USB-PD devices to get the power it needs, even though they'd otherwise be compatible. If it had done QC 4 or USB-PD directly, it'd be a lot easier to find compatible devices (QC 4 added compatibility with USB-PD up to 27W).

I wonder if it'd be possible to make a QC2/3 -> PD translator? (for up to 12V/1.5A)

Not a huge reason why you couldn't. There are some differences in current and voltage resolution: PD does 20mV increments of the voltage, and similar for current, depending on the mode you ask for; QC 3.0 does it in 50mV and 100mA steps, IIRC. So you'd have to round up for the current and hope for the best with the voltage, which would likely be fine - cable losses would exceed the differences between steps. But that could result in less accurate regulation on the other end if it jumps too much and causes the device to request a lower voltage until it drops to the next step.
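The rounding problem described above is easy to sketch: a PD request arrives on a fine-grained grid, and a translator must round it onto the coarser grid a QC charger supports. The step sizes here follow the parent comment's recollection (20mV for PD, 50mV / 100mA for QC 3.0) and should be verified against the actual specs before building anything.

```python
PD_V_STEP_MV = 20    # PD voltage granularity (assumed, per parent comment)
QC3_V_STEP_MV = 50   # QC 3.0 voltage granularity (assumed)
QC3_I_STEP_MA = 100  # QC 3.0 current granularity (assumed)

def round_up_to_step(value: int, step: int) -> int:
    """Round a requested value up to the next multiple of `step`."""
    return -(-value // step) * step  # ceiling division

def translate_request(voltage_mv: int, current_ma: int):
    """Map a PD (voltage, current) request onto the QC 3.0 grid."""
    return (round_up_to_step(voltage_mv, QC3_V_STEP_MV),
            round_up_to_step(current_ma, QC3_I_STEP_MA))

print(translate_request(9020, 1250))  # (9050, 1300)
```

Rounding both values up errs on the side of giving the sink at least what it asked for, at the cost of the slight regulation mismatch the comment describes.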

Yeah... I just saw the components to build one.

If you want to go from PD to QC3, someone has a USB PD sink board that asks for the most power from the PD source and outputs that as a DC voltage.

Then AliExpress sells QC3 boards that take an input voltage.
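The "asks for the most power" step on such a sink board is simple to sketch: scan the advertised source capabilities, given here as (voltage_mV, max_current_mA) pairs, and pick the highest-wattage profile. The capability list below is illustrative, not read from a real charger.

```python
def pick_max_power(profiles):
    """Return the (mV, mA) profile with the greatest wattage."""
    return max(profiles, key=lambda p: p[0] * p[1])

# Hypothetical capabilities a 60 W PD source might advertise.
capabilities = [(5000, 3000), (9000, 3000), (15000, 3000), (20000, 3000)]
print(pick_max_power(capabilities))  # (20000, 3000), i.e. a 60 W request
```

A real implementation would also respect the sink's own voltage limits rather than blindly taking the maximum, which is exactly why a generic "max power" board works for a DC-input converter but not for direct-attach loads.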

Quite a frustrating limitation!


1×USB-A (Orange): 5V-0.3A / 9V-2.0A / 12V-1.5A

1×USB-C: 5V-3.0A / 9V-2.0A / 12V-1.5A

It's a quality unit: https://lygte-info.dk/review/USBpower%20Xiaomi%20Mi%2060W%20...

Awesome you linked to lygte-info.dk! It's one of my go-to sites to find decent tests of chargers.

Lots of reviewers simply don't do decent testing like the writer Henrik Jensen does. For instance he does over-voltage testing, which sometimes breaks an otherwise decent-looking charger.

Looks like I'll be sticking with a TS-100 that runs off of a Thinkpad power supply just fine.

I'd still check out the TS-80; given a good USB battery pack it can be more easily portable than the TS-100, and some of the design changes are really nice improvements over the TS-100. I kind of hope they take them (mostly the tip design) and spin them into a new rev of the TS-100 for an updated setup.

The EEVBlog does a good review of things:

https://www.youtube.com/watch?v=_Z9es-D9_8g [TS-80 Review]

https://www.youtube.com/watch?v=EEYt2jTTVdE [TS-100 vs TS-80 comparison]

Warm yourself with the tire-fire that is USB-C PD [1]

[1] https://twitter.com/whitequark/status/1035729916149604353

Oh, boo hoo: they enumerated an enormous number of corner cases and handled backwards compatibility, crappy cables, all sorts of things. The spec also reflects decades of backwards compatibility and painfully learned mistakes (consider mini-A), not to mention some weird pathologies (like the hi-speed / full-speed / super-speed naming).

It sucks that the standard is so complex that a small number of design houses and consulting shops will end up as a de facto cartel holding the Sacred Knowledge, but nobody's stopping you from coming up to speed by reading the whole spec and doing a lot of implementing. But it's not like you can build a solid TCP stack these days either simply by reading the original late 70s RFCs.

And you don't have to use this spec -- you can still use RS-449 if you think the USB spec has become overly complex. In fact I used that in a design only a few years ago!

> But it's not like you can build a solid TCP stack these days either simply by reading the original late 70s RFCs.

I literally did this. It's called smoltcp (https://github.com/m-labs/smoltcp) and we use it in production as an lwIP replacement with great results.

"You can't."

"You can, and I have running code."

This is why HN is great.

Does it work reliably on the open internet? AIUI, it's the interoperability with all the edge cases of badly implemented protocols that leads to statements like the GP's.

I think it would be cool if a project had a self-imposed token count limit. The project would never be allowed to grow more than X tokens long.

Then you can guarantee the project is small and haiku-like forever!

lwIP started off small, and it grew...

The "smol" in "smoltcp" is mostly talking about the internal architecture and exposed API. For example, it will never support true zero-copy sockets, because the API burden due to restricting itself to safe Rust is too high.

I don't think any TCP/IP implementation is going to be "haiku-like", the protocol stack is way too messy.

I love Hacker News. I was just looking for this library to use in my embedded toy.

the worst thing about having read the USB PD specification is having to live with this knowledge

Having done the same, I agree completely

Yeah, good luck with 5A on such small pins and PCB traces after prolonged usage. I see many problems with existing 500mA delivery already - increased contact resistance, cold solder joints, and so on. Insurance is a must-have.
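The contact-resistance worry is easy to quantify: ohmic heating in a contact goes as P = I²R, so resistance creeping up with wear multiplies directly into heat. The resistance figures below are illustrative round numbers, not measured values for any particular connector.

```python
def contact_power_w(current_a: float, resistance_ohm: float) -> float:
    """Ohmic power dissipated in a single contact: P = I^2 * R."""
    return current_a ** 2 * resistance_ohm

# A fresh contact vs. a worn one with ten times the resistance, at 5 A:
print(contact_power_w(5.0, 0.005))  # 0.125 W
print(contact_power_w(5.0, 0.05))   # 1.25 W, enough to get noticeably warm
```

Because power scales with the square of current, the jump from 3A to 5A cables nearly triples the heat dissipated in any given contact, which is why degraded connectors become a problem at the higher current levels first.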

I like how you can ask it for the country code the power is from

It's scary to consider that whoever added that "feature" was probably thinking of region-locking...

Before, I used to think that hardware specifications and protocols would be simpler purely because hardware people don't really like complexity and it would make implementation harder - and that was usually the case. Now it seems like elements of Enterprise Java have slowly crept in...

The fact that the adapter has far more nonvolatile memory than it needs is somewhat unsettling, not just in the wasteful sense but also in the "hidden surveillance device" sense.


The spec includes cryptographic signatures for the whole signal chain for "authenticity".



And companies have in fact used this to make their devices only work with their own brand of chargers. I know HP has at least, because I have one. Does a pretty great job of defeating the "universal" aspect of the Universal Serial Bus.

In addition to preventing the use of third party power bricks, it also makes all of those USB-C docks with power passthrough useless.

My god. I never thought I'd see the day they DRM power.

I am genuinely shocked. This is definitely not widely known.

And meanwhile Enterprise Java has become lightweight and simple (to use, at least). The fact that TomEE exists implies that it's possible to implement and support a compatible server even for a small company/team, as long as they know what they're doing.

CS, EE, and computer engineering are going to be great careers for decades yet to come.

Is EE really a good career choice, though?

umm, is it not? Literally 1 week into an EE major.

Learn to program, really well.

>Warm yourself with the tire-fire that is USB-C PD [1]

A good idea is to not let too many people onto standards bodies.

Intel alone had more than enough clout to push mass adoption.

I've really wanted to take a shot at a USB-C KVM. I realize this would require USB-C DisplayPort DSPs and a lot of knowledge that's waaaaay over my head. I know some of the ViewSonics have built-in USB-C KVMs, but I don't think any other monitor manufacturer has gone this route yet.

Actually, you just need passive signal switches for this - rather simple.

Can you provide some evidence? If that's so, surely many have already done this.

What's the canonical project for the source code? The links in the writeup go to github projects with deprecation warnings and links to other projects. Are there yet more projects?

What's the best one to use if you want to actually build a widget that talks to the FUSB302 and does PD negotiation?
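For anyone wiring up the FUSB302 path, an early PD step is parsing the Source_Capabilities message read out of the chip. Below is a minimal sketch of decoding a Fixed Supply PDO; the bit layout (voltage in 50mV units at bits 19:10, max current in 10mA units at bits 9:0, type field at bits 31:30) follows the USB PD spec's Fixed Supply PDO, and the example word is constructed for illustration rather than captured from real hardware.

```python
def decode_fixed_pdo(pdo: int):
    """Return (voltage_mV, max_current_mA) for a Fixed Supply PDO word."""
    assert (pdo >> 30) == 0b00, "not a Fixed Supply PDO"
    voltage_mv = ((pdo >> 10) & 0x3FF) * 50  # bits 19:10, 50 mV units
    current_ma = (pdo & 0x3FF) * 10          # bits 9:0, 10 mA units
    return voltage_mv, current_ma

# Construct a 5 V / 3 A fixed PDO: 100 * 50 mV = 5 V, 300 * 10 mA = 3 A.
pdo_5v_3a = (100 << 10) | 300
print(decode_fixed_pdo(pdo_5v_3a))  # (5000, 3000)
```

Real PDOs also carry flag bits (dual-role power, unconstrained power, etc.) in the upper field, which this sketch ignores.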

Oh hey, I remember chatting with the author about this project at EMSL a year ago. Glad to see it complete!

I didn't realize you could pull 87W from the Macbook Pro adapter using USB-C!

The fact somebody felt the need to build this (great work btw!) tells us everything that is wrong with this “standard”.


Would you please not post off-topic rants to HN?
