When you connect two USB-C devices today, you have almost no idea what is actually going to happen, which device is the master, and which way power will flow.
While having one connector and cable type for everything seems like it would be a good idea, in practice it's turning out to be a giant mess. Maybe it'll clear up in a few years, but given the race to the bottom in price and quality in the accessory market, this seems doubtful.
Maybe that's on USB spec people for not having good material, but on the other hand maybe that's on the manufacturers for not hiring EEs who can actually read a damn spec sheet properly...
I think this is the hardware equivalent of expecting programmers to write bug free code on the first try.
Most of the time when a programmer wants to implement something a little complex and a little outside their expertise, they use an external library. Likewise, EEs will often buy a chip produced by a third-party manufacturer and use that to handle it. But the abstractions available to EEs are often a bit leakier than those available to us programmers, since they are more constrained by the laws of physics, and as a result that third-party chip is still harder to use than a software library.
Then they add this part to their design, and lay out the schematic and the printed circuit board according to the manufacturer's best practices. Once the first boards come back they measure the signal levels and waveforms in their circuit to verify they are within the specifications and that they match the ones on the eval board. Then they will 'corner' test the circuit (the corners being the low/high ends of the temperature range and the low/high ends of the voltage range) and verify it continues to work according to specification at all the 'corners' (if it does, then you are generally OK assuming it will work at all points "inside" those four corners.)
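For the software folks, the corner sweep is conceptually just this (a toy sketch; the limits and the "measurement" are invented for illustration):

    # Toy sketch of corner testing: sweep every combination of the voltage
    # and temperature extremes and check the part stays in spec.
    from itertools import product

    VOLTAGE_CORNERS = (3.0, 3.6)    # hypothetical min/max supply, volts
    TEMP_CORNERS = (-40, 85)        # hypothetical temperature range, Celsius

    def measure_output_mv(volts, temp_c):
        # Stand-in for a real bench measurement at this corner.
        return 1200

    for v, t in product(VOLTAGE_CORNERS, TEMP_CORNERS):
        mv = measure_output_mv(v, t)
        assert 1180 <= mv <= 1220, f"out of spec at {v} V, {t} C"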
There are people who are either in a hurry or don't care who wire something up according to the application note, power it up once and call it 'good'. I've seen a number of cost reduced 'clone' equivalents that meet that description. @kens has done a number of blog posts that show this sort of mentality in detail.
It makes sense for a monitor to power the laptop because it’s mains connected. But then you have one of those classic programming problems: if there are multiple inputs providing the same thing which do you consider the source of truth?
What you're describing might go beyond that though; in particular, if there are multiple ports and hubs involved, it might not just be a USB problem.
The real difference is USB was a completely new technology with zero market penetration so growing pains were inevitable. USB-C should not have these growing pains as they should have learned from 20+ years of developing the standard.
Yes. Just read the NathanK or Benson Google+ pages. Very few USB-C accessories are compliant - almost every one has some bug in its implementation, some worse than others. Apple cables are good, but even they took a couple of iterations to get right.
Interestingly, there exists a sort of upper threshold where higher amperage is less likely to be fatal, because the severe muscle contraction prevents fibrillation, IIRC.
Edit: the above holds true for AC but not necessarily for DC from a battery, where the thresholds are typically higher. But the heart is very sensitive and fibrillation occurs at very low currents passing directly through it.
EDIT - here's some math:
The U.S. Navy FIRE CONTROLMAN Volumes 01 - 06 & FIREMAN gives 1500 ohms as a common resistance approximation between extremities, either hand to hand or hand to foot, for an average human body.
I = V/R = 12V / 1500 Ohms = 0.008 A
8 mA is not nearly enough to kill you.
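Or, for the programmers following along:

    # Same arithmetic, checked in Python:
    V = 12.0      # volts, car battery
    R = 1500.0    # ohms, the Navy's extremity-to-extremity approximation
    I = V / R     # Ohm's law
    print(f"{I * 1000:.0f} mA")   # -> 8 mA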
But sure, it's probably best to not touch the car battery terminals when your hands are bleeding.
If electricity takes all paths, then what's the point of the mantra? It takes the highest-resistance path, the least-resistance path, and everything in between. The mantra seems almost deliberately designed to mislead.
In short, it's complicated.
So with a single low-resistance path, give or take, essentially all of the current will go that way. But with two paths of equal resistance, half the current will go each way.
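To put numbers on "all paths":

    # Each parallel path across the same 12 V source carries I = V/R,
    # so the split is inversely proportional to resistance.
    V = 12.0
    for r1, r2 in [(1.0, 1000.0), (10.0, 10.0)]:
        i1, i2 = V / r1, V / r2
        total = i1 + i2
        print(f"R1={r1} ohm carries {i1/total:.1%}, "
              f"R2={r2} ohm carries {i2/total:.1%}")
    # -> the 1 ohm path carries ~99.9% of the total; equal paths split 50/50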
Maybe mantra wasn't the right word; Wikipedia calls it a heuristic. Regardless, I agree it is very misleading and I would never say anything like that when teaching or explaining electricity. Why does it keep getting repeated? Who knows. The inertia of masses of people without much understanding of electricity would be my guess...
This spec, by contrast, must be far more complicated; also, since it is so much faster, quality components supporting it all cost more - which invites cutting corners as a cost-saving measure.
Also, if as a consumer you can't predict what "should" be about to happen (which way power will flow, which device will be the host), that is down to a bad spec regarding connector types. For example, couldn't a universal visual indicator on a port (including on the bottom of a phone) clearly indicate its capabilities and what will happen? (Through symbols.) They do not. Couldn't a software popup make you choose? They do not.
Not to mention that it is not clear what should happen. If a phone can power a USB peripheral (as with USB On-The-Go), that means it could charge a second phone.
So if you connect two phones, which one should charge the other? With older versions of USB the answer was simple: whichever phone you plugged an On-The-Go adapter into became the master, instead of taking a phone's usual role as a slave.
The fact that USB-C does not make these things clear is down to the spec and its design.
Can't USB-C simply be badly designed? (Overdesigned, underdesigned, badly designed.)
Consider that a 1-page spec that can be read in 15 minutes can be implemented by anyone. A five-thousand-page spec that takes 1,000 hours to read and understand can be implemented by no one (only by teams). In between we have both the previous versions of USB and USB-C - but is it possible that USB-C is too far in the latter direction?
They do when I e.g. connect two Android phones together.
It's not dissimilar to the box-store HDMI/audio/etc. cable strategy: cheap, absolutely garbage cables at a mid-range price, or upper-mid-range cables at grossly over-inflated prices. Consumers are tricked because of course the $200 HDMI cable looks noticeably better than the $20 one. In reality, the true retail prices of the cables they're comparing should be more like $2 and $20, and the difference between that $20 cable and one that actually would cost $200 is only noticeable with high-end test equipment (or by 'audiophiles', who have the super-power of being able to see/hear a difference in cables as long as they know the price).
There are also tons of cases where the device you want to power from the battery starts charging the battery instead. With both ends of the connector being identical I'm not sure there even is a _good_ way to fix this that ordinary consumers will understand; the better solution might have been to keep different cable ends. I hope this mess eventually gets better.
The LEDs don't have to be bright, just dim little indicators that are only really noticeable if you're looking for them. Then you get an instant idea of what the devices are doing.
I've learned, somewhat superstitiously, that if I don't connect and disconnect the USB-C cables in the right order the laptop may stop responding and I have to do a hard reboot. I thought following that order solved the problem, but even now it still occasionally happens.
Your question makes me wonder if something like this isn't the cause (Mac misreading USB-C signal).
So yes, it is possible to detect which end gets connected first, but this absolutely shouldn't have any side-effects. It is unintuitive and fragile (if the 'master' device restarts, it will think the other end connected first, suddenly becoming 'slave').
Device A receives the cable first, upon receiving the cable, Device A notices that the cable is not "hot" ("hot" in the sense that there is voltage/activity on the wire), so then Device A decides to "turn on" the cable, make it "hot".
Now Device B receives the cable, it notices that the cable is already "hot" and decides that it must be the second device.
I would imagine it could work that way.
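Something like this toy model (to be clear, this is my guess, not the actual USB-C mechanism, which, as noted elsewhere in the thread, uses CC resistors before any power is applied):

    # Purely illustrative "first one to see a cold cable drives it" handshake.
    class Device:
        def __init__(self, name):
            self.name, self.role = name, None

    cable_hot = False   # shared state of the wire

    def attach(device):
        global cable_hot
        if cable_hot:            # line already driven: the other end was first
            device.role = "second"
        else:                    # line cold: drive it and claim "first"
            cable_hot = True
            device.role = "first"

    a, b = Device("A"), Device("B")
    attach(a)   # A sees a cold cable -> "first"
    attach(b)   # B sees a hot cable  -> "second"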
I'm also pretty sure that USB cables are "dumb" cables and not active cables like QSFP (those have chips at each end).
This is all done before power is applied, as USB Type C is very explicit about not going hot (apart from a weak 5V V_conn for powering the cable).
Also, QSFP is not a cable but a pluggable-module spec. A QSFP module is active, but the fiber-optic cable you connect to it is dumb. Link detection there works by sensing beam power.
The core problem with power is that you can't really pump laptop charge wattage down a USB cable without raising the voltage above 5V. Once you accept that, the engineering gets quite a bit more complicated and you have to negotiate so one side doesn't blow the other up.
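To put numbers on it (using the PD fixed voltages from memory; treat this as a sketch):

    # Why the voltage has to rise: at the usual 3 A cable limit (5 A with an
    # e-marked cable), wattage scales only with voltage.
    for volts in (5, 9, 15, 20):
        for amps in (3, 5):
            print(f"{volts:>2} V x {amps} A = {volts * amps:>3} W")
    # 5 V x 3 A = 15 W (phone territory) ... 20 V x 5 A = 100 W (laptop territory)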
And, they're not expensive. EEs are just cheap mother fuckers, often, and they think they can do it "well enough" themselves and save a cent or two on the bill of materials.
It is the fault of the protocol.
USB is a clusterfuck.
Is this just a "tragedy of the commons" where every manufacturer expects the others to follow the spec so that they can skimp, or is there some kind of fundamental flaw in the USB-C spec that is making it so seemingly dangerous and difficult to use correctly?
As others note, the poor implementations wind up causing us to use only the adapters shipped with the device in the first place, which is actually a worse situation than with standard USB - right now, I know that I can plug a device into any USB charger and get 5 volts. Amperage may vary considerably, but unless the adapter is a real lemon (and yes, I know those exist), it's not going to kill my phone or burn my house down if I don't get the planetary alignment of adapter, cable and device right.
- the consumer - who buys the product
- the product maker - who buys from the manufacturer
- the manufacturer
If the manufacturer wants to implement only a subset of the full spec, they should be allowed to do that. This has additional benefits with regard to minimizing the resources used.
Granted, it's a cool spec with great potential and I enjoy seeing it in action with regard to SBCs. But the manufacturer is also a customer. If it doesn't work for them then there probably need to be some changes.
First there is the C plug and cable spec, including provisions for converting between C and the various A and B sizes. This is the spec that introduces various resistors to signal whether the cable is a converter or (a very big "or" that has created many problems) whether it can handle various wattages at 5V.
Then there is the power delivery spec, which on paper can be used with any USB plug (yes, even your old A and B formats) and allows current to flow in either direction at up to 20V.
And thirdly there is a continuation of the 3.0 data spec, 3.1, which includes a provision for using various wires in the C cable in an alternate mode. This mode allows anything from digital video to PCI bus traffic to travel over the same cable, if both ends support the protocol traveling over the alternate-mode wires. Outside of alternate mode, the 3.1 data spec can also be used with 3.0 A and B ports.
So even if your device has a C port, it may not be able to handle more than 5V at 0.5A and USB 1.0 data speeds...
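For the curious, the resistor signaling in that first spec works roughly like this: the power source puts a pull-up (Rp) on the CC pin, and its value advertises how much current it can supply. Nominal values as I remember them - double-check the spec before trusting these:

    # Nominal Rp pull-up values (to 5 V) and what they advertise:
    RP_ADVERT = {
        56_000: "default USB power (500/900 mA)",
        22_000: "1.5 A at 5 V",
        10_000: "3.0 A at 5 V",
    }

    def advertised_current(rp_ohms):
        # A real sink infers this from the voltage Rp produces across its
        # own 5.1 kohm pull-down (Rd); here we just look it up directly.
        return RP_ADVERT.get(rp_ohms, "unknown / non-compliant")

    print(advertised_current(10_000))   # -> "3.0 A at 5 V"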
> USB terminal: USB Type-C terminal. Used for charging or for connecting to the Nintendo Switch dock.
As does the Japanese site:
> USB Type-C™ terminal
As much as it pains me to say it, a "pay us to use the port shape" group like HDMI that will threaten litigation unless you pay them to use the port would probably have prevented this kind of thing from being as widespread with USB-C as it is. While just about everyone doesn't want that to be the case (myself included), I don't see any other way of aligning incentives to make it harder to use the port/spec incorrectly than it is to make it correctly.
It solved enough design issues for them to let go of their old habit of having proprietary connectors for charging - a habit which made them a lot of money.
(Why do you care about DisplayPort? Can't you get a converter in the other direction?)
Note: There is probably an ethical issue with this, but it doesn't jump out at me.
But I've become very leery about actually trusting it in practice - there are so many examples of bad cables, or devices which don't quite follow the specification, causing things to break badly.
If you end up having to follow a de facto "only use first-party tools" rule for safety reasons, I'd almost rather manufacturers went back to proprietary connectors. Those aren't interoperable, but at least they don't pretend to be and risk me breaking everything.
Half of those are not spec-compliant. There are only 4 legitimate types of cable: 3.0 non-power/non-Thunderbolt, 3.1 non-power/non-Thunderbolt, power-capable non-Thunderbolt, and Thunderbolt. That's 3 more types than there should be, but let's not make the problem worse than it is.
That is insane. Who thought this would be ok?
Are there any cables that are full-spec?
> At launch, there'll be one passive Thunderbolt 3 cable that supports Thunderbolt, USB 3.1, and DisplayPort 1.2, but with a max bandwidth of only 20Gbps.
Whoever thought that using the same cable for Thunderbolt was OK. Probably Apple or Intel.
Most of it is not even with the cables, but the chips and software at either end.
I could be wrong though. I've got a similar setup and I've only seen the error once or twice.
It also refuses to mount my phone over straight USB-C, and the known workaround is to go USB-C -> A -> C.
At this point in my relationship with Apple, I can only assume this is deliberate.
I got the new XPS 13; it only has Thunderbolt/USB-C ports and it's pretty much 50/50 whether my external monitor will wake up.
I figured it was the cable and was about to buy another one but you saved me the trouble.
I also got a HooToo adapter that's supposed to support power delivery, but that hasn't worked either :/
Why anyone thought it would be different this time around, I don't know.
First of all, Apple came up with a slightly different set of resistors on the data pins (meaning that the data-in and data-out lines will read slightly different voltages) to signal to their devices that they could go above 1A draw while charging.
The official charging spec says to put a resistor between the data pins (inside the charger to support detachable USB cables), or simply to short them with a blob of solder.
Your JBL speakers are likely reacting as if it was plugged into a normal USB port.
Also keep in mind that the charging spec came about because China and the EU wanted to deal with the piles of incompatible chargers that were going into landfills.
Thus older USB ports max out at 5V 0.5A, as that was the spec back when USB 1.0 was launched. Also why some external HDDs come with a Y cable that draws power from two USB ports to get the motor spinning.
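If it helps, here's the detection dance as a toy sketch (the helper and thresholds are hypothetical, and the 2.0 V / 2.7 V combinations are from memory):

    # Toy classification of what's on the other end, given measured D+/D-
    # voltages. BC 1.2 dedicated chargers short D+ to D-; Apple-style
    # chargers park both lines at fixed divider voltages (combinations
    # around 2.0 V and 2.7 V encode the allowed current).
    def classify_port(d_plus_v, d_minus_v, d_lines_shorted):
        if d_lines_shorted:
            return "dedicated charger (BC 1.2): draw up to 1.5 A"
        if d_plus_v > 1.0 and d_minus_v > 1.0:
            return "Apple-style charger: divider levels encode the amperage"
        return "plain USB data port: stay at 500 mA"

    print(classify_port(2.7, 2.0, d_lines_shorted=False))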
Both the update and third party accessories seem implicated to a degree.
I think maybe I just found a new preferred provider of cables and chargers.
It also gets me a full recharge of my 15" USB-C MBP (though you can't use the MBP and charge at the same time, it takes a while).
My 40W Anker PowerPort wall-charger also works.
Now I'm a bit worried about trying anything else, though. o_O
(Also, if you didn't know about tindie.com, and you are a certain sort of person, then I apologize for consuming the next couple hours of your day.)
I haven't played videogames regularly for a decade though so perhaps things have changed. I now use a PS4 controller to play some games on my computer, and it's really nice, integrates naturally.
The Wii and Wii U and DS have dozens of these stupid "quirks" which are the result of Nintendo just not knowing how to do things correctly. The reason I didn't buy a switch was because I was so infuriated by the Wii U's incompetence that I was certain the switch would bring me nothing but misery. I'm feeling pretty smug and validated right about now. This isn't the first of the switch's failures, either.
Nintendo are just miserably incompetent at firmware and software, I really wish they would just back some other hardware company to make the console for them, and Nintendo themselves should just focus on making games.
Better yet, Nintendo should just make their "fun" peripherals for PC, mobile, and existing consoles, and release their titles cross platform. Yes I'm bitter.
The MacBook charger, for instance, is missing the 12V profile, so the Switch charges slowly off that charger, since it ignores the 15V rail and has to fall back to 5V or 9V in theory...
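Roughly what the negotiation amounts to, with illustrative numbers matching the comments above (not measured values):

    # The sink picks the highest fixed voltage both sides support;
    # with no overlap above 9 V it has to settle for less.
    charger_offers = {5, 9, 20}        # MacBook-style charger, no 12 V / 15 V
    device_accepts = {5, 9, 12, 15}    # Switch-ish sink that wants 15 V

    usable = charger_offers & device_accepts
    print(f"negotiated: {max(usable)} V")   # -> 9 V, hence the slow charge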
There's also discussion about the new 5.0 update being buggy and compounding the issue.
I wonder if something has changed, or if this is an outlier, or perhaps my sampling (which of course is hardly enormous) has simply fallen entirely into a pool of strictly conforming equipment. I can certainly believe the last possibility.
I'm a software guy; what do these revelations imply about which other accessories may or may not be unsafe to use, whether they work or not? In particular, third-party charging cables. My girlfriend uses her MacBook charger to charge her Switch sometimes, and now I'm worried it could brick it one day.
USB C itself has been sort of a minefield of adaptors getting the spec wrong, so I doubt anyone is surprised that docks are buggy and even capable of damaging parts.
I don't think they have ever claimed that their USB-C connectors were compliant with the spec.
Nintendo (and up until recently, the videogame hardware industry at large) has a long history of making their own connectors...
When I travel and stay away from home with my Switch, sometimes I really yearn for the dock-to-TV functionality so I can play it with family/friends.
The dock is a pain to carry around
Seems more likely that whoever implemented that part of the Switch did a poor job. Was it even done in-house, or did they contract a 3rd party to hack it?
The specs page says there are three USB 2.0 ports and a power port (which, by looking at it, uses the same connector as USB-C).
Or is it more complicated than this?
Although at least they did the recall and presumably will be careful not to make such a costly mistake again.
Both have been solid with the Nintendo Switch. I have tried the Apple charger and it did exactly what this article said: basically stopped the ability to charge the Switch until I did a hard reboot.
(i.e. their 'compliance' has been tested on 240V scenarios)
All of this is just errors and flaws, rather than attempts to trip up third parties. They're extremely easy to replicate if you want to make a Switch accessory, but it means that shit might happen if you try the Switch with a non-Switch accessory.
Considering Nintendo's history of custom connectors and its priority on making child-friendly items, I do not think Nintendo would make a device with a standard plug where children plugging in a standard charger might blow things up.
The article said that their conclusion was that the main factor was the proliferation of a large number of low-quality games from numerous third-party publishers. This was a time when most people's only source of game reviews was magazines or the personal experience of friends. So a lot of games were bought blind, based just on the claims on the box. After people got burned on a couple of low-quality games - which cost about as much in today's money as today's games - they often stopped or greatly reduced their game buying. Hence, Nintendo decided that they had to control who could publish for their console.
Speaking as someone who was programming games for the Mattel Intellivision at the time of the crash, and got laid off because of it, Nintendo's analysis seems quite plausible to me.
Consider Intellivision games. The history for the other consoles is similar, but I know Intellivision best so that is what I will use.
Once upon a time, the only people who could write Intellivision games were engineers from APh. APh was/is an engineering consulting firm in Pasadena, CA, which was founded by and largely staffed by former and current Caltech students. APh designed the bulk of the electronics for the Intellivision, and more importantly wrote the ROM and the early games.
Not too long after that, the set of people who could write Intellivision games expanded to include Mattel engineers.
Games written at APh or Mattel could take full advantage of the ROM, which provided a lot of tools to make writing games easier. If you did not use the ROM you had to write a lot more code, know a lot more about low-level hardware details, and you might need to use a bigger cartridge, which could raise your costs significantly. (For example, the ROM provided high-level sprite handling. You could just give it a list of images, and tell it to animate the sprite with them at a specified rate. You could tell it to give the sprite a velocity and it would deal with moving it. You could give it callback routines for various types of collisions, such as sprite hits sprite, sprite hits background, or sprite hits edge.)
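In modern terms, that facility looked very roughly like this (a hypothetical Python re-imagining of the shape of it, not the actual ROM interface):

    # Loose sketch of the sprite services described above.
    class Sprite:
        def __init__(self, frames, frame_rate, velocity=(0, 0)):
            self.frames = frames          # list of images to cycle through
            self.frame_rate = frame_rate  # animation rate, handled for you
            self.velocity = velocity      # the ROM moves the sprite each tick
            self.on_hit_sprite = None     # collision callbacks, also ROM-run
            self.on_hit_background = None
            self.on_hit_edge = None

    player = Sprite(frames=["run0", "run1"], frame_rate=8, velocity=(1, 0))
    player.on_hit_edge = lambda s: setattr(s, "velocity", (0, 0))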
The set of Intellivision game writers further expanded when people who had been Intellivision developers at Mattel or APh started leaving to form their own companies. Several key APh Intellivision people formed Cheshire Engineering, which wrote Intellivision games for Activision. Imagic was formed by former Atari and Mattel people.
These companies had people who had used the Intellivision ROM. They couldn't take the documentation with them when they left, but they could direct a clean room reverse engineering effort to produce their own documentation sufficient to train new programmers in how to use it. So those companies could produce top quality Intellivision games, too.
But then companies started coming on the scene that did not have any ex-Mattel or ex-APh people, or anyone from Imagic or other places that had their own documentation on the ROM. These companies most likely started with the hardware documentation (both the CPU and the graphics system in the Intellivision were off-the-shelf General Instrument parts), and then reverse engineered just the boot code from the ROM to figure out what it expected to find in a cartridge. Then they handled everything themselves. This made it quite a bit harder to do a top-tier game.
There were also console makers who wrote games for competing consoles. Mattel did games for Atari and Coleco consoles. Coleco did games for Mattel and Atari consoles. I don't remember Atari doing games for other consoles, but Wikipedia claims they did Centipede and Defender for Intellivision and Colecovision, and Pac-Man for Intellivision and Galaxian for Colecovision. The console makers, unlike the small, poorly funded companies from the above paragraph, had the resources to properly reverse engineer the ROMs of other consoles and then use them in their games.
A very large fraction of the games from this last group of companies were pretty bad. They were often rushed, because these companies were often not well funded--they had to get their game done and out fast. That also meant they could not get review copies to magazines with enough lead time for the reviews to come out before or concurrent with the game's release. The market went downhill rapidly at that point.
Anyway, I find it quite plausible that if the industry had stayed with each console only having games written by its own associated developers, the competing console makers, and a couple big third parties (Imagic and Activision), the crash could have been avoided.
Actually, at the time, I was programming for what would have been the next-generation Intellivision, which at that point was just a big box containing the CPU and a prototype of the next-generation Intellivision graphics chip implemented as several wire-wrapped circuit boards full of 74xx or equivalent discrete logic chips. We could leave work at night, get back in the morning, and find that the hardware people had pulled an all-nighter and changed the design of the graphics, massively breaking all our code...except for Hal Finney's code, which always managed to need just a few tweaks no matter how radical the hardware change.
They were actually called "moving objects", not "sprites", in all the Intellivision documentation and code, but I'm going to use "sprites" because that is what nearly everyone else calls hardware moving objects.