It's largely about cost, and the desire of the USB forum to ship a single "does everything" protocol that keeps a large chunk of the industry from choosing to stay on some old version of the spec forever.
Tons and tons of manufacturers of cheap low end devices out there aren't willing to replace a $0.10 USB controller with something that costs 30x as much when they have no need for the high-end speeds of Thunderbolt 3, fast charging, DisplayPort alternate modes, etc. Consequently, the spec can either contain a lot of optional features they can choose to ignore, or it can mandate support for all of the 40 Gbps, active cable, fast charging, etc. options, and watch as devices partition themselves between the few that need all of that stuff, and the majority that ship simple micro-USB 2.0 hardware.
If they did the latter, then USB 4 is essentially pointless: that's the exact situation we were already in 5+ years ago, except the cheap side was called "USB 2" and the expensive side was called "Thunderbolt". All we've done is rename the expensive side to "USB 4".
If they do the former, they ship a "spec" so full of options that it's already nearly impossible to tell exactly what devices or features any given USB C port will support without recourse to a specifications sheet.
Fundamentally, "one port and protocol to rule them all" is probably just a bad idea. Trying to serve the needs of both cheap slow devices and expensive fast ones puts the forum in a contradictory position, and the only "benefit" from combining the standards that serve each niche is opaque ports and cables whose compatibility consumers can't easily reason about.
I disagree. The situations are:
1) Ports are the same, protocols match: works.
2) Ports are the same, protocols mismatch: won't work; some user confusion.
3) Ports are different, protocols match: could work, but won't without an adapter.
4) Ports are different, protocols mismatch: won't work. Adapters might exist, but those won't work either.
I'd much rather deal with situation 2 than 3 and 4. Display adapters already have situations where you can get adapters to cobble together DVI and VGA ports but still fail to get a proper connection, so it's not really a fix anyway.
The USB foundation should just come up with better branding around what kinds of things a port supports. No reason we can't have a Thunderbolt icon next to a USB port. It's strictly better than a new physical port, IMO.
This is great for backwards compatibility, but can confuse users. You could have a TB3 drive attached to a TB3 computer by a USB cable with type C connectors that only operates at USB 2 speeds. It will silently fall back to a very slow transfer.
I don't know how the committee could have done much better, though; this kind of fallback is important. Mandated labeling would have helped a lot.
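On Linux, at least, you can check what speed a link actually negotiated after the fact. A minimal sketch in Python (assuming the usual sysfs layout; the "speed" attribute reports Mbit/s):

```python
# Print the negotiated link speed of every USB device via sysfs (Linux).
# Values are in Mbit/s: 1.5, 12, 480, 5000, 10000, 20000.
from pathlib import Path

for dev in sorted(Path("/sys/bus/usb/devices").iterdir()):
    speed = dev / "speed"
    if speed.is_file():
        name = dev / "product"
        label = name.read_text().strip() if name.is_file() else dev.name
        print(f"{label}: {speed.read_text().strip()} Mbit/s")
```

A TB3 drive showing up at 480 here would at least expose the bad cable, but nothing surfaces that to the user unprompted.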
From pictures on Amazon, a lot of cables being advertised as Type-C USB 3 are actually missing the 'SS' labelling, despite being advertised as 'SuperSpeed' (i.e. 5 Gbps) in the description.
 https://www.amazon.co.uk/Anker-PowerLine-Durability-Devices-... - in this case, it currently provides a link "There is a newer version of this item" which links to a USB2 version(!).
Even a simple micro-USB on phones needed a pop-up to select what functionality the user wants to activate (charging, file transfer, host mode, debugging).
The problem is not new, but most software makers are stubborn miscreants that hate good UX (simplicity, discoverability, accessibility / introspectability, extendability). They usually champion one aspect to the total detriment of the others - eg GNOME3, phone OSes.
Why would it be silent? The computer would hopefully show a small notification that the transfer is happening more slowly than it would with a better cable.
If the computer can identify the device’s capabilities (using some kind of bus command, I’m not very familiar with the USB protocol) separately from speed testing the cable+device combination, no reason why it couldn’t work for USB 2/3/4.
Of course, this approach relies on the device accurately reporting its capabilities, which may not be the case for the more cheap-and-cheerful gadgets.
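For what it's worth, that bus command exists: every device answers GET_DESCRIPTOR with, among other things, the USB version it claims to support. A sketch using pyusb (my choice of binding; any libusb wrapper would do):

```python
# List each device and the USB version it *claims* (bcdUSB, binary-coded
# decimal: 0x0210 -> 2.1). "Claims" is the operative word for cheap gadgets.
import usb.core  # pip install pyusb

for dev in usb.core.find(find_all=True):
    major, minor = dev.bcdUSB >> 8, (dev.bcdUSB >> 4) & 0xF
    print(f"{dev.idVendor:04x}:{dev.idProduct:04x} claims USB {major}.{minor}")
```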
In fact, my ThinkPad X1 Yoga has two USB-C ports with Thunderbolt icons next to them. They support USB 3.1 Type-C Gen 2 with Thunderbolt 3 (including DisplayPort).
How many PCI-E lanes does each port get?
Do the ports share the channels/lanes?
Do they both support Power Delivery? How many Watts does the thinkpad need to charge?
Even with 'clear' labelling, it can still be a crapshoot as to the full capabilities of the device.
Right now I have an Uptab Mini-DisplayPort adapter in one port, and a StarTech full size DisplayPort adapter in the other. The Uptab adapter has its own USB-C input for power, so that is where the power supply is connected.
The mini and full size DisplayPorts are each connected to a 4K monitor, and I use all three - the two externals and the ThinkPad's WQHD. This all works great when the Uptab is in the rearmost USB-C port and the StarTech is in the other port. But if I switch the two around, one of the monitors goes into low resolution mode!
This kludgy configuration is just because these were the adapters I had on hand. There are a number of adapters that plug into a USB-C port and give you two mini-DisplayPort outputs - I have been meaning to try one out.
And therein lies the problem.
Used to be, we could call a USB port a USB port. Everybody knew that if the computer had the port, it could work with the peripheral; if the computer didn't have the port, it would not work with the peripheral. If the peripheral required different functionality then it required a different port that was equally straightforward and obvious.
Now, it's all supposed to run over USB - at least, over some USB. Consumers go out and buy a device and then are confused when it doesn't work. They know their computer has a USB port, but that's not enough. The port no longer represents a definite protocol, or even a definite power supply capacity. Are they supposed to check to see whether each of their USB ports says "USB 3.1 Type-C Gen 2 with Thunderbolt 3 (including DisplayPort)" next to it (which it doesn't)?
That pin is supposed to use data signals of less than two volts. The nyko charger uses nine volts. The switch has no over-voltage protection on that pin, and breaks at more than six volts.
The part about a slightly thinner plug isn't relevant at all.
The only outliers are moronic implementations like Nintendo, where they ask for capabilities, receive the charger capabilities and then just start pulling power at hardcoded levels even if the charger doesn't specify it supports those power levels.
But there's nothing you can do against that - even with a different port shape, a broken implementation can still fry your device.
I infer your point: the shape of the connector in theory is decoupled from risks in power delivery... but in reality the prevalence of cheap knockoffs and "moronic implementations" makes me question the wisdom of combining [potentially dangerous levels of] power delivery with data transfer.
Why would adapters necessarily not work in this case? It looks like you're claiming that translating one protocol to another is always impossible, but that can't actually be the idea... right?
In that case, you've described two situations:
1. Ports are the same.
2. Ports are different.
This doesn't seem like a particularly fruitful analysis.
The adapter is the downside. And there's no guarantee that the perfectly suitable adapter actually exists.
It matters little what subset of USB any given device supports. No reasonable person will expect, say, high-speed video from a cheap mp3 player, just because it's the same USB as their screen. However, it's perfectly reasonable to use the cable that came with aforementioned mp3 player to connect a computer to a screen, and expect the video signal to be carried through.
I don't see a good reason why cables shouldn't be required to carry everything USB supports.
> It matters little what subset of USB any given device supports. No reasonable person will expect, say, high-speed video from a cheap mp3 player.
No, it's both. Laptops ship with one USB-C port that will carry DisplayPort alternate mode, and another, equivalent-looking port that won't, because it's not wired to the GPU. You have to consult the manual to figure out which port or ports are wired for video.
Your Nintendo Switch will work with a DisplayPort demuxing dock to output an HDMI signal, but put the forthcoming Nintendo Switch Mini, with an identical-looking port, in the same dock, and no video will output, because the Mini lacks a DisplayPort crossbar mux IC connected to the port and only uses it for charging and USB-C peripherals. This is a frequent source of complaints on gaming forums.
Laptops ship that will charge off of one USB-C port but not another, because the manufacturer saved a few cents by only wiring one for the optional high-speed charging spec.
Etc. Etc. Etc. While some things obviously don't support some USB 3 or USB-C optional features (obviously my USB-C keyboard does not output DisplayPort), consumers are often frustrated in other cases when their intuition and the equivalent looking ports suggests something should work when it doesn't.
The solution is to mark ports with symbols. It already happened with USB 2.0: I remember charging and non-charging ports, with a label on the charging one. Furthermore, people quickly learn which port does what on their laptop.
On the other hand, is this phone I'm writing on capable of video out? No way to know unless I google the spec. I think it is, but I don't remember. It's not like I'm checking that every day, or every month.
that one cable to bind them all.
Have you seen the prices for VGA cables, cat5, etc. at Walmart, best buy, etc.? They absolutely will jack the price up because they know the only people who buy that stuff there are ignorant or too desperate to wait for shipping.
My work desk has a built-in USB 3 Type-C port, and says it supports USB Power Delivery. The desk is plugged into the wall. Excellent, I can use that port to charge the MBP, right? No! Plugging into that port will cause OS X to believe it's connected to a power supply — so the battery is "charging" according to the OS. Even more fun is what happens after leaving the laptop in this setup for some time: there's no visual indication in OS X that the battery is about to die, and AFAICT, OS X would happily take no action, since it believes it's connected to a power supply¹.
Why? Because the USB "power delivery port" only delivers 10 W. (A normal MBP power adapter, connected to a wall, can deliver something like 80 W. How much the laptop needs depends on use, but in my case, it's >10 W.) And AFAICT, the desk is compliant with the letter of the spec: USB PD ports are allowed to negotiate whatever power they can deliver. PD just lets a device draw as much as possible/needed; it doesn't guarantee any minimum.
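The arithmetic is brutal in its simplicity. A toy calculation (the 10 W figure is the desk's; the draw and battery numbers are hypothetical):

```python
# Net drain when a compliant-but-weak PD port can't cover the laptop's draw.
port_watts = 10       # what the desk's port negotiates (e.g. 5 V @ 2 A)
draw_watts = 35       # hypothetical working draw of the laptop
battery_wh = 55       # hypothetical battery capacity in watt-hours

net_drain = draw_watts - port_watts       # 25 W still comes from the battery
print(f"Empty in {battery_wh / net_drain:.1f} h while 'connected to power'")
```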
Why anyone puts a max 10 W port on a desk basically hand-tailored for the SV 4-ft desk open office "experience" is beyond me, though.
¹and, technically, it is.
AFAIK, macOS is aware of three different states: 1) no power connected, 2) connected to power (but not charging), 3) charging battery.
2) is indicated by the black background of the battery charging icon, and 3) is indicated by a white background of the battery charging icon.
In the case where the power source can't deliver enough energy for charging (your case), it is in state 2); and I'm pretty sure macOS always takes action when the battery is about to die, regardless of the state. It'll back up the RAM when the battery level is dangerously low, shut itself off, and restore later.
So, you're correct, and I actually did not know this when I wrote the post you've replied to.
But the point still stands: neither icon 2) nor 3) is going to alert you to the issue; both are normal icons. 2) appears once the battery is full, since then you're connected to power but not charging. That is, both icons also appear during a normal, healthy charge cycle, so neither indicates a problem.
> I'm pretty sure macOS always takes action when the battery is about to die unrelated to the state
As I recall it, it was awfully close. But that's not to say that there wasn't still time, and that the OS couldn't have acted. (I didn't drive it to empty to find out!) So, it could be that I'm wrong on this point.
If the people are the same, you have answered your own question. Open Offices are not a bright idea in the first place; not sure why you would be expecting anything other than similar ideas :)
Seems simple to me: it costs a lot less to put a cheap, crappy 10W port there, and then they can advertise that their overpriced desk has this fancy feature. The idiot managers at tech companies (probably non-technical HR types) who make the purchasing decisions don't know any better.
When buying cables from a reputable manufacturer there doesn't seem to be much of a problem. There are only two figures that matter: the speed they are rated for and the amperage. A good manufacturer will state both clearly on the package.
But in reality, if you don't want to pay $50 that Apple or Belkin charge for a cable, it's a nightmare. (I assume manufacturing cost couldn't be more than a 10th of that.) I've yet to buy a third party USB-C cable that has lasted for 100 insertions (they are supposed to be rated at 10K insertions) - and I must have bought 20 of them by now.
I have no idea how to buy a reasonable quality USB-C cable. I've resorted to treating them like consumables - buying them 5 at a time, and ensuring I always have a few spares in the house. It's doubly frustrating because the OEM cables that come with the devices are invariably good, so they are out there, I just don't have a clue how to find them.
There is no level of manufacturing scale that will make an active signal cable cheaper than an equivalent-length passive one, and cable manufacturers, who live and die on razor thin margins, are never going to be convinced to forgo the cost savings of building cheaper passive cables when their client's devices don't even use the higher speeds the active circuitry is required for.
> to forgo the cost savings of building cheaper passive cables when their client's devices don't even use the higher speeds the active circuitry is required for.
That's why I think standardization should disallow this very attempt at costs savings. A given device may only need a passive cable, but as a user, I want to be able to use that cable with a different device. Moreover, cables are often bought separately (e.g. replacements). It's already a huge problem with USB 2/3 - buying a cable is a lottery wrt. transfer rates and charging amperages.
(The last time I lost that lottery I happened to be in China, so I went to a local seller with a voltage measurement app, and started testing cables they offered one by one. After finally finding a cable that could push proper 2+ amps, I bought something around 60 of them, because back home, everyone sells USB cables packaged in hard plastic, so I wouldn't even be able to run this test.)
> That's why I think standardization should disallow this very attempt at costs savings.
There's a somewhat famous Common Lisp FAQ along the lines of "Why didn't the standardization committee just force implementers to do X thing?"
And the answer is: your premise is wrong. Standards bodies don't have the power to force anybody to do anything. If implementors don't like your standard, they'll just ignore it. Standards live or die by their ability to raise consensus.
If Generic Device Manufacturer #4823738 looks at the USB 4 standard and decides there isn't a big market for USB peripherals whose 2 meter charging cable (say, for a gamepad that charges via a fairly long 5 W USB cable, like a gaming console uses) runs $75 because it requires fairly heavy gauge wire to additionally support (unneeded by the gamepad) 100 W charging and has to contain (unneeded by the gamepad) 40 Gbps active signalling circuitry, they're just not going to implement the bloody USB 4 standard. They're not going to implement it if there's a $1 to be saved by going some other route, honestly.
Instead of the "universal protocol" dream dying on the rocks of a hodge-podge of subtly incompatible cables, you've simply killed it by convincing manufacturers to ship USB 2 devices until the end of time, instead.
The standard committee can impose onerous, financially punitive requirements on anything calling itself USB 4, but they can't then make manufacturers adopt USB 4, so in the end they can't solve this by forcing anyone to do anything, which is why the entire idea of "one universal protocol" is hamstrung by its own internal contradictions.
I think the solution is to improve branding. When I look at the USB-C cables in my bag, I can't tell at a glance which ones are full-featured: one of them has a tiny logo that maybe indicates something like that, but it's not obvious or prominent. I tend to end up plugging in the wrong one and discovering that the device isn't charging, performing as well as expected, etc. If the USB spec defined consistent, prominent branding for full-featured cables, this would be much less likely to occur. (yes, I know I should replace the inferior cables, but I only use them when I work away from home, which is infrequent enough that it hasn't crossed the threshold to be worth the time)
Any company that tries to compete by upgrading all their output to the top 25% of the market, paying materials costs at the top 25% rate across the whole production run, is committing economic suicide. All it takes is one competitor to not do that and only address the low end, and the whole strategy collapses in ruins.
That's why I was talking about putting this in the standard, so that only one type of cable is considered compliant. This way, all competitors would have to upgrade too.
Prices are to some extent arbitrary. I'm pretty sure people with limited income would be able to buy devices with slightly more expensive cables just fine, especially if you remove the backpressure of a reduced capability alternative keeping the price higher.
Why is it so bad if they stay on the old version? AFAIK, USB already bends over backwards to support every older version of the spec. IMO, it would be great if USB 3.x supported all the latest extensions, USB 2 were still kinda fast, and USB 1 could still be useful for that really old device you have sitting around (or if you just want to be able to trickle-charge a phone with the only micro-USB cable you have around).
Trusting vendors to be honest in declaring capabilities is not always the best of all possible approaches, historically.
Going against that might be possible, but it strikes me as perhaps difficult to get implementors to go along with.
USB4 is tremendously more dangerous if the cable does something really wrong.
I'm suggesting that they could mostly do fine with lying blatantly about capabilities in hardware. It would take very close analysis to find that it only had some of the capabilities advertised.
What's wrong with that? Just create a little spec addendum for USB 2 that officially allows the use of modern physical connectors. It would be so much better in any metric other than deliberate customer confusion than a big, bold "USB $some_high_number" claim with practically invisible fine print admitting that it only supports some obscure subset of the new specification that happens to be exactly USB 2.
The partition happens no matter what.
Yep. Significantly, their incentives are the exact opposite of mine as a consumer.
Something was systematically different between the USB-IF and the PCI-SIG, and whatever it was persisted over many years... and likely continues to this very day, if outward appearances are any indication. If anybody has insight into what makes these groups different, or even just speculation, I'd love to hear it.
USB is a great standard. Or at least, was until 2.0. It was ahead of its time in performance and the same interoperability principles carried it forward for over 20 years. If you compare the first USB standards to other standards of the era, the amount of thought that has gone into it is fantastic.
The fact it has no rivals even today is telling.
Compare it to the clusterfuck that's Bluetooth, which took well over a decade to get to a point where it's kinda reliable (and it can still vary between implementations).
Unfortunately USB became too complicated with the various modes and backward tidbits it now supports. Also, the rates it now supports have taken it to a whole new level of implementation complexity (ICs, board design, power options, etc).
Fortune had nothing to do with it.
USB's deliberations on packets were distinctly underdeveloped. PCI's were well developed. USB got it wrong. PCI got it right. The compatibility complexity explosion in USB and lack thereof in PCIe is a direct result.
This pattern continues across every single stratum of the standard that I dug into. Why?
(Every single stratum except the app layer. As you note, PCIe does not operate there and USB does.)
> It was ahead of its time in performance
I thought USB was always in the value segment and never competed on performance. That's the stated intent and that's what I remember: it was never the fastest external port on my computer.
> If you compare the first USB standards to other standards of the era, the amount of thought that has gone into it is fantastic.
My entire complaint compares it to PCIe and notes the comparative lack of thought.
> Compare it to the clusterfuck that's Bluetooth
Agreed, Bluetooth is worse. Maybe clusterfuck is the default state of affairs.
The question, then, is if there's anything to be learned from the USB/PCIe dichotomy. Is there an Engineering Steve Jobs keeping the PCI-SIG in line? Does it have fewer members? Is having one dimension on which you do not compete in the value segment critical to maintaining overall quality? I feel like there's a case study to be had, but I don't know enough to rank the causes or generalize with any certainty.
There are also some dubious technical decisions, even in USB 2.0. Toggle bits -- sequence numbers that have been degenerated to a single bit -- are a frequent source of headaches for device implementors, but don't provide the intended integrity improvement. The VID/PID design seemed like a good idea in the USB 0.9 days, but it soon became clear that a large number of implementors would be making very different devices based on the same silicon, and the standard never evolved. Instead of developing a proper Battery Charging device class, USB-IF gave their seal of approval to the goofy schemes the wallwart manufacturers had come up with.
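To illustrate the toggle-bit headache, a toy model (not real stack code): both sides track a single bit, retransmissions are detected by a toggle mismatch, and a desync — e.g. firmware forgetting to reset its toggle after a cleared stall — eats data silently, because the "duplicate" is still ACKed.

```python
# Toy model of USB 2.0 data toggles. A packet with the wrong toggle is
# treated as a retransmission: dropped, but ACKed anyway.
class Endpoint:
    def __init__(self):
        self.expected = 0
        self.received = []

    def accept(self, toggle, data):
        if toggle != self.expected:
            return "ACK (silently dropped as duplicate)"
        self.received.append(data)
        self.expected ^= 1
        return "ACK"

ep, host_toggle = Endpoint(), 0
for pkt in ["a", "b", "c"]:
    print(pkt, ep.accept(host_toggle, pkt))
    host_toggle ^= 1

# Host resets its toggle (e.g. after ClearFeature(ENDPOINT_HALT)), but the
# device firmware forgets to. The next packet vanishes with no error at all:
host_toggle = 0
for pkt in ["d", "e"]:
    print(pkt, ep.accept(host_toggle, pkt))
    host_toggle ^= 1
print("received:", ep.received)  # 'd' is gone, yet the host got an ACK for it
```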
And "USB as she is spoke" is worse: high-power bus-powered devices claiming to the self-powered in their descriptors, Hi speed devices with ceramic resonators, hubs that source current into their upstream ports, devices with USB A receptacles and A male to A male cables, USB-RS232 bridges that require drivers despite the existence of the CDC ACM device class, crimped shield termination on A to B cables that will satisfy EMC standards at the time of manufacturing but not a month later when the cable reaches the consumer... all bearing the USB logo... and USB-IF does nothing to address any of these situations, only caring about getting their annual blood money from licensees.
Firewire was better
Firewire was faster but it wasn't better. It could only handle the minority usages (external storage), not the majority ones (mice & keyboards - which were always wired back then).
Firewire also had hellish compatibility, whereas things like USB HID was amazingly plug & play in an era where that really didn't exist. And early on in Firewire's life it made the mistake of changing physical connector and not being backwards compatible.
I'm not sure why you say it could not handle mice and keyboards - pretty sure it could; it just was never used for them due to license costs.
> Firewire also had hellish compatibility, whereas things like USB HID was amazingly plug & play in an era where that really didn't exist. And early on in Firewire's life it made the mistake of changing physical connector and not being backwards compatible.
I never experienced compatibility issues with it. The S800 standard was backwards compatible, sure they used different connectors but you could have passive cable that converted S800 port to S400 and plug that into a computer that only supported S400.
> USB requires the presence of a bus master, typically a PC, which connects point to point with the USB slave. This allows for simpler (and lower-cost) peripherals, at the cost of lowered functionality of the bus.
Firewire could power a 2.5" spinning hard disk. (usb could not provide enough power)
You could hook two macs together with firewire, and turn one into a hard disk via target disk mode. (peer to peer)
Firewire could transfer video from cameras without dropping frames. (guaranteed bandwidth)
On the other hand, USB was always the cost king.
I believe there was a $1/chip royalty for firewire, and that limited its growth.
USB in practical terms has caught up.
USB was made mostly by Intel, and they intentionally made it "dumb" so that the CPU had more work to do, in order to push high-performance CPU chips. But of course, the big factor was that license cost, which made Firewire devices cost more, whereas USB was very cheap to implement.
Did it? I know it was not used for it but I think this was more because of the patent license cost and the relatively low cost of mice and keyboards which made the license cost of $1 a big chunk of the total cost.
> It was much more specialized than USB
Not sure what you mean by this, if you mean less mainstream, yes it was less mainstream, still better. Support for DMA, memory mapped devices, daisy-chaining, peer to peer. It really was revolutionary and still is.
> You don't have direct interrupt lines like in PCI.
Surely the people who've designed USB could have opted to add an interrupt signal if they had deemed it necessary. Sure it would've added one more signal but given that there were only 4 pins on the original USB it doesn't seem overwhelming.
That being said, I agree with you that dismissing polling wholesale without digging deeper is quite silly. It can be a problem when it increases latency or wastes CPU cycles, but as far as I know latency isn't much of an issue in most uses of USB (even for keyboards, mice, and controllers it's usually dwarfed by the latency of the video pipeline), and I doubt anybody handles the USB phy in software, so CPU usage shouldn't be a worry.
"Polling" doesn't involves the host CPU, adds no latency. It is done by the USB controller, which sends poll packets to the device. This is all a philosophical decision by USB which is master/slave and doesn't support anything asynchronous by the slave.
The master can choose to ignore certain slaves on the bus if it desires, which isn't always possible with an asynchronous slave. Recall that many devices dangle off a single USB chain.
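To make the scheduling concrete: the device advertises how often it wants to be polled in its interrupt endpoint descriptor (bInterval), and the host controller sends IN tokens at that rate on its own, with no CPU involvement until data actually arrives. A pyusb sketch (my choice of binding) that dumps those intervals:

```python
# Print the polling interval of every interrupt IN endpoint. bInterval is
# what the host controller uses to schedule poll packets (in frames at
# low/full speed; an exponent of microframes at high speed).
import usb.core, usb.util  # pip install pyusb

for dev in usb.core.find(find_all=True):
    for cfg in dev:
        for intf in cfg:
            for ep in intf:
                if (usb.util.endpoint_type(ep.bmAttributes)
                        == usb.util.ENDPOINT_TYPE_INTR
                    and usb.util.endpoint_direction(ep.bEndpointAddress)
                        == usb.util.ENDPOINT_IN):
                    print(f"{dev.idVendor:04x}:{dev.idProduct:04x} "
                          f"EP {ep.bEndpointAddress:#04x}: "
                          f"bInterval={ep.bInterval}")
```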
I think it wastes power. On a setup with interrupts, you could clock gate the host controller and if the device needs to say something it can raise the interrupt line and wake up the host. I don't think there's a way to clock gate a USB1 controller and then do e.g. wake on LAN.
The reason is that PCI is multi-master by design and PCIe uses duplex packet-switched serial links. There is no master who decides who gets to use the bus at given moment (for parallel PCI there is, but it is relatively simple circuit that is conceptually part of the bus itself, not of particular device connected to it).
There are a lot of very smart people in the world that lose sight of simple concepts and their value, however.
My company and another had opposing solutions to a problem, which made opposite tradeoffs and opposite implementations (ours worked better for a more HW-focused solution, theirs for a more SW-focused solution)
How does one choose? In the end, it's backroom negotiation, diplomacy, and compromise. In other words, politics.
A committee of smart people will find the most elegant solution to fuck up a problem.
There are some smart people there; unfortunately, some are old-timers in large orgs - they were pushed there "to do least harm".
They couldn't have cared less how bad some of the proposals were, as long as they were in the required format.
We tried to fight a bit, but it was a lost cause :\
I’m convinced it’s so they can advertise as supporting USB4 when they really only support the slowest (aka cheapest) method
Vendors in turn can charge more.
1 - non-mandatory labeling. What does a given port support? What about a given cable?
2 - the name "USB4 Gen 3×2". Honestly I have never had the faintest idea what any of the various absurd names invented by the USB IF might mean. Superspeed? High speed? USB 3.1?
USB 3.0 was renamed to 3.1, but that was confusing, conflicting with USB 3.1, so it was rerenamed to USB 3.2 Gen1, so that people understood that 3.0 was ACTUALLY the first generation of USB 3.2
But that caused problems with USB 3.1, which was faster than USB 3.2 Gen1, so it was renamed USB 3.2 Gen2, so it was crystal clear that it was faster than USB 3.0/3.1/3.2 Gen1.
But that caused problems with USB 3.2, because 3.2 is just two lanes of USB 3.2 Gen2, so 3.2 was renamed to USB 3.2 Gen2x2 so people knew it was TWICE as good as USB 3.2 Gen2.
The problem now is that USB4 eliminates all that clarity, so right now the big marketing push is to clarify it as USB 3.2^2 Gen1: The Reckoning.
To resolve the conflicts of historical naming with USB 1 and 2, those will be renamed USB 3.2 Gen √0 and USB 3.2 Gen √-1/pi.
First there was USB 3.0. Technically the standard included older speeds, but people called the new one "3.0" and all was good.
Then 3.1 came out. But what about the poor manufacturers with 3.0-speed ports? Well, someone decided they could call those ports "3.1 Gen 1".
And it was all downhill from there. 3.1 Gen 1 became 3.2 Gen 1, and now becomes USB4 Gen 1. They added more "Gens", even ones where the word "generation" makes no sense. Now the number tells you nothing about speed, and there are two separate nomenclatures for speed that are both confusing.
And I'm still not sure if they did or did not ever implement Gen 1x2...
(It being USB4 instead of USB 4.0 is another stupid poke in the eye on top of everything else.)
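For anyone trying to keep score, here's the rename chain as I understand it (same signalling, new sticker each round):

```python
# One signalling rate per line; every name in the list has been used for it.
renames = {
    "5 Gbps":  ["USB 3.0", "USB 3.1 Gen 1", "USB 3.2 Gen 1"],
    "10 Gbps": ["USB 3.1", "USB 3.1 Gen 2", "USB 3.2 Gen 2"],
    "20 Gbps": ["USB 3.2", "USB 3.2 Gen 2x2"],
}
for speed, names in renames.items():
    print(f"{speed}: " + " -> ".join(names))
```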
This. I wish to nominate the lot of it for BJAODN
* 5 Gbps transfer rate: “SuperSpeed USB”
* 10 Gbps transfer rate: “SuperSpeed USB 10 Gbps” - applies to both Gen 2x1 and Gen 1x2.
* 20 Gbps transfer rate: “SuperSpeed USB 20 Gbps”
I have never been able to differentiate between "Full Speed", "High Speed", and "SuperSpeed", though I guess they are each different from "Low Speed". Of course, there are several speeds all labeled "SuperSpeed+".
USB 1.1 supported low speed (1.5Mb/s) and full speed (12Mb/s). USB 2.0 added high speed (480Mb/s) but didn't make it mandatory; you can have a USB 2.0 compliant keyboard that only supports low-speed or full-speed operation. USB 3.0 did the same thing with the introduction of SuperSpeed (5Gb/s).
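Laid out in one place (speeds as in the parent comment; the names carry no ordering on their own):

```python
# Marketing name -> bit rate in Mbit/s. "Full Speed" is the second slowest.
usb_speed_names = {
    "Low Speed":   1.5,     # USB 1.x
    "Full Speed":  12,      # USB 1.x
    "High Speed":  480,     # USB 2.0 (optional even there)
    "SuperSpeed":  5_000,   # USB 3.0 (also optional)
    "SuperSpeed+": 10_000,  # USB 3.1 Gen 2; 20_000 also ships as SuperSpeed+
}
```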
What they should have done was keep making point releases on the 1.x, 2.x etc. specs when they added things like On The Go and Power Delivery, but keep the major version number representing the maximum supported link speed. By that scheme, we would have all 3.x devices capable of 5Gbps and no more, and what's now being introduced as USB4 would be more like USB 6.5 (if the minor version numbers were kept in sync across the speed grades). But this would deprive many companies of marketing opportunities, and instead force them to advertise USB 3.x support years after USB 4, 5 and 6 come into existence.
It's so fast that consumers will have plenty of time to spend the extra 2 seconds pronouncing the whole "3x2", "Gen", and the rest of the marketing gibberish. To say nothing of the fine connotations of the "SS" abbreviation (as in the SuperSpeed logo) in Europe, especially in Poland. /s
Jokes aside, what was the reason for using the 3.1 name for the USB-C connector? It looks very, very different, and transfers power very differently, from the early rectangle-shaped 3.0 connectors. And now USB-IF serves up the same drama again with another ambiguous name. I'm glad I'm a techie, but I believe 80% of people will end up wondering what they really need to connect a device to their laptop.
I wish we could completely drop "SuperSpeed"/"Gen 2" and use a much simpler name. Such a name shouldn't follow the naming WiFi has now, like WiFi 5 or WiFi 4, because even if we include an identifier for people more interested in the technical standard, like WiFi 5 (802.11ac), you still only have part of the information: 802.11ac operates at various power levels and stream counts, and WiFi 5 (802.11ac Wave 2; 4 streams @ 1024-QAM) works very differently from the former example. However, I really like the more recently introduced naming like AC2600 or AC5300. When someone mentions "WiFi AC 2.6k", it tells me a lot more about the given device, whether I'm a techie or a consumer. To use a database analogy: AC2600 could be a unique indexed column (even a primary key!) - it falls into a group within a defined standard, has specific parameters, etc.
Unlike WiFi, a USB type is easily identifiable by parameters like possible power delivery, data transfer rate, and connector type (shape, size). Why can't USB-IF introduce similar naming that is clear for everyone to recognize? USB standards are not iPhones; they don't need a fancy name to be better than the previous generation.
// Actually, I really got used to the 802.11a/b/g/ac naming (when those standards had very distinct features); it has really rooted itself in my brain, so it's very strange to hear myself say "WiFi 5".
Turns out: my worst nightmares just came true. We now have USB running over Thunderbolt. I don't even know where to start facepalming about this clusterfuck.
Take DisplayPort for example: up to now we had the DisplayPort alternate mode for USB-C, which just worked for most devices with USB-C. Now we also have the tunneling of DisplayPort over Thunderbolt (which Thunderbolt-enabled Macs have offered since Apple introduced Thunderbolt, and which they use when connecting to Thunderbolt-enabled monitors). Both methods seem to be optional for USB now, so in the future a host and a peripheral might both support "DisplayPort over USB" yet fail to talk DisplayPort to each other, because one supports only the USB-C DisplayPort alternate mode and the other supports only DisplayPort tunneling over USB4.
I guess I'm going to facepalm for a few hours now, while I read the specification in more detail.
That sounds good from an implementor's point of view, but it opens another area of possible incompatibilities when connecting USB4 and Thunderbolt 3 devices. Imagine both devices support tunneling DisplayPort signals, but the USB4 device doesn't support the Thunderbolt 3 compatibility layer: as in the example before, they won't be able to establish a DisplayPort connection, while other USB4 devices that offer Thunderbolt 3 compatibility will.
DVI ended up being kind of a mess with the different connector types and ultimately fairly limited digital bandwidth.
Displayport is pretty good except everything Displayport costs 50% more than it should.
I know that many are backwards compatible, but they aren't forward compatible -- my MicroUSB cables don't help at all with USB-C, and type A and B connectors are still around.
By the time all of my micro USB devices have disappeared from my household, I'm sure a new thing will have emerged that's incompatible with USB-C, and so I'll still have to have several different cables around.
What really chafes me is MiniUSB. It was superseded by microUSB 12 years ago, but it STILL turns up in brand-new devices. I recently bought an action cam, a dashcam, and a retro input device converter that all had MiniUSB. I only had a single MiniUSB cable for my PS3 controller so I had to go out and buy a bunch of obsolete cables.
Reading threads where the independent maker of the retro converter was defending his choice: "I find MiniUSB is more sturdy". So now everyone buying his thing needs to stock another kind of cable for a device whose socket isn't even going to see much wear and tear. (And MiniUSB being more sturdy is debatable - the socket is designed for far fewer insert/remove cycles.)
Also, a lot of devices like the Gopro and Tomtom came out when Mini was current, and kept compatibility for years to respect their customers who had invested in accessories and stuff, and didn't want all that to become useless if they upgraded to the new unit. I appreciate that.
Micro has rendered numerous devices irreparable, or not-economical-to-repair, because the device-side connector fails. Thermal cameras, audio adapters, countless cellphones. Thankfully my daily-driver phones tend to be Samsungs which keep their USB connector on an easy-to-replace sub-board, but I can't say so much for FLIR's finest. I've thrown out my share of flaky micro cables, too, but those are cheap to replace. It's the device-side failures that have rained on my parade for years.
Mini, on the other hand, has been bulletproof. Countless Beaglebone Black boards, FTDI adapters, logic analyzers strewn about my bench, my TomTom, my GoPro, even my old Blackberry was mini. And not a single one of them ever died because of a USB connector problem. (The magic smoke has escaped from more than a few, but that's another story entirely!) I've thrown out plenty of mini cables, sure, but the device-side connectors simply don't fail like their supposed design would suggest.
Regardless of their intent, I consider inclusion of a micro port to simply be a design flaw at this point. Mini or C is the way to go.
USB 1.0 - 1996
USB 1.1 - 1998
USB 2.0 - 2000
USB 3.0 - 2008
USB4 - 2019
Honestly, I have a new Dell XPS, and in my opinion it has the ideal port combination: 2x USB3, 1x HDMI, 1x USB-C. I'm covered for the past, present, and immediate future. The only loser is a handful of peripherals that plugged into Thunderbolt on my old MBP (basically an Ethernet and Firewire adapter).
- my VGA port is an HDMI port (with a cheap adapter)
- my wall outlet is a micro-USB port (with a cheap adapter)
- my Lightning port is a headphone jack (with a cheap adapter)
Adapter-based compatibility is so convenient.
> They're also USB1 and USB2, and USB3
> work with like every peripheral made for the last 20 years
Too bad there is no guarantee they will work with devices with Type-C ports, as the Type-C port is a mess and the standard says nothing about how to communicate the capabilities of a port. Can it do Thunderbolt? Who knows! Can it charge the device? Who knows! Can it run PCIe lanes? Who knows!
Communication of port capabilities is described in the spec; not sure about cable capabilities, though.
Here's what the Android prompt for this looks like: https://www.quora.com/Devices-can-charge-or-be-charged-via-U...
Taking my time? The computers work exceptionally well. USB 3.0 doesn't seem reason enough to replace them, and much less does USB C which is not compatible with any of my devices.
Shrinking these connectors has gone too far.
An argument I've heard against this is that a rotating connector wouldn't be electrically stable enough. But this is easily addressed - you could add a toothed collar to the plug preventing it from rotating, at the small cost of allowing a few dozen discrete orientations instead of a continuous infinity. Or better, just design the protocol to be tolerant of packet loss.
Analog audio technology was basically perfected 40 years ago with the Walkman. You couldn't develop a significantly better connector today, even the size was optimal for small portable devices.
Data transfer technology OTOH is still improving by leaps and bounds, and the old ports are too big for modern portable devices like tablets and phones.
NEMA 1-15: 115 years in service.
IPv4: 41 years in service.
IPv6: 24 years in service.
Seatbelts: 53 years in service.
Some things are better not replaced every 8 years.
Though there isn't a standard. Different manufacturers use different connectors, not to mention airplanes.
It is required to use a USB 2.0 port to get surround sound.
USB 3.x is also unreliable to no end. My external disks, all USB 3, only worked at full speed for a week or so. Then the cable dies and it's basically just as fast as USB 2.
So, 8 years of unreliability. What a terrible record.
The awesome thing about USB-C is all you need is a couple of USB-A to USB-C adapters, which are cheap, and bam—all your devices are now forward compatible.
And if you already have a USB hub all you need is one adapter.
The reason it is forbidden is because using two adapters you can convert a USB-C cable into a USB-A to USB-A cable. USB-A ports are not required to handle exciting and unexpected voltage on the wrong pins.
- Can I use friends'/family's chargers? Can I use public chargers?
- Are my cables going to become difficult to find? Can I pick one up at the corner store in the middle of the night because mine failed?
- What's going to happen when I need to interact with old hardware? Are converters available?
Compatibility problems don't arise in perfect conditions, when you're at your desk with all your own current equipment. They arise in all the edge cases you will have to deal with over the at least several years you have a device.
Intermediate hubs will downgrade bandwidth if they can't support the same speed as their upstream host so there will be an element of mystery there without clear markings.
A standard that is actually standardized in practice.
To this day I still don't know exactly how to buy a cable for my Raspberry Pi 4s. (Yes, more a USB-C than USB 3 issue, but you get my point - a standard that isn't standardized in practice fails at its raison d'être.)
Now, why that passed muster is a valid question.
If that's wrong, can you clarify?
It was an example of "layman has no idea which side is up"... but yes, the Pi is the culprit here, for not following a rather complicated standard.
It needs more simplicity.
Like the Pi issue - the cables marked active work while the ones not marked don't. Or vice versa? I don't recall. I've never seen any such markings - but apparently there's a distinction there that somehow interacts with the Raspberry Pi not having enough capacitors.
^^ See that paragraph of confusion? That's what standardization chaos looks like. Neither I nor, apparently, the Raspberry Pi Foundation knows what's going on.