The most important thing to know is what's optional in the spec.
That could make it great or a major pain. I'm referring to how many features the specification defines as "optional" for manufacturers, versus what the USB organization actually requires for its logo program.
This is why even USB-C is a nightmare. Consider a USB-C cable or device port. It could be charge only, data only, or include monitor support, and there are even more permutations and sub-features. Oh, you wanted charging and found one that has it? Do you know whether the power it provides is enough, or overkill? It could provide 15 watts, 100 watts, etc. Think you'll just google the spec page? Sure, manufacturers never omit any of these details or make any mistakes.
Whether USB4 means one thing with nothing optional (or at least a very small number of combinations) will probably determine how much you like it or how much it annoys you.
HP made it even worse with a laptop USB-C port that could technically be used for certain docking functionality, but they spread FUD and denied support for it for marketing reasons.
USB-IF is for manufacturers, most of whom want to do whatever the cheapest, quickest thing is. The user experience absolutely comes second to manufacturing cost and marketing convenience.
The naming confusion exists because cheap manufacturers don't want to be branded as the "lesser" product or be offering a "lesser" computer. The confusion aids them. They can ship a slightly cheaper computer or device but market it as USB 3.2 Gen whatever, despite it basically being a USB 2 device.
All features will be optional, because mandating features means the chip costs $0.02 more, the cable they slap in the box would be more expensive, etc., and they don't want to pay for it.
So you're left with a choice in this wonderful alternative non-optional world: either some expensive temperature-controlled oscillator with an accompanying nuclear power plant integrated into every mouse, with a dual-core processor just in case the host wishes to speak mouse-over-Thunderbolt and a 240V connector on the mouse just in case the host wants to charge from it, or mice living on a separate low-throughput bus where such requirements don't exist. We had that already; it was called the 1970s-1990s, it was an even bigger mess than what we have now, and it is exactly what USB's mandate is to avoid.
This isn't a new problem for USB. It has been a tri-modal specification from the outset, to cope with the completely different requirements of the peripherals it was meant to unify; these horrible recent complexity outcomes are just a natural extension of the early days.
It wasn't so long ago that every budget peripheral manufacturer outside of, e.g., printers and mice was forced to bundle expansion cards implementing custom busses just so you could talk to, say, their scanner. This was still the reality as late as 1995 or so. Here we didn't just have custom connectors on the back of the machine, but custom cards that had to be installed to provide those connectors. In the 24 years since 1995, outside of display interfaces I count only 2 major new busses to date -- FireWire and USB. That sounds like a success to me.
In fact, I would certainly be all for a few basic, well-thought-out combinations as mentioned:
>>one thing with nothing optional (or at least a very small number of combinations)
The problem is, that's not what USB-C/USB 3/whatever is. Its permutations must number in the millions at least, literally. That degree of complexity is simply not necessary to satisfy (forget 80/20) 99% of users. The 1% (like people here) would, as usual, have more specialized/flexible/technically oriented options.
It's not comparable to counting the number of buses or standards. It's more about the sum total of complexity and confusion within a generation of an ecosystem, especially when manufacturers could have better profits and simplify the equation quite a bit by looking at things longer term and as an ecosystem.
It's ironic that the "savings" you mention, which individual manufacturers are trying to achieve, are often a false economy, which becomes visible only when they are willing to take a step back and look at things more holistically.
Often what appears to be only benefiting another company...well you know, the old rising tide lifting all boats thing and so forth and so on...
You've also got cable lengths, which I don't believe are limited by the spec so much as by the technology. For example, you can buy 10-foot cables on Amazon, but I believe it's currently impossible to buy any 10-foot cable there that supports every feature the spec allows for.
Effectively this adds just as much complexity as something formally defined as a feature because end users have to deal with the additional variables just the same.
I'm really trying to avoid doing enough research and combinatorics to produce an exact number, but look, we're already in the thousands, which is ridiculous, and you know how quickly the numbers grow with only a handful of additional headaches added in.
We're also not counting edge cases like Apple's adapters that are supposed to allow an iOS device to charge while using USB accessories. That's another niche batch of secrets introduced with the 2018 iPad Pros: it only charges with certain combinations, not based on simple requirements like having enough power.
- Power Delivery supports some 5 different maximum values
- 3 different Battery Charging modes
- 5 alternate modes according to Wikipedia (C only)
- >10 different plugs (not an independent variable though)
- 6 different signalling rates
All of those are baked into the hardware layer. It gets worse when you realize that cables have to support the right combination.
It gets even worse when including things that piggyback on USB, like dual-purpose HDMI and USB ports, Thunderbolt, noncompliant chargers (Apple), Quick Charge, audio alternate mode.
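As a rough back-of-the-envelope sketch (multiplying only the counts listed above, which aren't fully independent, so treat this as illustrative rather than exact), you're already well into the thousands before cables, lengths, or the piggyback cases:

    # Back-of-the-envelope count of permutations, using only the rough
    # figures quoted above (illustrative only -- the variables aren't
    # fully independent, so the real number differs, but the point stands).
    pd_levels        = 5    # Power Delivery maximum values
    bc_modes         = 3    # Battery Charging modes
    alt_modes        = 5    # alternate modes (Type-C only)
    plug_types       = 10   # plug types (not truly independent)
    signalling_rates = 6    # signalling rates

    print(pd_levels * bc_modes * alt_modes * plug_types * signalling_rates)  # 4500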
If it makes you feel better: you can make an ultra-cheap Type-C cable that's basically the same as your old Type-A cables and will act like your old-fashioned cable, except you can plug it in either way into any port.
Then force them to add appropriate clear markings (of the capabilities in a very simple format), or hit them with fines and deny them the ability to sell to the market without them.
The worst part about USB-C is that it can be implemented so wrong that it bricks your device! A Google engineer spent a while testing cables, and not infrequently they worked poorly, not at all, or outright fried devices. Granted, this was years ago, but now with DisplayPort and Thunderbolt 3 over the same connector, the situation has become even more vague.
The miswiring that bricked his device, swapping GND and VBUS, could also happen with older USB plugs. There's nothing specific to USB-C about it; USB plugs have had a defined polarity for the power pins since the very first version of the standard.
Either scenario, considered alone, is enough to ruin a downstream device. I don't think it's reasonable to worry about the combination of the two as different from either alone.
No spec is going to look good if you start blaming the spec when manufacturers sell things that don't conform to it. That is what the Google engineer found: these cables didn't conform to the spec. If you are going to do that then bricking the device is the least of your worries as USB is clearly to blame for this as well:
The real USB 3.2 issue is different. As a way of telling the user what they are buying, the USB 3.2 2x2 scheme is so bad it's almost comical. They could have insisted every device be marked with the maximum it actually supported. I guess making something like this mandatory was too much to ask:
[x] 5 Gbps
[x] 10 Gbps
[ ] 20 Gbps
[ ] Power Delivery
[x] DisplayPort Alternate Mode
They should be pressured (by law) to include all these details in a simple form (akin to nutritional labels in foods), either on the cable itself or on a sticker attached to it.
Ideally there should also be a few profiles, so that you know if you want a cable for your monitor, you get e.g. a "USB4 / profile 1" (and the cable could mark the profile it belongs to on top).
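Purely as an illustration of how simple such a "nutrition label" could be, here's a sketch of a machine-readable version; the field names and the "profile" values are made up for the example, nothing here comes from the actual spec:

    # Hypothetical cable capability label -- field names and profile values
    # are invented for illustration, not taken from any USB specification.
    from dataclasses import dataclass

    @dataclass
    class CableLabel:
        certified: bool
        usb_version: str        # e.g. "USB4"
        max_gbps: int           # maximum signalling rate supported
        max_watts: int          # maximum power delivery supported
        display_alt_mode: bool  # can it drive a monitor?

    # A cable you'd buy for a monitor ("profile 1" style in the idea above):
    monitor_profile = CableLabel(True, "USB4", 40, 100, True)
    print(monitor_profile)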
But no, thanks to their long-established BS you slap a USB C connector on that 12Mbit/s connection and congrats, you've got a USB 4 mouse.
No, but they could build cables with a "40Gbit link speed" that also work with a much slower mouse.
Devices don't have to have the superset. But if a consumer wants to buy a cable that has the superset of powers, they should be able to find such cables and buy them.
(Plus, what low-throughput device has low-latency needs that can't be covered by a cable able to push data to a 4K monitor at 120 Hz or more?)
To saturate a 40Gbit/s link you could send a 1kB update every 200 nanoseconds (that's 5MHz), assuming perfect efficiency.
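A quick sanity check of that arithmetic, under the same perfect-efficiency assumption (payload sizes here are arbitrary examples):

    # 40 Gbit/s link: how often you could send updates of a given size,
    # assuming perfect efficiency (no protocol overhead at all).
    link_bps = 40e9
    for payload_bytes in (64, 1000, 4096):
        rate_hz = link_bps / (payload_bytes * 8)
        print("%4d B -> %.1f MHz, %.0f ns per update"
              % (payload_bytes, rate_hz / 1e6, 1e9 / rate_hz))
    # 1000 B works out to 5 MHz, i.e. one update every 200 ns.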
Same for USB-C. It is just a standardized connector with a defined pinout.
If you force the standard to always provide 15 watts, then how will you do that on phones or a Raspberry Pi or other low-power devices? If you force SuperSpeed or SuperSpeed+ over the connector, then you won't see it on budget or midrange phones.
It makes perfect sense to differentiate between the supported protocols and the connector itself. Otherwise you would need a ton of different connectors that you may never need on your device.
For everything else the protocols are forward and backward compatible, so you just plug in the cable and it works.
My point is: the standards might be backward and forward compatible. But the cables and ports definitely aren't, and your average consumer has absolutely no way of knowing. They look absolutely identical on the outside, and when they don't work it's for completely non-obvious reasons.
1. the same connector is used for a wide variety of capabilities, and it can get confusing knowing what you can use together.
2. the names of these connectors are confusing.
You're complaining about the first, and that's a very hard problem. There are huge advantages to using the same connector and cables that can support 100W of power, 5K displays and also support peripherals that retail for less than $1 and be backwards compatible with peripherals over 20 years old. The disadvantage is the confusion you mention.
The complaint in the article is about the second, the stupid naming. There's no good excuse for that. It really exacerbates the confusion from the first issue.
But now it's on us and journalists just to not use the stupid naming. They've provided "marketing" names that aren't silly, so everybody should just use them, even if it's awkward. IOW "SuperSpeed USB 10Gbps", not "USB 3.2 Gen 2".
This, quite honestly, sounds wasteful. I am fairly certain that manufacturing these connectors that are much better is more expensive than either the old, less sophisticated, connectors or simply new lower performance connectors.
Well, in your example the connector and cable, specifically the cable, will have to be an expensive 100W PD cable for it to work correctly. If the cable were designed for the $1 peripherals, it wouldn't work with the 100W power delivery and the 5K display.
The sentence would have been correct without the word "cable", but without it the argument wouldn't stand, because you just can't grab any cable with the same connector and expect it to work with 5/10/20-year-old peripherals if the cable doesn't support it.
In fact, with USB the situation should be better, since all cables except the slowest ones are supposed to have a built-in chip describing how fast they can go. So in the MacBook example, in theory it should be able to detect the issue, switch to a lower resolution, and present an on-screen warning telling you what happened and why.
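On Linux, for instance, whatever the kernel learns from a cable's marker chip is exposed under sysfs; here's a rough sketch that simply dumps whatever Type-C port/cable attributes happen to exist (this assumes the typec class driver is present, and attribute names vary by kernel and hardware, so it makes no assumptions about specific files):

    # Dump whatever USB Type-C port/cable attributes the kernel exposes.
    # Assumes a Linux system with the typec class driver loaded; attribute
    # names and availability vary, so this just prints whatever files exist.
    import os

    base = "/sys/class/typec"
    if os.path.isdir(base):
        for entry in sorted(os.listdir(base)):
            path = os.path.join(base, entry)
            print(entry)
            for attr in sorted(os.listdir(path)):
                full = os.path.join(path, attr)
                if os.path.isfile(full):
                    try:
                        with open(full) as f:
                            print("  %s: %s" % (attr, f.read().strip()))
                    except OSError:
                        pass   # some attributes aren't readable as plain text
    else:
        print("no typec class entries found")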
I should be able to tell by looking at a connector:
* If it's claiming to be a certified cable/device or not.
* The USB standard it conforms to (version)
* The speed of data transfer it supports
* The maximum wattage it supports
* The maximum voltage it supports
* (the connector type, but this is obvious based on shape)
Physical connector technology and signaling protocol are entirely different things. I'd be fine with literally everything using the same physical connector, but knowing that I can't and shouldn't plug my 5VDC battery-charger into a 110VAC outlet.
I still have speakers that are connected by stripping a lampcord pair with my teeth, pushing a little button, jamming bare copper strands into a hole, and releasing a little button. I think you're asking for too much. Everything using one reversible connector that almost always guarantees 5VDC in a fallback mode is better than what we have today.
Can you tell by looking at it what speeds it supports? Is it USB 3 or USB 2? How about the power delivery aspect: how much power is it capable of transferring?
You might be able to tell some of those things from modern cables, if they're bragging about a speed factor, but otherwise it isn't clear.
Yes, I can determine what it is used for, because I bought it for a purpose. What you are describing is the same issue with barrel-style AC or DC wall warts, and those powered the world for a few decades just fine.
So yes, I bought a USB-C certified charger, to charge things. What does it do? It charges some things when it wants to, without providing any kind of indication why it doesn't work when it doesn't.
(The monitor should have a compatible cable bundled with it, though. There are some major cable-length and shielding limits when it comes to the highest-speed stuff like monitors and eGPUs with USB-C.)
(I think the question you meant to ask is why the MacBook can't drive an external 4K Thunderbolt display using its included cable, which is more interesting—IIRC, it can do so through an HDMI adapter plugged into the USB-C port, but it cannot do so over a USB-C cable. This still isn't about the cable, though; you can take the MacBook's cable and use it to connect an MBP to a Thunderbolt display just fine. Instead, it's about the MacBook's combined Thunderbolt/USB-C controller not supporting the recent-enough version of Thunderbolt to have the bandwidth over the wire required to feed a 4K@60Hz display. When an HDMI dongle is plugged into the USB-C port, you're taking a direct GPU->controller->HDMI path, which avoids the anemic old-Thunderbolt bottleneck path.)
A less extreme example is how the OnePlus USB-C charger won't charge the Nintendo Switch, even though they are both USB-C certified devices.
Edit: sorry, I just realized that I could have been misunderstood. I forgot that "MacBook" is a device that exists. Obviously(to me) by MacBook I mean the MacBook Pro.
Apple don't make a "USB-C data" cable - they make a USB-C charge cable, or a TB3 cable, and the TB3 one has the thunderbolt logo on each end.
They also refer to it specifically as "USB-C Charge Cable", everywhere.
IIRC it was a deliberate decision by USB-IF to not enforce any additional labels on USB cables and connectors beyond the USB logo.
Though, I do agree, having clear information directly on a USB Type-C cable on which features are supported would be much appreciated.
The USB logo on a USB 1.0 cable is supposed to be printed on the "upper" side. How many times did you insert it the wrong way anyway?
When USB 2.0 was introduced, it was a huge increase in speed. USB 1.1 is 12 Mbps whereas 2.0 is 480 Mbps (theoretical). When this happened, some camera manufacturers (Nikon I believe was one of them?) were identified as changing the marketing on their cameras to "USB 2.0". What this actually meant, though, was "USB 2.0 Full Speed", which is USB 1.1 speeds. USB 2.0 Hi-Speed is 480.
Consumers don't need this or want it. Manufacturers just don't want to say "USB <not latest version>".
You see the same thing with AT&T deciding its LTE iPhones are 5G because they want their network to sound more impressive than it is.
It's also why the strength bars on your phone show the strongest signal (2G/3G/4G) that your phone is getting, not the one it's actually using.
The truth matters. False advertising matters. Markets only work with a minimum level of trust. All of this legerdemain erodes that trust.
I'm Australian and this is one place where I think the ACCC (equivalent to the FTC/FCC rolled into one) has way more teeth than their US equivalents. The ACCC, for example, ended ISPs advertising "unlimited" plans that weren't truly unlimited (which was basically all of them). This also covered ISPs with hard or soft data caps where throttling would come into play. The ACCC also took ISPs to task on the truth in advertised speeds for services (which matters a lot for xDSL).
A more sensible standard would've been something like:
e.g. USB 3.1C, USB 3.0A.
Or put the letter first.
But this whole USB 3.2 Gen 1 = USB 3.1 Gen 2 or whatever is just pure nonsense.
Plus, the good thing about USB is it just works; you don't have to worry about some tiny subset of features (well, that's the theory anyway).
I've had enough of my monitor, mouse, keyboard and printer all using different connectors thank you very much.
Edit: making it make sense.
At least with the different connectors and cables I didn't have to worry so much and things generally worked when plugged into the right place.
Personally, I've had enough with my 1yo laptop, my 3yo desktop, my 5yo peripheral and that funny dongle I need for work all having different USB cables. USB was meant to save us from having multiple connectors but has done the opposite: every year there is a new USB cable/hole design.
I cannot be the only one here who has to inspect every online purchase for high-res pictures of the USB ports. I really don't care that my laptop has 10+ Gb/s potential. I just need to know that my current equipment is physically compatible.
At the physical layer, is it that the materials in the cables are getting better, or new ways of using the same materials?
At the protocol layer, is it newly developed computer science theory being applied or is it just old fashioned pragmatic engineering iteration, looking at usage and making the existing protocols more efficient with tricks and shortcuts etc?
So is it not actually serial any more?
Parallel refers to taking a raw bit stream and sending N bits at a time down N wires to a receiver who reads N bits at a time and reconstructs the bitstream. This was a common technique in the early days of computing because it is very simple to implement and low clock speeds meant it was often the only practical way of increasing the throughput of an interface.
As clock speeds increased, keeping each of the N signals in sync became increasingly difficult. This eventually led to parallel interfaces falling out of favor and serial interfaces becoming dominant.
The defining characteristic of a serial interface is that the data signal itself also serves as the clock signal which keeps sender and receiver in sync.
Modern "parallel serial" interfaces like USB 3 consist of multiple serial links which are multiplexed using a framing protocol of some sort. So from a signaling standpoint they are still considered serial interfaces, even though data is in fact being sent in parallel.
AFAIK, it's not; the SuperSpeed (USB 3) wires have their own independent negotiation, and the device switches to the USB 2 wires only when USB 3 is not detected. This also explains how the proposed VirtualLink alternate mode can have USB 3 without the USB 2 pair.
edit: yikes, looks like USB-C doubled the number of those separate pairs, for a total of 6 data pairs (2x RX, 2x TX, 2x USB 2.0 RX/TX)
Yes, and more expensive at that. Laptop makers totally hate USB 3.0+ for that reason.
The industry standard for internal digital connectivity was the FPC (flexible printed circuit), a very economical and ergonomic solution, but then came USB 3.0.
Regular polyimide FPC can't reliably handle the high-frequency signalling. Still, many laptop makers use it, with regrettable results.
The few solutions left are micro coax cable, running twisted pair inside the case, or passing it over a hard PCB bridge. All are very expensive, and hard on assembly lines.
This is why in most budget laptops manufacturers do the following trick – they put all USB 3.0 ports on one side, and 2.0 on the opposite. This way they only have to run the low-frequency, low-lane-count USB 2.0 to the other side of the laptop.
Only very recently has an FPC maker come along that makes custom cabling with a proprietary dielectric that can handle 3.0. And they charge more for it than for a hard PCB bridge.
Even if the cable material issue is solved, you still have to find a way to connect it to the PCB. High-frequency connectors are not cheap either.
Color me unconvinced on one point though:
> Only very recently, there came an FPC maker that makes custom cabling with some proprietary dielectric that can handle 3.0.
A brief glance at IPC-4203A (which has been around since 2013) tells me this is a consequence of market cost sensitivity, not one driven by the inherent performance limitations of available material.
The latest IPC-4203B was published in March 2018, just last year. Is there something new worth looking into here, or is this "recent advancement" just a manufacturing-process-driven one which has made high-performance FPC more viable in the throwaway consumer electronics market?
Each successive frequency increase has required more complex PHY design.
On the digital side: more sophisticated bit encoding schemes
On the analog side: tighter tolerances for clocking and signal jitter, more sophisticated transmitter and receiver technology (better emphasis/deemphasis, better equalization)
Here's a taste of the PHY changes:
USB-C has 4 differential twisted pair data lines.
USB 2 has a single twisted wire pair, specced for moderate speed.
USB 3 replaces that with two pairs with much tighter requirements, one for transmit and one for receive. This involves a tenfold speed boost (0.5 Gbps to 5 Gbps), with much more modern transceivers using much higher frequencies on much better wires. But it also drops the length a single passive cable can be from about 5 meters to 2.
USB C doubled the number of pins while making the plug reversible, so they went ahead and made the special case of C-to-C cables connect to all the pins and have four pairs of wires.
But that's only a doubling from 5Gbps to 10Gbps. How are we getting up to 20 or 40? It's mostly coming from restricting cable length even further, down to 1 meter or only half a meter. But on the bright side you can get an active cable, with chips inside, that can get you back up to 1 meter or 2 meters or 60 meters. As long as you're willing to pay enough.
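To put a number on the "more sophisticated bit encoding schemes" point: USB 3.0's 8b/10b line code burns 20% of the raw bit rate, while the 128b/132b code used at 10 Gbps burns only about 3%, so the Gen 1 to Gen 2 jump is slightly more than a doubling of usable payload. A rough calculation, ignoring everything above the line code:

    # Rough usable-throughput comparison across line codes
    # (ignores packet/protocol overhead above the line code).
    def payload_MBps(raw_gbps, data_bits, total_bits):
        return raw_gbps * 1e9 * (data_bits / total_bits) / 8 / 1e6

    gen1 = payload_MBps(5, 8, 10)      # 8b/10b    -> ~500 MB/s
    gen2 = payload_MBps(10, 128, 132)  # 128b/132b -> ~1212 MB/s
    print(gen1, gen2, gen2 / gen1)     # ratio ~2.42x, not just 2x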
I have a couple of situations where I would like a 7m-ish Thunderbolt 3 cable but can't seem to find one, even for the multiple hundreds it would probably cost.
It's really disappointing that Display Stream Compression hasn't been built into everything for the last 5+ years, and is only now barely getting into equipment. 2:1 or 3:1 compressed display data looks fine, and being able to add more pixels or bit depth or refresh rate far more than makes up for the tiny losses. Without it, tons of screens are stuck at significantly reduced framerates or 4:2:0 subsampling.
I don't understand how this isn't simply ratifying/renaming current TB3-on-Type-C-connectors and this cryptic sentence in the page doesn't help. Anybody know?
This isn't to say that such a renaming might not be a good idea! But it would be nice to know if my current Type-C ports with TB and DP support were in fact already "USB 4.0."
Also, speaking of nomenclature: notice that according to the press kit slide show, USB 3.1 Gen 2, USB 3.2 Gen 2x2, and USB4 all have the same "Alternative Branding": "SuperSpeed+". Madness!
Oh wait that's the issue USB was supposed to solve in the first place....
Don't forget that higher-power PD cables (it can go up to 100 W!) need thicker, heavier-gauge wires as well.
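For a sense of scale, simple P = V x I arithmetic (ignoring connector and contact limits): 100 W PD runs at 20 V, which already means 5 A through the cable, and at the legacy 5 V it would be an absurd 20 A.

    # Current required for 100 W at the standard USB PD voltage levels (P = V * I).
    for volts in (5, 9, 15, 20):
        print("%2d V -> %.1f A for 100 W" % (volts, 100 / volts))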
I guess technically you could make a USB-C cable that transfers signals optically and has heavy-gauge wire for power, but that sounds unwieldy and expensive.
You sure about that? The optical cables will still have copper wires for power transfer. Only the high-speed data is carried over optical fiber. Eliminating electrical interference from the data path might even permit higher voltages, and thus thinner, more flexible wires for the same power.
Also it would need to be marked on the cable somewhere.
There's no reason that an unpowered device should have the same connector as a powered one, and quite a few reasons why it shouldn't.
The USB spec is too ambitious, and it's inevitable that some use cases are mutually incompatible.
So the end result is that no one actually has “all of the above” cables, but rather a collection of cables with different, often invisible constraints. Maybe the move to free licensing will bring TB cables down to USB cable prices, but it remains to be seen.
Edit: Ah, I missed "it will not be exactly Thunderbolt 3 as its functionality will likely be different". That's as clear as mud, then.
Doesn't this open up every USB system (all systems?) to arbitrary, uncontrolled memory access including silently flashing new firmware/microcode to system components?
The option also exists to require approving or whitelisting devices before they are allowed to work over Thunderbolt.
I don't think that matters. If you care about full PCIe through a cable you presumably are charging through a different port.
> As for copper, I understand you don't get 40gbps with a 3m cable
Actually you do with TB: that's the spec and you can find independent verification via a web search.
Much of the IP, especially for the PHY, is licensed from other companies, especially in the case of AMD.
AMD has only recently introduced CPUs that can go into premium laptops, and they still don't have a CPU with more than 4 cores and an integrated GPU.
With Intel integrating TB into the CPU directly, it also means that no one is currently making discrete TB controllers. Titan Ridge is the last controller you can buy as a discrete unit; these can technically run on AMD systems, but the cost doesn't make it sensible.
With USB4 there will be TB3-compatible IP from multiple 3rd parties, which AMD and OEMs could license to integrate into the SoC directly or into the system itself via PCIe.
Two, I'd be interested to know if AMD bans board manufacturers from including TB or if they just don't do it because their heuristics say users don't care or want it.
I'd imagine now with the rise of USB-C it would be the latter. Before that AMD was stuck on a 5 year old desktop platform that probably couldn't run thunderbolt ports even if board makers wanted to include it.
No, AMD boards already include the required TB hardware. They just can't enable it (yet) for whatever technical or legal reasons. Someone hacked them to enable it (https://www.youtube.com/watch?v=uOlQbP63lDQ)
Intel has just now opened Thunderbolt up (the topic of this article), despite the announcement coming two years ago.
In my opinion, the mess is actually much worse with USB-C, as with earlier connectors I at least had a chance of guessing whether the cable/connection will work by looking at the connectors. With USB-C? No idea.
Most devices are USB, they work and charge over the connector just fine. On laptops, connecting displays work as well. They will either use TB or negotiate for HDMI/DP which is also fine.
The only real outliers are crappy companies which deliberately break protocols like Nintendo on their Switch.
There are USB-C charge cables that do no data (to save cost).
>On laptops, connecting displays work as well. They will either use TB or negotiate for HDMI/DP which is also fine.
That is assuming the cable could do TB, which is also not true.
Once you get USB-C everywhere, to everyday normal users, they expect it to plug in and work.
Oh, wait, Ethernet is a thing. I wonder how they just completely missed that when making USB 3. The only thing that should be differentiated between cables is max power delivery rating and data rate.
The OS knows the USB controller hierarchy, knows what the device wants, etc.
Cables might be a mess, but it's because low-level software is absolute bovine manure biogas plant on fucking fire level shit when it comes to user experience.
Just a few weeks ago I had to spend about 1.5-2 hours (not exaggerating) trying to get 2 Logitech gamepads working on Win10. (It worked fine at first, but wouldn't survive a reboot. It worked fine in Linux.)
USB type C cables are active (they have silicon in them and handshake themselves with the host rather than simply being wire (possibly plus a resistor) as with old USB cables.)
Cost will come down as volume goes up. Currently the only way to buy a cheaper cable is to buy a noncompliant and possibly dangerous one.
> Life would become so much easier if everything was USB C to USB C
I would agree with the added caveat of "... and if you could tell by inspection what sort of cable you have: speed, power, and protocols supported -- and some way to verify conformance"
I have no doubt overall cost will go down, but I'm curious how much we can reasonably expect, and whether that difference is actually meaningful.
Consider Type-C qualification testing as it stands... it's quite convoluted. Any cost savings from volume economies of scale would be all for nought if testing such high-performance products can't scale proportionally. Given that Type-C was specifically designed for the cost-sensitive consumer market from the outset, and how capability keeps increasing in ways that make one wonder if it's even the same fundamental underlying technology anymore, I wonder how close to the "market noise floor" a qualified device truly is.
In the defense/aerospace industry, it's commonplace for a $0.07 electronic component to have $10 of testing behind it before profit and indirect costs are baked in. Same outcome with MIL-DTL-38999 interconnect, which has literally been around for longer than I've been alive, supplied by multiple competing manufacturers, and leveraged by multiple industries (not just mil/aero)--testing requirements ultimately keep costs high. Although not the mil/aero market, it wouldn't surprise me if Type-C is already close to a similar point of diminishing returns.
Once they become ubiquitous I imagine the usual forces will drive the cost way down so we'll end up with something like the current mix of good and crap cables. Hopefully nobody will advertise 100 W over 30 gauge wire, but probably...!
That they gave you the USB-A to USB-C cable as your go-to cable is a red flag that they are peddling the connector as a marketing bullet point.
A PC and its appendages is a horrendous radio squawk-box and should be kept away from anything RFI-sensitive. The FCC 'must not cause harmful interference' clause is pretty worthless.
"Thunderbolt, uh, does not expose PCIe lanes directly. Thunderbolt is an MPLS-like packet switching network that can encapsulate PCIe TLPs over a PHY and MAC without a spec, chips without documentation, and software with barely any support."
Additionally, the cables are twisted pair, which among other things minimizes emission. And high-speed cables are also supposed to be shielded.
I personally believe that it is real (just go camping where there's no service and you'll feel better) but it's hard to prove without decades of study and money, which won't happen because there's so much money to be made.
There are other reasons why camping makes you feel better. Earthing/grounding gets you in touch with the earth's natural Schumann resonance (through the canvas tent), plus the fresh air. Also, getting away from distractions.
There's a real study on this, but they don't speculate what the cause is: http://time.com/4656550/camping-sleep-insomnia/
I had to look up what the point of 40Gbps was, and the industry evangelization site https://thunderbolttechnology.net/ explains that it will allow driving one 5k display or two 4k displays, which is not possible with the 20Gbps offered by USB 3.2 or Thunderbolt 2.
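Rough numbers behind that (uncompressed 24-bit RGB, ignoring blanking intervals and protocol overhead, so real requirements are somewhat higher): a 5120x2880 panel at 60 Hz already needs about 21 Gbit/s of pixel data, which is more than a 20 Gbit/s link can carry, and two 4K panels land in the same boat.

    # Uncompressed video bandwidth: width * height * refresh * bits_per_pixel.
    # Ignores blanking and protocol overhead, so real requirements are higher.
    def gbps(width, height, hz, bpp=24):
        return width * height * hz * bpp / 1e9

    print(gbps(5120, 2880, 60))   # ~21.2 Gbit/s -- one 5K display
    print(2 * gbps(3840, 2160, 60))  # ~23.9 Gbit/s -- two 4K displays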
This also helps external GPUs, where TB2/USB 3.2 is the equivalent of 2.5 PCIe 3.0 lanes, while TB3/USB4 is 5.
With that, it seems the goal is to be a viable alternative to PCI-e and HDMI, rather than just improve on today's USB3 speeds for existing device classes.
We need another doubling in thunderbolt's speed for it to not have a penalty when using eGPUs
The real thing is that TB3 only has four lanes, while Intel's server (and I believe some desktop) chipsets have 16-lane PCIe. Not sure about the mobile chipsets (again, I no longer follow this as closely as I used to). So indeed you won't be able to pump data into an eGPU as quickly as you could a high-performance internal one. Or perhaps we should add "yet"?
I expect they'll continue the trend of inscrutable capability differences in USB-C ports, cables, and devices. To get the USB4 40Gbps mode (effectively TB3) it's still going to be a more expensive active Thunderbolt-style cable, not a vanilla USB cable. Which means that standard markings for those cables will (hopefully?) be part of the USB4 spec.
Unfortunately, knowing USB-IF the major change from the TB3 to USB4 spec will be to drop the current Thunderbolt symbol on the ports and cables and replace it with a new variation of USB-super-hyper-mega-speed symbol to make cables as confusing as possible.
For the most part they don't need colors because all of the ports are the same. One recent exception was that the 2016/2017 MBPs had inconsistent Thunderbolt bandwidth because they didn't have enough PCIe lanes. IIRC left side had full 40Gbps ports but the right side had 20Gbps. The 2018 version has full speed on all four.
EDIT - another potential point of confusion, they don't mark Thunderbolt ports vs USB-C ports. On any given device they're all the same, but they expect you to know that the 12"-mini-macbook is just USB-C while the Pro and Air are all Thunderbolt.
So yeah, it'd be helpful to have some symbols on the laptops. They do label ports on their desktops. I assume this comes down to Ive not wanting labels on his beautiful chunk of aluminum where someone might accidentally see them while using it.
If you plug them in while the laptop is asleep they won't start charging. If you plug them in while the laptop is awake they'll start to charge, and they'll keep going if the computer goes to sleep later.
If your laptop isn't charging itself while plugged into the wall, you have other problems.
It'd be harder to do markings like this on USB-C ports though. They don't have a big visible chunk of plastic in the middle.
Beyond just switching to brand + version number (ie [USB logo] 4, [thunderbolt logo] 3, etc.), I don't know of a great way to really go about this. Otherwise you end up with the super speed championship edition thing you posted, which isn't going to help anyone really.
I think the color one was a great idea on their part in that you didn't have to look too close to see which port you were plugging into, but all it takes is one detractor to deviate from it (say if MSI came out with a gaming laptop with red USB2 ports), and then the whole idea is shot.
Some people are complaining about how this doesn't match expectations, which are that you get the same speed just by using the right shape of cable (and don't care about cable specs). I think that the reality is that this is really pushing the envelope, and 40 Gbit/s isn't that far from typical memory bandwidth (order of magnitude). There are only so many lanes of PCIe to go around, and you're going to end up with some ports getting more than others. We already have problems in Ethernet land with the different cat 5e / cat 6 / cat 7 cables, where bandwidth autonegotiation is mandatory. Most people just want it to work, and if you want your full bandwidth you have to pay attention to the exact specs of every device in the signal path because you can't be sloppy the way you could with 100BASE-T or even 1000BASE-T.
That's a bit of a rant, but I just think that some of the problems with user experience in this area are just down to the fact that this really is a lot of bandwidth to try and push through an external cable, and you end up with the simultaneous goals of having high bandwidth, lots of ports, having all ports be the same, and not charging an arm and a leg for a laptop. Sacrificing bandwidth on a couple ports makes sense to me, in this context.
Seems like the user experience pays, because I'm not sure how I'm supposed to know which ports have the bandwidth.
It's not fast enough for next generation use cases: external displays, eGPUs, or modern SSDs.
Oh, and they will either be short or require power to run the cable itself.
So I don't know if I will be able to tell the difference between 5k and 8k in 27" size, but I know that 4k is not enough.
8K is useful for crisp lines and text on a big close monitor.
Ultimately, I just want every monitor and TV to support a single universal interface, fast enough for the highest resolution/refresh rate. I don't care what it's called or how it works. If "USB 4.0" can be it, great.
It actually worked really well for that use case (coding/ web browsing). It probably wouldn't work at all for anything stressful. Getting the driver set up was a little challenging compared to the usual no effort required.
The website for the product... well frankly not good.
I personally use a usb 2.0 NIC so that I can be packet capturing on multiple switch/vlan/ingress or egress at the same time.
Any supposed USB 2.0-based display connectors (I’ve never seen any, I’m just speculating) would have been some VNC-like contraption involving an emulated frame buffer on the host.
On a tangent, I did see a ~2004 DVI projection-screen tv at a thrift shop once but I couldn't figure out how to get it to output audio. Then it hit me, it uses regular old RCA cables!
I had to add a few things to manage expectations and more accurately reflect reality, but I think we've got it now.