M1 Thunderbolt ports don’t fully support USB 3.1 Gen 2 (eclecticlight.co)
272 points by ingve on April 18, 2022 | 318 comments



Apple seems to lack the connectivity options competing chips offer, as seen with the limit on external screens on some M1 models and these USB limitations. It's easy to underestimate the amount of work that external connectors need the chip to do in order to work optimally.

It makes sense, of course, as the M1 evolved from a mobile chip with at most one port (USB C if you're lucky) and not a lot of host traffic. I hope Apple fixes these issues soon, or at least acknowledges them. More likely than not Apple will just ignore the problem and tell their customers to buy Thunderbolt devices instead.


> as seen with the limit on external screens on some M1 models (…) I hope Apple fixes these issues soon

It’s only the base M1 used in the low-end devices that has limited support for external screens. An MBPro with M1 Max can drive 3x 6K (DP 1.4) plus 1x 4K (HDMI 2.0). That seems like more than enough for a laptop. Sounds to me like these issues have already been fixed.


How can you call a $2000 MBPro or a $1700 iMac low-end devices?


What are you talking about? Both the 13" MBPro as well as the iMac start at $1299.

The $2k MBPro with M1 Pro can drive 2 external 6k monitors, which seems reasonable. If you're the kind of user who needs 4 external displays, you're probably going to want the M1 Max anyway.


Sorry, I used conversion from prices in euros; the base M1 iMac is around $1630 converted here. That's not by any means cheap or entry-level pricing, and no, it's not OK even for the $2k MBPro to only support 2 monitors at maximum. The kind of user who needs 3 displays doesn't need an M1 Max anyway. I'm that user.


Sorry again, it supports up to 3, which is actually fine for a laptop. The 13-inch MBPro only supports 1. That's not fine. That's not entry level.


Mac Mini is under $1000 and it’s the same chip.


Mac Mini is capped at two external displays. The 13-inch Air and Pro are capped at one external display since they have a built-in display.

The 14- and 16-inch Pros support 3+ external displays plus the built-in one.


Like you said, my M1 Mac mini “officially” supports two monitors plugged in directly.

It’s nice that DisplayLink connections let an M1 machine have unlimited displays, and that the performance of their driver under Big Sur and above has been fantastic (for me) for the past half year driving 2 extra displays. (Four total, 1 is vertical)

I sometimes add an iPad in Sidecar mode, but that needs to connect before DisplayLink or it will error out, likely since the mini is flabbergasted at the number of monitors it sees attached at that moment. Since my DisplayLink connections are both on the same ThinkPad dock (1 built-in, 1 USB), I just toggle the dock off, connect the iPad with Sidecar, and turn it back on. After a moment everything is working as expected.

BetterTouchTool has a newish feature to capture your current window layout across all open apps and displays and lets you assign a shortcut to put everything back where you intended. I use it after first boot if needed, and keep one for ‘4 monitors’ and one for ‘5 monitors’ (when using Sidecar with iPad)

On-topic: Thankfully the Apple Silicon USB ports are more than fast enough for DisplayLink adapters, which (in my experience) often use plain USB 3.0, aka USB 3.1 Gen 1, aka USB 3.2 Gen 1.


They still don't support DP MST, so you have to use multiple TB ports or use a monitor that supports TB daisy-chaining.


Or a dock. Not disputing the DP MST, just noting there are docks that can drive 2 x 6K.

CalDigit's new dock takes a single TB4 cable to drive, and can do:

    - 2x TB4
    - 1x DP 1.4
    supporting:
    - 2x 5K60, OR
    - 1x 8K60, OR
    - 1x 4K144, OR
    - 1x 1440p240
According to https://dancharblog.wordpress.com/2021/02/05/usb4-tb4-docks/

Or, as CalDigit puts it (see final paragraph):

Single 8K or Dual 6K Monitors

Users on Windows can connect a single monitor up to 8K resolution, and users on macOS can connect up to a single 6K 60Hz monitor.

When connecting dual monitors Windows users can connect up to dual 4K 60Hz monitors, and macOS users on M1 Pro or M1 Max computers can connect up to dual 6K 60Hz monitors. Users on original M1 computers cannot connect dual monitors as original M1 computers do not support dual displays.


A daisy chaining monitor is essentially an in-built dock in most cases.

I work on Displays at Microsoft :)


How does it do 2x 5K60?

The only true 5K monitors I know of are the LG 5K and the Apple Studio Display, which are both TB or USB-C monitors, not DP. 5K60 takes ~25 Gbps; multiply by two and the single TB cable as input can't handle 50 Gbps.
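
Rough numbers behind that ~25 Gbps figure, as a quick sketch (the 24-bit colour depth and the ~20% blanking/protocol overhead are assumptions on my part, not exact DP timing):

    # Back-of-the-envelope bandwidth for one 5K (5120x2880) stream at 60 Hz.
    width, height, refresh_hz = 5120, 2880, 60
    bits_per_pixel = 24          # assuming 8 bpc; 10 bpc would be 30
    overhead = 1.2               # rough allowance for blanking/protocol overhead

    one_stream = width * height * refresh_hz * bits_per_pixel * overhead
    print(f"one 5K60 stream:  ~{one_stream / 1e9:.0f} Gbps")      # ~25 Gbps
    print(f"two 5K60 streams: ~{2 * one_stream / 1e9:.0f} Gbps")  # ~51 Gbps, over TB3/4's 40 Gbps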


Display stream compression. You can only do it with the ASD as the LG doesn't support DSC.


Ah! Didn’t realize the ASD supported DSC. Ty!


Apple doesn't make low-end devices.


Isn't the Mac M1 mini $699?


Did you just compare the $700 Mini to a $140 ECS Liva or some $200 HP or Asus Chromebox?


Did you just compare a $200 HP or Asus Chromebox to a $0 free 15 year old Dell tower from your local Freegeek?

We can do this all day.


Do you think a 15 year old Dell is going to match the performance of any new machine? "Old" is not the same thing as "low end".


> Do you think a 15 year old Dell is going to match the performance of any new machine?

Well, yes? We haven't gotten that much faster since 2010, if you compare true top end then with true low end now.

But since we're talking Macs and Chromebooks, let's pull up those:

- Geekbench MacBook Pro 2010 15": 2.66GHz Intel Core i7 (i7-620M): 457 single, 972 multi

- https://www.ebay.com/itm/304426589491?hash=item46e13d1d33:g:...

And a currently shipping Chromebook, when sorting price low to high:

- HP 4BS38UA HP Chromebook 14 IPS HD (1366x768) Intel Celeron N3350: 288 single, 523 multi

- https://www.amazon.com/HP-Chromebook-1366x768-Bluetooth-14-c...

In this case, the 2010 Mac smokes the Chromebook. To be fair, that model became available in 2018.

So, something current?

Acer Chromebooks w/ Intel Celeron N4020 have pretty bad benchmarks (~320-460 single, ~320-500 multi) but the CPU itself rates better:

- Celeron N4020 benchmark: 427 single, 750 multi

That's in currently shipping gear, like this HP introduced fall 2021:

https://www.amazon.com/HP-Chromebook-Micro-Edge-Portable-14a...

Given these scores are comparing 2010 laptops with 2021 laptops, and the 2010 laptop wins (and can be bought at around the same price!), certainly if comparing a 15 year old workstation with cheapest available machine today, yes, the 15 year old Dell will exceed the performance of at least some new machines.

Note:

- Xeon in Mac Pro in 2010 rated 611 single, 6280 multi

By comparison, in 2008 Mac Pro workstation:

- Intel Xeon E5462 2800 MHz (8 cores) rated 420 single and 2540 multi

Dell workstations at that time supported 5400 series Xeons.


And the M1 Mac Mini gets... guess what? 1744 single core and 7732 multi core.

https://browser.geekbench.com/v5/cpu/search?q=m1+Mac+mini

The Air gets 1744 and 7711.

https://browser.geekbench.com/v5/cpu/search?utf8=%E2%9C%93&q...

So no, by your own ranting the M1 mini is not a low-end desktop easily matched by free 15-year-old hardware.


> by your own ranting the M1 mini is not a low-end desktop

Sharing research is ranting? Anyway, the post I replied to asked:

> "Do you think a 15 year old Dell is going to match the performance of any new machine?"

Interesting proposition!

So I responded to "any new machine" using the earlier Chromebooks as the baseline.


What's the power draw on your fifteen year old Dell? What's the thermal displacement? What generation of NVMe drives does it support?


ECS Liva has 4GB RAM and 32GB of storage with Windows 10.

It is going to be extremely limited in what it can do.

In that case the iPad at $329 becomes a legitimate competitor and it includes a screen.


The current version of the Liva has a bit more generous specs, at up to 8 GB of RAM and up to 128 GB of internal storage. A Raspberry Pi (if you can find one) or one of the Rockchip SBCs could also be a low-end desktop. I think the point is made, though, that when I mention the true low end of desktop machines the Mac Mini is not even part of the same conversation.

https://www.ecs.com.tw/en/Product/LIVA/LIVA_Q3_Plus/specific...


I’m pretty sure you’re the only one making that comparison.


https://www.phoronix.com/scan.php?page=article&item=apple-ma...

Does a low end desktop trounce the previous generation Macbook Pro?


If you think either of those computers holds a candle to an M1 Mac mini I don’t know what to tell you.


I don't think you have to hold a candle to the M1 Mac mini to qualify as a low end desktop.


Of course not, because the M1 Mac Mini is not a low end desktop. That's the whole point multiple people have missed here, and I'm glad you're not one of them.


> the M1 Mac Mini is not a low end desktop

It's literally Apple's entry-level desktop machine.

The fact that there are even lower-end options doesn't mean that that it isn't a low-end machine. It is a very large spectrum, from a Raspberry Pi on one end to dual-cpu 2x64-core Threadrippers with gigabytes of RAM on the other. The Mac Mini is certainly on the lower end of this spectrum.


Let's pretend we're Slashdot for a second and use a car analogy. The least expensive part of Alfa Romeo's line or Ferrari's line is not a low-end car. Even Porsche with the Cayenne, much lower-tier than the rest of their stable, is not selling a low-end vehicle.

There's a very large spectrum, yes. Apple's least expensive system is maybe near the bottom of middle tier with no options but can be ordered in a configuration pushing $2000. Their most expensive machines are near the top of the middle tier or the bottom of the top tier. I think you meant to mention terabytes of RAM for the high end rather than gigabytes. I think of things like the Talos II as top-end workstations - something that starts around $9k and can easily be configured to close in on $40k for a single desktop workstation.

None of this suggests that because Apple doesn't sell a less expensive machine, they cater a product to the low end. They actually, truth be told, sell a less expensive machine than the Mini, because the Air includes a screen, keyboard, pointing device, and battery in its price. If you compare an M1 Air to the low end of laptops and other mid-tier laptops, you'll see it's closer to mid-tier than low-end in pretty much every way.

https://secure.raptorcs.com/content/TL2WK2/purchase.html


I very specifically do not think they do. I asked if the parent post, by implying the Mac Mini is an entry level desktop, intended to compare them. Context is fundamental to reading threaded discussions.


I’m not following what you are saying if my interpretation was wrong then, and this comment didn’t really clear it up for me. But I guess we can just agree to disagree?


I think you might be agreeing. They are saying that the M1 Mac Mini is far more powerful than these low end devices, hence "Apple doesn't make low end devices".


Exactly. Thanks for reading along.


The fact that I can't connect more than one monitor to my M1-based iMac is a fucking travesty.


May I ask why? I assume you were aware of this limitation, as Apple was pretty clear in the specs about it, that it only supports one external monitor.

If you pull the trigger on a purchase without reading the specs, then it's pretty much on you, no?


It didn’t become widely known until after the M1s had shipped - I was caught unaware after someone at work couldn’t connect two to theirs.


Pretty much this. At the moment, I am trying to sort this out for my parents who bought a MacBook Air to replace a 12" MacBook and want to know if it is still possible to use a pair of 23" 1080p HP displays with it.

They don't care about resolution so much, nor does my brother who tested this setup with his 2017 MacBook - bad keyboard and all.

(On that front, last I heard, there was a hidden preference for non-HiDPI displays which has been completely removed, causing some issues.)

So far as I can tell, it is possible to run more than 1 display on the M1 - but it requires using a version of the DisplayLink driver with some extra [hardware][1], which is not exactly straightforward and I question how well this works in practice.

Sonnett is not the only one doing this, but it's the first one I found.

[1]: https://www.sonnettech.com/product/m1-mac-dual-displayport-a...


At least one person at work is using something like that to drive two displays from the M1, so I suspect it can be made to work well enough.


I have two external monitors on my M1 Air. The "main" one is connected via HDMI and runs at 120Hz, and the other is connected via a DisplayLink-compatible adapter, which is connected via USB to the same USB hub as the first monitor. That hub is then connected with two USB Type-C connectors+cables to the laptop. It works quite fine. There is a small caveat, though. The monitor connected with DisplayLink has a very small input lag, but for light work it works just fine. I would not tolerate it for a main monitor (some people would, I guess), but it definitely is much better than not having a monitor.


You mentioned input lag - how noticeable is this effect and is it something you can "hide" by increasing the tracking speed or other settings?

While it might be down to the mediocre Logitech gear he's using, one complaint I have gotten on occasion is how the cursor does not always keep up with his mouse inputs.


The effect to me is noticeable. If I gave it to my SO or some average user I don't think it would be noticeable. But at the same time, we're all here, so we're not average users. However, on that monitor I have a terminal or documentation, so for me it works quite fine. Increasing the tracking speed or mouse refresh rates won't help in this case.

I will say though that the system is rock solid for me. I've read complaints about monitors not waking up or similar, but I've never experienced anything like it.

Also the mouse does keep up. The 60Hz refresh rate will keep up with it. There is just a small amount of lag, but that's that. Better than not having an additional monitor in my case.


Good to know. He's currently using both screens via DisplayPort with USB-C going from the HP hub back to the computer - might need to consider other options to see if they will work better.


What adaptors/dongles are you using? I could really use a solution that keeps at least one monitor 120hz


The hub is a Ugreen-branded one with two USB Type-C connectors which you can plug directly into your laptop. It also has 2 HDMI and 1 USB Type-A connector, a USB Type-C power input, and an SD + micro SD card reader. I don't advise plugging this hub directly into your laptop because it heats up, and that heats up the laptop's battery, which I'm sure doesn't like it. I just keep it on two short USB Type-C cables so the heat doesn't transfer to the laptop. I've been using it for two years like this.

The DisplayLink compatible adapter is then connected to that hub via USB type A cable. It is made by Startech and has two DisplayPort connectors as outputs. I only use one out of those.


Yes it was? It's literally on the tech specs page and was widely reported:

> Video Support

> Simultaneously supports full native resolution on the built-in display at millions of colors and:

> One external display with up to 6K resolution at 60Hz

https://www.apple.com/macbook-air/specs/


> It didn’t become widely known until after the M1s had shipped

This is nonsense.

For every product Apple sells they list the technical specs on the website at the same time it is on sale.

And it clearly states for the iMac that it supports a single display.


I get what you are saying, but most people don’t question if a computer can support 2 external monitors in 2021/2022, so it’s very unlikely they would glance at that in the specs.

Hell, my 2016 MBPro can push at least 4 external monitors. Maybe more, I honestly don't know; it's not like I ever needed that information.

It’s like a more extreme version of buying a phone and finding out it doesn’t come with a charger. Sure, it’s probably somewhere on the box, but who wouldn’t be surprised?


>but most people don’t question if a computer can support 2 external monitors in 2021/2022

And yet Apple has shipped just that. Apple is notorious for removing features that have been standard for years so they can sell you the "solution" at an extra cost (a.k.a. the Apple tax), so, if you just impulsively buy the latest 'shiny' without reading the spec sheet, then sorry, but that's 100% on you.

It's the basics of leveraging market segmentation, a tactic Apple has perfected. They determined that whoever buys a MacBook Air probably has no need for more than one external monitor, so they removed that feature that was a given on Intel-based systems to save cost, and assumed that people who run multi-monitor setups are more likely to be businesses and professionals who can pony up the extra cash for a more expensive M1 Pro/Max model. It's how Apple, and others, squeeze more money out of their customer base, and with their own silicon they can now charge you extra for what was standard on basic Intel chips 8 years ago. Enjoy ;)

I also don't like this practice, which is why I don't own any Apple HW, but I can't blame Apple here when it's their customer base at fault for buying their products without reading the spec sheet where Apple was transparent about the limitations of the basic M1 chip.


I guess we just aren’t going to agree here. While yes, you are correct people should check that stuff in general, again I can’t blame people for not doing so. It’s just such a basic thing we’ve assumed computers can do for - as you pointed out - roughly a decade.


> but most people don’t question if a computer can support 2 external monitors in

But we are talking about the iMac here, i.e. it already has a built-in display.

Is there much of a market for 3 displays ?


As a video editor most definitely. And yes there are more professional machines but frankly their low to mid-tier models have a ton of firepower and work great for me these days. I used to drop $2500-$3000 on my Macs to edit but I can do great with $1350-$1800 now, especially since I don't need After Effects/Motion much


But the comment says "M1-based iMac". The M1 iMac was released 6 months after M1 was first released with the Mac Mini, Macbook Air/Pro. I can understand if you bought the original M1 Air/Pro and were surprised it doesn't support more than 1 external display since it was indeed surprising behavior. But 6 months after the release of the chip is plenty of time to be informed on what you're buying.


> Simultaneously supports full native resolution on the built-in display at millions of colors and: One external display with up to 6K resolution at 60Hz

Source: https://web.archive.org/web/20201110215047/https://www.apple...

Literally stated on the tech specs day of launch. Yes, Apple historically under promises[0] and over delivers, but in this case they stated exactly what it supported.

[0] To my knowledge Apple has never claimed support for more than 3 external monitors on a laptop, but using an Intel MBP I was able to drive nine, all GPU-backed, without using DisplayLink.


I've got to ask: What on earth did you need 9 monitors for!?


Was just testing what was possible, didn’t actually use it in normal daily use.


Presumably because similarly priced Intel Macs could connect to 2 or 3 displays with their pathetically underpowered iGPUs?


Not really, every single Mac Mini I can think of in the past decade has been able to support more than one display. Honestly, it's the kind of thing I just assume an expensive machine (yes, $700 is expensive) would support.


Yeah I don’t get how so many people are going “you should’ve read the specs!”

Who confirms features that virtually every computer has had for a decade?


I'm an absolute fiend for screen real estate, and I'm not excusing this. It's a reason I didn't jump onto the M1 bandwagon myself when the first models came out.

But I don't see it as nearly reaching the level of being a "fucking travesty", in an era of affordable giant 4K monitors.

At any rate, I'm certainly glad newer models fix this.


The core reason is that I want both an extra screen AND to be able to draw on my Cintiq drawing screen. On an iMac, I can't. It's stupid.


Why can't you? I am typing this from an M1-MPB with two monitors. I just have 1 through thunderbolt and 1 through the hdmi.


You clearly have an M1 Pro or M1 Max MBP since you have an HDMI port. The base M1 models support the built-in display and 1 external display, except the M1 Mini, which is allowed 1 monitor over HDMI and 1 external display over TB. The Pro, Max and Ultra M1 models allow more displays.


I have a "MacBook Pro (13-inch, M1, 2020)" and I clearly can't have 2 external monitors. I tried everything.


That model has built-in support for a single external display in addition to the built-in display. If you want a second external display, you have to use third-party DisplayLink technology, which may be good enough depending upon your use case, but is inferior to built-in support.


Parent poster could also connect additional displays:

- via Sidecar on an iPad
- via AirPlay on a display of their choosing

Just mentioning workarounds, not excusing things.


I had an M1 MBP and have used three external monitors from day one. Used a StarTech 3x4K hub for that. It uses DisplayLink, which is software-based rendering. Worked like a charm for dev work. You likely couldn't tell it apart from directly connected monitors unless you wanted to push rapidly changing content (like scrolling through logs at 60 screen-heights per second); then compression would indeed be visible. For work, videos and the occasional session of gzdoom it was a-ok.


Are you able to push 120Hz on both external monitors? I’m guessing not. Looked at 2 StarTech DP dongles and all permutations seem to tap out at 60Hz. Really hoping I’m wrong, would love a 120Hz solution if possible.


No, 60Hz max with the startech.


Damn. That's a bummer. I want a 3rd screen but I am pretty spoiled by 120hz now. It's so easy on the eyes.


I find it amusing that every single one of our high-speed standards (Ethernet, HDMI, PCIe, DisplayPort, USB) is basically just the same high-speed differential pairs under the hood with different framing.

I don't know exactly what we can do about this situation, but at some point it probably makes sense for all of them to converge into a multi-lane, differential-pair standard (or just support nesting each other arbitrarily).


IIRC Thunderbolt essentially is "PCIe but in a cable"


Thunderbolt actually includes a lot of other stuff that isn't just "PCIe in a cable" - like USB and DisplayPort streams in addition (at the same time) to the PCIe links.

There is also U.2 for shorter links. There are relatively trivial risers (not sure if it needs a chip or not, but either way they're under $30) that let you turn regular PCIe x4 or M.2 x4 slots into U.2 cables. It's intended for the enterprise space but if you just want "pcie in a cable" that's what it does.

ExpressCard actually also was a PCIe 1.1 x1 or 2.0 x1 link as well, and people used risers to break out to external GPUs just as people do with Thunderbolt. There are various eGPU setups ("GDC Beast" is one I remember?) that are still available for ye olde ThinkPad if you want more/better display connectors.

Also obviously miners use those shitty USB-based riser cables (they aren't USB, but use USB wiring?) with x1 connectivity.

I dunno whether any of those are "legal" according to the PCIe signalling requirements but in practice you can get short runs pretty reliably especially at lower speeds, and good PCIe riser cables can even do 3.0x16 or 4.0x16 capability.


I've heard it's more like a "complex proprietary MPLS style network layer that can carry PCIe and DP and whatnot".

They wouldn't be able to do daisy chaining if it was bare PCIe, but it does sound horrible.


That's my understanding as well.


Maybe we can call it a Universal Serial Bus


I like the idea! Let's start with v1.0, and then just let our PR team decide on the next versions


Nah, we can't go with 1.0. Bad for adoption. Since we can push 10Gbps easily, and maybe up to 40Gbps if we try hard, let's call the first version USB "UltraSpeed" and let the marketing guys work with that!


How many parallel differential pairs can you have before it's no longer serial? Hah.


You can have arbitrarily many, so long as the serial lanes are independent of each other. PCIe can have lots, for example. Because they are independent, they won't suffer from the clock-skew problems that made parallel signaling impractical at high speeds, and it's often not required to connect all of the wires (e.g. an x16 card hanging out of an x8 or smaller slot).


That's just kicking the can down the road to even less efficient software that must now manage synchronizing all the lanes.


PCIe does this in hardware


Did you just call for a Joint Strike Fighter/F-35 of data bus standards?


USB went from being the universal bus to just being the universal connector. There are never any guarantees that your device (or cable!) actually supports the operation you’re trying to perform.


I'm fine with devices offering different capabilities. But the cables should be marked clearly... I have one of the Chinese drawing tablets, which has a convoluted "to USB-C" adapter. Apparently you can also drive it with USB-C directly, however only a manufacturer-provided cable (super short) fits the port. No one knows what cable you can use instead...


> But the cables should be marked clearly...

How about by using different ports? Imagine if instead of playing whack-a-mole with 4 USB-C ports on my laptop, 2 USB-C ports on my monitor and a bag of cables, I could plug a cable that fits into the port it fits into?


> How about by using different ports?

That would just be worse: the port being backwards compatible is a feature; being able to plug a 5Gbps device into a 20G-capable port is a good thing, as is being able to plug a self-powered device into a PD-capable port.

The issue is the difficulty of matching the cable with the device (and port) to ensure the upper bounds match. Obviously you could get the highest-rated cables, but there are tradeoffs in length and flexibility; using a 2x2 PD 3.1 cable to plug in a keyboard is just a waste of money and simply results in lower convenience (as the 2x2 PD 3.1 cable will be shorter and more rigid than a USB 2.0 low-power cable).

That could be an argument for "different ports" being backwards-compatible "poka-yoke" (mistake-proofing) ports, but historically these have had really shitty design (e.g. m.2 tabs are plain misdesigned) and they would have required significantly increasing the size of the plugs to make room for the hardware features.


> That would just be worse

I disagree. In my cable/charger box I have 4 USB-C chargers with 4 USB-C cables, and they're not interchangeable for all intents and purposes. I've ended up sharpieing which device they belong to rather than reading the specs printed on the back of the charger, because otherwise I end up trickle-charging devices (and thankfully I don't use a USB-C dock, or I would be in all sorts of hurt).

> The issue is the difficulty of matching the cable with the device (and port) to ensure the upper bounds match.

If that were the only problem then sure I would agree that things are worse, however in the real world manufacturers ship both devices and cables that are missing features, underperform, and are labelled as "USB 3"


A legal mandate that they have to be different colours would be the best way, I think.


Works well for USB 3.0 vs. 2.0/1.0, that’s for sure. Legally mandated I’m not sure, but yeah, colors for sure.


Yeah, I was imagining more like trademark law or something rather than anything USB-specific.


I'd like an easy way to query, diagnose USB cords.

They now contain their own chips. Right? So there's gotta be some kind of POST. Right?

If so, I should be able to ask the cord what it can do, whether it thinks it's broken, etc.


Huion? I've heard sanded-down (so they fit in the port) USB-C cables that support DP output work.


Yes. And unfortunately it seems that the "port" extenders you can buy don't support DP, and after I bought a seemingly DP-capable cable off AMZ and sanded it down (it didn't work) I stopped.


For what it's worth, my Huion tablet (non-screen) had what seemed to be a standard USB-C cable on both ends. Good to know if you ever wanted to try a different Chinese brand.


Some crap devices from China only accept a specific voltage for charging (with the bundled charger), without PD negotiation, despite using a USB Type-C port.


There never was a guarantee that a USB A or micro B port supported USB 2.0 either. There are still tons of keyboards and peripherals out there that run on USB 1.1 in your standard USB 2.0 ports.

The moment they added backwards compatibility, the USB confusion started. That's not a bad thing, the real problems are manufacturers being vague about what they do and do not support and the USB people renaming everything every other year.


> (or cable!)

Recently, I tried to find a DisplayPort-compatible cable and went through three (admittedly on the cheap side) cables that claimed to support it but didn't - in the end, I just bought a certified Thunderbolt 3 cable to avoid any ambiguity about whether they included all the wires or not.


Wires are only part of the problem.

A Thunderbolt 3 cable has two microcontrollers in the cable itself!


Yep. I use two monitors, both on displayport. Bought like 4 cables and one adapter to get a working combination :)


What cables/adaptors are you using currently? Looking for good solutions myself.


They're mostly no-name brands; the "top" brands aren't available here, probably because they're expensive.


USB-C seems to be hit and miss in general; it not only depends on the connector but also on the cable and what's on the other side. I use my M1 with a Lenovo USB-C dock and theoretically it's capable of piping 4K at 60 Hz to my monitor, but in order for that to work I'd need to buy cables that cost half as much as the dock, so I just plug the HDMI cable directly into the MacBook (and am thankful they included an HDMI port at all). Also, you have to know which USB-C port to plug stuff into as they don't seem to be equal in terms of functionality, or maybe it's just the cable? In any case they're not marked differently. But yeah, first world problems.


> I use my M1 with a Lenovo USB-C dock an theoretially it's capable of piping 4k at 60 Hz to my monitor, but in order for that to work I'd need to buy cables that cost half as much as the dock…

In case it's helpful, have you looked at Monoprice? They sell great USB-C 3.2 Gen 2 cables for $10 (0.5m).

https://www.monoprice.com/product?p_id=27923


I wonder whether it's a software or hardware problem. Let's say you have the Thunderbolt connector: shouldn't all those supported standards (esp. the USB ones) then just be software problems? This doesn't make it easier for Apple's side, it might still be a huge undertaking, but then it would be upgradable in principle.


I'm wondering the same thing. I guess someone would have to run the same tests on a Mac running Asahi Linux and compare results.

Any ideas of what software to use to measure this on Linux?


The article makes it pretty clear that they think this is a hardware problem.


Hmm where do you get that? I don't get the impression from reading the article.


I have two USB 3.2 drives that don’t work at all on my M1, but work fine on my Intel Mac. On the M1 they show up in Disk Inspector but any attempt to read from them, or even eject them, gets the bouncy ball.


Is this perhaps just a file system formatting issue? Disk Utility (there is no Disk Inspector) should tell you how they have been formatted. You mentioned reading from them. Can you erase them? Tried different cables?


Again: works fine on Intel Mac.


Sorry, missed that. Looks like lots of others are having issues, especially with the new Studio.

https://www.macintouch.com/post/23506/m1-mac-port-problems/


The block size used isn't specified, and that has a significant impact on performance. The exact model drives used aren't specified, but elsewhere in the comments someone references the Crucial X8. With 32KB block sizes, the transfer speed is ~520 MB/s. https://www.anandtech.com/show/16186/crucial-portable-ssd-x6...
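
If anyone wants to check the block-size effect on their own drive, here's a minimal sketch (the path is a placeholder, and the test file should be large and created beforehand so caching doesn't dominate the result):

    import os, time

    PATH = "/Volumes/ExternalSSD/testfile.bin"  # hypothetical test file on the external drive

    for block_size in (32 * 1024, 1024 * 1024):  # 32 KB vs 1 MB reads
        fd = os.open(PATH, os.O_RDONLY)
        total, start = 0, time.monotonic()
        try:
            # read the whole file sequentially in fixed-size chunks
            while chunk := os.read(fd, block_size):
                total += len(chunk)
        finally:
            os.close(fd)
        elapsed = time.monotonic() - start
        print(f"{block_size // 1024:>5} KB blocks: {total / elapsed / 1e6:.0f} MB/s")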


The author is a dev who publishes multiple high-end storage applications. I'm not sure why you thought questioning his judgement over something so basic would be a good idea.


Well, it isn’t anything physical, because connecting an M1 Mac to another Thunderbolt device delivers the advertised speeds.


The inability of the USB standards body to stick to a versioning scheme beggars belief: what in the world could "USB 3.1 Gen 2" possibly mean? Versions are free, why not simply name it "USB 3.2" and make the next one 3.3?

"Gen 2" strikes me as the "Essay FINAL (final version 2).docx" of standards revisions.


> what in the world could "USB 3.1 Gen 2" possibly mean? Versions are free, why not simply name it "USB 3.2" and make the next one 3.3?

USB 3.1 Gen 1 is the straight rebranding of USB 3.0, USB 3.1 Gen 2 is the "original" 3.1, which is 3.0 updated to a 10Gbps signalling rate (from 5), and a more efficient encoding (128/132, versus 8/10).

But wait, there's worse!

USB 3.2 Gen 1 is the same thing as USB 3.1 Gen 1

USB 3.2 Gen 2 is the same thing as USB 3.1 Gen 2

USB 3.2 Gen 1x2 is 2 lanes of Gen 1 (both G1 and G2 are single-lane), so 2x5Gbps at 8/10

USB 3.2 Gen 2x2 is 2 lanes of Gen 2, so 2x10Gbps at 128/132.

Don't ask me why you'd want to use 1x2, I've no idea, and apparently USB-IF doesn't either (they don't even recommend a marketing name for it).
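
To keep it all straight, the whole renaming exercise fits in one small table (a sketch; the figures are the nominal signaling rates from the specs, not real-world throughput):

    # USB 3.x naming aliases -> (lanes, per-lane signaling rate in Gbps, line encoding)
    USB3_MODES = {
        "USB 3.0":         (1,  5, "8b/10b"),
        "USB 3.1 Gen 1":   (1,  5, "8b/10b"),     # same thing as USB 3.0
        "USB 3.1 Gen 2":   (1, 10, "128b/132b"),  # the "original" 3.1
        "USB 3.2 Gen 1":   (1,  5, "8b/10b"),     # still the same thing as USB 3.0
        "USB 3.2 Gen 2":   (1, 10, "128b/132b"),
        "USB 3.2 Gen 1x2": (2,  5, "8b/10b"),     # two lanes, USB-C only
        "USB 3.2 Gen 2x2": (2, 10, "128b/132b"),  # two lanes, USB-C only
    }

    for name, (lanes, gbps, encoding) in USB3_MODES.items():
        print(f"{name:<16} {lanes} x {gbps} Gbps ({encoding})")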


> USB 3.1 Gen 1 is the straight rebranding of USB 3.0

Why? To solve what exactly? In a sane world, it'd be major.minor, with nothing appended at the end.

Allowing marketers to take over everything was such a terrible mistake.

At this point I myself just don't differentiate USB versions beyond 3.0. 2.0 is the slow one, 3.0 is the fast one, anything newer is "so fast I won't ever saturate this much bandwidth anyway". Thunderbolt also falls into the last category.


IIRC, USB 3.1 was an update to the spec to allow USB Type C connectors. USB 3.0 specified the old type-a connector on one end and one of the weird oversized type-b connectors on the other end.

Maybe not relevant as a consumer to have a difference in the spec version for that, but if you’re a manufacturer trying to implement it, it’s worth a bump to the minor version number.


I have some Android phones that have a USB-C port but only support 2.0 speeds (as per the macOS system info USB tree thing). Which version is that then? I've always assumed that the physical layer of the protocol and the connectors themselves were separate specs and you could have USB protocol running over any kind of arbitrarily shaped connector. And no protocol over a USB(-A) connector, because those are de-facto low-power DC outlets now, among other things.


The USB-C connector has a dedicated legacy USB 2 pair that is always available. This is the same as an upgraded (usually blue) USB-A connector with the addition of mirrored pins.


It's 3.1 or newer, full speed device.

Why normal people should care about the spec version is weird to me.

The spec does specify the connectors for using the USB logo and claiming compliance. But if you don't care about compliance then use whatever connector you wish!


Note that in my experience macOS (M1) is pretty bad with USB devices; I have a lot of things that just don't work properly on the Mac but work on Windows and Linux (ebook reader and phones, for example).


Very likely the Android device doesn't advertise any particular USB spec. If anything it might have a logo for the minimum spec it supports. You can put USB controllers in a device and not worry about any USB-IF if you don't use any USB branding on the box, IIRC any licenses owed are covered in the price of the ICs.

The logical connection in USB doesn't care about the physical connection. Your phone could just have some soldered leads for all the protocol cares.


I think that version is called "we don't give a fuck about what the USB-IF thinks"


I'm pretty sure 2.0 C cables are entirely legal by the standard.

Unlike charge-only cables...


3.1 came out a year before C.


Tricking people into buying new hardware while incurring zero marginal cost of development, sales, distribution? Sounds like a winner. /s


To save tech nerds who know the difference from unnecessary expenses. Let others spend more money into the economy. Win for us. /smirk


> Allowing marketers to take over everything was such a terrible mistake.

How else could it possibly play out? It's nice to think that NIST or a similar U.S. government agency could standardize everything, but outside of industries that are mostly government funded (defense, etc.) or deeply connected to government funding (medical research), the forces of marketing are going to take over eventually and at scale. It's not like the government can or will force consumer electronics marketing to be honest, at least in the U.S. and Europe.

I was a little disappointed, for example, to learn during the HDMI 2.1a fiasco that HDMI (the organization/legal entity responsible for the standards), VESA, etc. are loosely organized groups from the companies in the industries. So if they want to rename HDMI 2.0 to 2.1a to make their old products look better, nothing's really stopping them.


> Allowing marketers to take over everything was such a terrible mistake.

I think the USB naming is the result of not using marketers.


> I think the USB naming is the result of not using marketers.

No, it's specifically to give marketers what they want, which is the ability to advertise support for the latest version of the spec without actually having to improve the product.

It's not USB-IF's marketers that want this, it's the vendors who pay USB-IF whose marketers want this.


Every backwards compatible spec would have this "problem".

Either a new version of a spec says that all existing implementations are out of spec, or it add new optional features and existing implementations are that don't support them are still in spec. There's really no other option.


This is not an explanation of why the naming is a disaster. USB 1 devices work on USB 2 devices without there needing to be some insane renaming. Ditto USB 2 devices on USB 3 devices.

This naming was an entirely unforced error, and should be called out as such at every opportunity to keep it from happening again.

Edit: if you want another example of a spec with good names, checkout WiFi 6. They realized that no normal person could reason about whether 11n or 11ac was better, so they changed the naming scheme to make it obvious what was going on.


> Edit: if you want another example of a spec with good names, checkout WiFi 6. They realized that no normal person could reason about whether 11n or 11ac was better, so they changed the naming scheme to make it obvious what was going on.

Moving to WiFi 6 was a great idea!

Which they immediately made a mess of.

What version of WiFi has 6GHz support? 6e of course.

How about uplink MU-MIMO? Oh that's WiFi 6 Wave 2.


Because being able to differentiate uniquely different offerings by different designations is only of value to marketers?

There are differences that warranted the incrementing of the numbering - mainly the vastly superior USB-C connector over the gawd awful USB-A, Micro, Mini and that other one that I can never remember the name of.


> Because being able to differentiate uniquely different offerings by different designations is only of value to marketers?

What are you talking about? The whole point we're discussing here is that thanks to marketers we have the opposite of that where the exact same product has had three different marketing versions associated with it over the last few years without any actual difference.

USB naming/numbering was perfectly reasonable up to 3.0. Higher speed revisions got a new number and lower speed revisions kept their old one.

Then with 3.1 this all changed. Suddenly the same port that was a USB 3.0 port yesterday is now a USB 3.1 Gen1 port and the new 10 gigabit speed is USB 3.1 Gen2.

Then 3.2 comes along and suddenly those ports are now 3.2 Gen1 and 3.2 Gen2, with the new additions of 3.2 Gen1x2 and 3.2 Gen2x2 modes.

What would have been wrong with continuing the same way they had already been doing things and just calling the SuperSpeed+ mode "USB 3.1" while leaving the old ports referred to as USB 3.0, then when dual lane came around that is "USB 3.2"? It's straightforward, easy to understand, and doesn't have any chance of confusing someone in to thinking "well this thing has USB 3.1 Gen1 while this other one only has USB 3.0, I guess the first one's better".

> There are differences that warranted the incrementing of the numbering - mainly the vastly superior USB-C connector over the gawd awful USB-A, Micro, Mini and that other one that I can never remember the name of.

The type C connector is independent of the versioned standards. The USB 3.1 spec was released in July 2013 while the Type C connector spec didn't come out until August 2014. USB 3.1's 10gbit SuperSpeed+ mode is allowed and supported on the same USB 3.x Type A and Micro-B connectors as USB 3.0 SuperSpeed uses. USB 3.2's dual link mode of course requires a Type C connector at both ends.

USB 4 actually requires a Type C connector, while it supports single lane operation for setup and fallback it requires all devices support dual lane. The 10 gigabit mode also uses different encoding than SuperSpeed+ so it's not just another rename.


> I think the USB naming is the result of not using marketers.

This seems closer to the truth to me.

Being better at naming things is part of the reason that Apple became so successful.

"What do you want for your birthday, little Timmy? Do you want Apple AirPods, or Sony WF1000XM4s?"


AirPods, Pro, Gen 1, 2 or 3? iPad, Air, Mini, or Pro, which generation? And for used ones, where does it say on those products which model it is?

Apple also sucks at naming and labeling things.


TBH those names are the good ones.

Which Macbook Air do you have? Today, sure, you can say "M1". But earlier models did not have official names requiring Apple themselves to describe them in documentation as something like "MacBook Pro (13-inch, 2017, Two Thunderbolt 3 ports)"* or worse, resort to the internal part number ("MPXQ2xx/A, MPXR2xx/A, MPXT2xx/A, MPXU2xx/A" in that case).

Not that "Sony WF1000XM4s" is anything to write home about either.


Yes, I'll admit I didn't use Apple laptop naming as an example because I simply can't remember how it works.

Related, I have two old Mac Minis here and I can't tell which one is the several years newer one without finding a tiny model number printed on the bottom of both, looking them up, and comparing the specs.

I for one have no idea which iPads are better than which other iPads. Is Air better or worse than regular? How does that relate to Mini? In Apple laptops, Air was the smaller laptop, but now there's also Mini?

At least companies leaning heavily into number-based schemes tend to make models with larger numbers the fancier more expensive ones. Acme Frobnicator 6400 is generally a lot better than Acme Frobnicator 3000.


The spec got a version bump from 3.0 to 3.1 and the new version of the spec added an additional transfer mode called "Gen 2".

From that point of view it makes a lot of sense. You need to differentiate the spec version from the features talked about in the spec.

From a marketing standpoint, it's terrible as most people need a simpler name and are incorrectly choosing the spec version. Marketing labels should just be "USB Gen 1" and "USB Gen 2". If you see "Gen 2" you know the spec version was at least 3.1, though that shouldn't even matter.


> You need to differentiate the spec version from the features talked about in the spec.

Maybe make a separate spec version for each transfer mode and require backwards compatibility. It's easy for the average person to understand "the higher the number, the newer the standard, the faster it goes". It's not as easy when there are 3 numbers, 2 of which are optional. If you have to use 3 numbers, you could at least make it look like a version number, so "3.1 Gen 2" would become simply 3.1.2. These are easy to understand too because everyone has at some point installed software updates.


> From that point of view it makes a lot of sense

From the point of view of the millions of people who have to use the damn things and probably aren't even aware the name comes from a spec doc somewhere, or even what a spec is, it makes absolutely no sense and the fact that the people who wrote the spec didn't make some effort to account for this means they failed badly.


Part of the reason, I assume, is that they have both newer and more expensive editions. A port could be using the latest spec and features but not implementing the full feature set of the more expensive ports.

Products like Intel CPUs have tiers like i5 and generations. This is kind of like what USB did, but so much worse.


It is a spec, not software. USB 3.1 includes all the details on how to build and talk to devices that 3.0 did. If a device built for 3.0 suddenly wasn’t 3.1 compliant, it would be a huge engineering failure on behalf of the USB-IF.

You are likely thinking of named features and of conformance marks, things like SuperSpeed 10 Gbps and the like. People outside hardware developers aren’t expected to know USB 3.0 vs 3.1, and in fact should demand to not have to know.


If most of the USB 3.1 spec is just USB 3.0, and you only need to refer to the USB 3.0 parts to build your device, then why not just build your device solely according to the USB 3.0 spec, and call it a USB 3.0 device?

Compare/contrast: the 802.11 specification series. Each successive specification only lays out the unique new modes; with the expectation that devices support multiple specifications, where each specification lays out its own supported modes. So if you e.g. build a device that can only do 802.11n things, and not 802.11ac things, then you’ve built an 802.11n device. But 802.11ac peers — which are really “802.11ac + 802.11n” peers — will still be able to talk to it in 802.11n mode — but they do this according to the 802.11n spec that they also support; not according to the 802.11ac spec.

> People outside hardware developers aren’t expected to know USB 3.0 vs 3.1

Why not? How else should you know whether your device can go at its full advertised speed?


> How else should you know whether your device can go at its full advertised speed?

Just say what speed it supports using which technology. SuperSpeed at X Gbps, SuperSpeed+ at Y Gbps, high speed at 480 Mbps. You can only use a speed both support.


But in this case, say that a host supports only 3.2 Gen 2(x1), and a client device supports only 3.2 Gen 1x2. Both would advertise a relatively-higher speed, but it wouldn't just be that you'd get the lower of the two speeds; rather, you'd get a speed lower than either claim. And IMHO this isn't some weird exception; standards are often like this. This is is the reason people care to know exactly which subprotocols their devices are capable of negotiating, rather than just knowing the generation of support.
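
To make that concrete, here's a toy model of the fallback (the mode sets are invented for illustration, assuming the link simply uses the fastest mode both ends share):

    # Each side lists the (lanes, per-lane Gbps) modes it supports; the link
    # runs the fastest mode common to both. Mode sets below are illustrative.
    def negotiated_gbps(host_modes, device_modes):
        common = host_modes & device_modes
        return max(lanes * rate for lanes, rate in common)

    host   = {(1, 5), (1, 10)}   # "Gen 2(x1)" host: single lane at 5 or 10 Gbps
    device = {(1, 5), (2, 5)}    # "Gen 1x2" device: one or two lanes at 5 Gbps

    print(negotiated_gbps(host, device))  # 5 -- lower than either side's headline 10 Gbps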


But that's a lot of words and they already knew people don't say those terms. 2, 3, 3.1, are much easier to say, and there was no consumer-benefitting reason to undermine those terms.


I told my wife 2.0 is where you stick the mouse and 3.0 the pendrive


Why do you think the marketers screwed this up and not technical people? Doesn't seem like the style of screw-up they usually make.


> Why? To solve what exactly? In a sane world, it'd be major.minor, with nothing appended at the end.

It's always important to get acquainted with a topic before succumbing to the desire to mindlessly criticize in ignorance.

A quick search in Wikipedia shows that USB 3.1 specifies a brand new transfer rate, dubbed SuperSpeed+ transfer mode, "(...)which can transfer data at up to 10 Gbit/s over the existing USB3-type-A and USB-C connectors (1200 MB/s after encoding overhead, more than twice the rate of USB 3.0)".

This is a different transfer mode than the SuperSpeed transfer rate specified in USB 3.0.

To allow implementations to support both transfer speeds, the implementations that supported SuperSpeed transfer rates were dubbed USB 3.1 Gen1, while the implementations that supported SuperSpeed+ transfer rates were dubbed USB 3.1 Gen2.

https://en.wikipedia.org/wiki/USB_3.0

To me that's a very convenient and customer-centric way of putting together a standard. So there's a minor addition to a major release. Is that left to an annex? No. Do we fork standards? No. We just release a backwards-compatible v3.1 standard that in practice deprecates v3.0 and thus allows the whole industry to avoid piling up the list of current standards that we care about.


The problem is that the version of the standard is conflated with the version of the port. Here is a better system:

1) The standard has a version number. I would have suggested semver, except that USB will always be backwards compatible within the same physical port, so there's no need for separate major/minor versions. We can just use an incrementing number 1,2,3 etc.

2) The standard gives each physical port layout a name, eg. A, B, C. Port layouts can be added/removed in new versions of the standard.

3) The standard specifies and names various transfer rates, eg. T1, T2, etc with new names being added in new versions of the standard. (eg. version 1 defines speed T1, version 2 defines speeds T1, T2. etc.)

4) Manufacturers are not allowed to use the version of the USB specification in their marketing material at all. A port is not "USB 3.1", it's "USB C T2".

The version of the spec only serves to confuse customers because the whole point is that versions are backwards compatible. The only thing the customer cares about is the physical port layout and what features/transfer rate are supported over that port.

So the marketing names would be:

USB A T1, USB A T2

USB C T1, USB C T2, USB C T3

As a customer, I can easily see that I can't plug a "USB A T1" into a "USB C T1" because they are different physical ports. I can also see that "USB C T2" is faster than "USB C T1".

Admittedly, things are slightly more complicated because the transfer rate is not the only "feature"; we also have to consider what kinds of data can be transferred. We can extend the full marketing name to:

USB C T1 (audio)

USB C T1 (audio,video)

etc.

Obviously this is too long to have on the port itself, so we can stick with just the port type and transfer speed (eg. USB C T2) as the short name.
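
A tiny sketch of what those labels could look like in code (the tier names and speeds here are invented for illustration, not part of any real spec):

    # Hypothetical labels under the proposed scheme: physical port letter plus
    # transfer tier, with the spec version kept out of marketing entirely.
    TIER_GBPS = {"T1": 5, "T2": 10, "T3": 20}  # invented tiers

    def label(port: str, tier: str, features: tuple = ()) -> str:
        extra = f" ({','.join(features)})" if features else ""
        return f"USB {port} {tier}{extra}"

    print(label("A", "T1"))                      # USB A T1
    print(label("C", "T2", ("audio", "video")))  # USB C T2 (audio,video)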


I think you just recreated what the USB Forum is already doing?

The standard as a whole has a version number that increments with the entire standard document.

Port layouts are named A, B, C (and deprecated Micro-A, Micro-B, and Mini-A).

The transfer rates are now named: Gen 1, Gen 2, Gen 1x2, and Gen 2x2. (These names are stupid, but they are names. Gen 1 is all the transport capabilities from USB 1.0 to USB 3.0 {including the now "classic" "superspeed"} and Gen 2 is new starting with USB 3.1 and the even worse named Gen 1x2 and Gen 2x2 are new starting with USB 3.2).

> Obviously this is too long to have on the port itself, so we can stick with just the port type and transfer speed (eg. USB C T2) as the short name.

Marking the port type is redundant because USB has been good about giving every port type a very different physical silhouette. USB A ports look nothing like USB B ports look nothing like USB C ports. (Arguably there are legitimate complaints that USB Micro-A and USB Mini-A had some visual at a glance issues in practice, but you couldn't insert the wrong cable into the wrong port.)

So yeah, that just leaves finding a better way to mark the ports (and cables!) with the transfer speed. "Gen 1", "Gen 2", "Gen 1x1", and "Gen 1x2" all take up a lot of space and maybe aren't the friendliest names to mark on/near ports/cables, but are in theory potentially the only bit of information that ports need to be marked that cannot be assumed by physical port shape. (ETA: Which the USB-IF Marketing names like SuperSpeed 40 and logos like an arc around the number 40 next to the USB symbol are designed to do, though the fact that people don't recognize them and they don't seem common enough in practice that people know they exist is a marketing failure more than a technical failure of the standards.)


> I think you just recreated what the USB Forum is already doing?

If they were doing this then there wouldn't be any ports described as "USB 3.1" (see point 4). The version of the standard is not just irrelevant but actively misleading to use in marketing material.

> Marking the port type is redundant because USB has been good about giving every port type a very different physical silhouette.

On the port itself, sure, but if I'm buying a laptop online then it's pretty important what type of port it has, so saying "USB C T2" is a lot more useful than just "USB T2". How would I even know what silhouette the port has?

The two most important pieces of information are the type of port and the transfer speed. There are only a small number of possibilities for each, and the latter is a purely numeric value, so a letter/number pair is sufficient.


> If they were doing this then there wouldn't be any ports described as "USB 3.1" (see point 4). The version of the standard is not just irrelevant but actively misleading to use in marketing material.

That gets to the edit I made at the end. The USB-IF Marketing group has never suggested marketing ports/cables as "USB 3.1". They've always preferred the "SuperSpeed {Bandwidth}" branding over "USB {SpecNumber}". There's a chart here: https://en.wikipedia.org/wiki/USB4#USB_3.x_.E2.80.93_4.x_dat...

(Admittedly, they are mixing messages by using "USB4 SuperSpeed {Bandwidth}" as marketing names for "SuperSpeed 20" and "SuperSpeed 40".)

But the fact that just about no one uses the USB-IF Marketing Names and instead reverts to easily confused "USB {SpecNumber}" branding is an interesting marketing failure by USB. (Not necessarily a technical failure of their specs.)

> On the port itself, sure, but if I'm buying a laptop online then it's pretty important what type of port it has, so saying "USB C T2" is a lot more useful than just "USB T2". How would I even know what silhuette the port has?

That's a slight goal post move from your previous comment about what to mark Ports/Cables. Sure, if you need to mark online materials you need to include port types. But it's still redundant on a physical port or cable to mark the port type when you are staring right at the port type.


Adding a new transfer rate seems like a reasonable place to bump the minor version number of a protocol. After reading all of that I'm even more convinced that it should have just been USB 3.2.


> Adding a new transfer rate seems like a reasonable place to bump the minor version number of a protocol. After reading all of that I'm even more convinced that it should have just been USB 3.2.

I'm not sure you read any of that. I mean, they bumped the standard version to 3.1 from 3.0 after adding a new transfer rate.

Also, USB 3.2 was bumped up from 3.1 after adding two new data transfer modes.

I also add that the naming scheme is quite obvious once you start to think about it.

* USB 3.0 only supports the one SuperSpeed data transfer mode.

* USB3.1 was released, and it specifies two distinct data transfer modes: the legacy Gen1 mode and the novel Gen2 mode.

* USB3.2 is released, and it supports four transfer modes: the legacy Gen1 and Gen2 modes from USB3.1, plus two new dual-lane modes (Gen 1x2 and Gen 2x2) that double Gen1 and Gen2 respectively.


But then why rename 3.0 to 3.1 then 3.2? And now with USB4, everything is USB4. If I remember the upcoming standard correctly, your cheap USB-C cable only doing 420 Mb/s (USB 2 speeds) is now USB4! For free!

If a USB 3.0 cable can suddenly become USB 3.1 (or 3.2) overnight, then what's the point of versions? And what's with "Gen #" at the end? Because a consumer is easily going to be able to see that a USB 3.2 Gen 2x2 is better than a USB 3.2 Gen 1 cable? Or maybe the sellers will just not advertise the "Gen #" portion? According to the Q&A section of this Samsung external drive[0], the difference between this and a USB 3.1 drive is nothing but the model number.

</rant>

The USB Consortium has been overrun by marketing that thinks that making things more confusing (read: tricking) is better for the consumer.

[0]: https://www.amazon.com/SanDisk-256GB-Extreme-Solid-State/dp/...


> But then why rename 3.0 to 3.1 then 3.2?

I honestly have no idea what you're trying to ask.

Keep in mind that:

* USB3.0 was released in 2008.

* USB3.1 was released in 2013.

* USB3.2 was released in 2017.

Each standard is standalone, and specifies all of its transfer modes. I wouldn't be surprised if each of these specs also included fixes, and thus technically would represent different specs.


I'm not talking about the standards, but the marketing names. "USB 3.0" speed is now "USB 3.2 Gen 1" (or "USB4 Gen 1") speed just because the USB Consortium said so.


3.0 wasn’t the speed or the feature. It was an engineering spec with a lot of features, optional and required.

3.1 took 3.0’s features and added more optional features to make a larger document.

3.2 likewise.

You are likely thinking of the actual feature marketing names. Things like USB-C connectors and Superspeed 20 Gbps. These do not change release to release. They also might require conformance testing to use those names.

I actually blame the current mess on PC motherboard manufacturers for wiring up a crapload of non-conforming ports, like a “USB-A Gen 2x2” with a red plastic tab. IMHO they did this because nobody wanted to take the risk of actually pushing toward USB-C. It left them without a way to use a certified/marketing name, hence pretending engineering names were appropriate.


> I'm not talking about the standards, but the marketing names. "USB 3.0" is now "USB 3.2 Gen 1"

No, it's not.

If you implement it from the legacy USB 3.0 spec then you don't care about generation names. It's SuperSpeed, and that's it.

If instead you implement it to comply with the USB3.1 spec then you have two separate transfer modes specified in the 3.1 standard: the legacy Gen1 and the newly-added Gen2.

If instead you implement it based on the USB 3.2 spec then that standard specifies four distinct transfer modes: the legacy Gen1 and Gen2 modes carried over from USB3.1, plus the two new ones.

> just because the USB Consortium said so.

Who exactly do you think the USB consortium is? I mean, how do you think a standard is put together?


> No, it's not.

Yes, it is.

> If instead you implement it to comply with the USB3.1 spec then you have two separate transfer modes specified in the 3.1 standard: the legacy Gen1 and the newly-added Gen2.

No. If your device only supports 5 Gb/sec speeds, it's USB 3.0, yes. "SuperSpeed" and all that jazz. But with USB 3.2, it's now (magically) USB 3.2 Gen 1[0]:

> Under this rebranding, the standard previously known as USB 3.0 or USB 3.1 Gen 1 will now be called USB 3.2 Gen 1. Furthermore, the standard previously known as USB 3.1 Gen 2 will now be renamed to USB 3.2 Gen 2.

Yes, there's different transfer speeds, but if you only support 5 Gb/sec, you're a "Gen 1" device. If you're arguing that implementing USB 3.1 mandates support of the 10 Gb/sec mode, you're wrong. If that was the case, there'd be no point of this "Gen" nonsense because a 20 Gb/sec device would just be "USB 3.2" and a 5 Gb/sec device would be "USB 3.0".

Remember the whole debacle a few weeks ago about HDMI 2.1 essentially just being HDMI 2.0? Why would they do that other than to confuse? The only reason for this (USB) stupid naming is to confuse consumers into thinking that their 5 Gb/sec device is "top of the line" because it supports "USB 3.2" or "USB4".

For example, here's a "USB 3.2 Gen 1" flash drive.[1] It's a 5 Gb/sec flash drive, but it's 3.2 instead of the more appropriate 3.0. Why? To confuse.

> Who exactly do you think the USB consortium is? I mean, how do you think a standard is put together?

I think it's a consortium of companies. Many of which have marketing teams. And I'm right.[2]

[0]: https://www.msi.com/blog/new-usb-standard-usb-3-2-gen-1-gen2...

[1]: https://www.amazon.com/SanDisk-128GB-Ultra-Flash-Drive/dp/B0...

[2]: https://www.usb.org/members


> No. If your device only supports 5 Gb/sec speeds, it's USB 3.0, yes.

That's not how things work.

Devices are implemented while targeting a standard.

If you implement a USB 3.0 device then you do not support any data transfer mode capable of doing more than 5Gb/s. If you're a customer looking for more than 5Gb/s and you see that a device is only USB3.0 then you already know that it won't cut it.

That's the whole point of this submission. M1 macs don't support USB 3.1, only USB 3.0. Why? because they patently don't support the transfer speeds made possible by the new data transfer mode introduced in USB 3.1.


> That's the whole point of this submission. M1 macs don't support USB 3.1, only USB 3.0. Why? because they patently don't support the transfer speeds made possible by the new data transfer mode introduced in USB 3.1.

M1 macs support USB4.

USB specs define multiple transmission modes and speeds from port to port over a cable that one can support. They define alt modes you can support.

Separately there are conformances and marks. E.g. if your cable supports transfer according to USB 3.2 Gen 2x2 in our lab, you can _market it_ as Superspeed 20Gbps, put the logo on the connectors, etc.

So the argument would be that Apple M1 doesn’t support Superspeed 10Gbps.

Which, as an aside, I'll need a lot more than one person testing with a single (likely non-conformant) cable before I will believe it.


It's the name of the standard. I'm not sure that those names were ever meant to be user-facing, but unfortunately they are. If device-makers choose to support a newer standard (say 3.2), that standard needs to support older speeds (Gen 1), in addition to newer speeds (Gen 2).


But USB 3.0 supported USB 2.0 and 1.0/1.1 speeds already without this "generation" garbage. If I plugged a USB 3.0 cable (9 pins) into a USB 2.0 (4 pin) hub, the device still worked at the lower speeds. I could even plug it into a USB 1.1 hub, and it would just work. I didn't need "USB 3.0 Gen 4"[a] (3.0) to know that it would work at "USB 3.0 Gen 2"[a] (1.1) or "USB 3.0 Gen 3" (2.0) speeds.

[a]: Made up names; USB 3.0 didn't have this mess


Sure it did.

You have USB 3 Low Speed and Full Speed (aka USB 1), USB 3 High Speed (aka USB 2), and USB 3 SuperSpeed.

Expecting the USB consortium to give things useful names or at least let them keep their names we got used to is the same madness as expecting a singular useful version number from anything Sun derived.

Anyway, according to the article everything links at USB 3.1 Gen 2 SuperSpeed+, but then usually doesn't send data at anywhere near the link rate, so that's now an extra layer of confusion.


That was a different mess that people ignored entirely.

The x.y numbers were not a mess until 3.1


> But USB 3.0 supported USB 2.0 and 1.0/1.1 speeds already without this "generation" garbage.

No, not quite. What do you think the USB3.0 SuperSpeed is? Why, a brand new transfer mode.

> If I plugged a USB 3.0 cable (9 pins) into a USB 2.0 (4 pin) hub, the device still worked at the lower speeds.

You'd be glad to know that nothing changed in that regard with USB3.0, 3.1, and 3.2.

In fact, the whole point of this submission is to show that M1 Macs only achieve a lower data transfer speed, unlike the new Mac Studio, thus proving that the M1 Macs don't support USB 3.1 Gen2, aka SuperSpeed+.


You keep dancing around my arguments. The issue isn't that things have changed; it's that they've changed in a way that makes things confusing for consumers. Go ask a random person on the street which is better: "USB 3.2 Gen 1 or USB 3.0?" I guarantee you'll find people thinking "USB 3.2 Gen 1" is better because it's a bigger number. But despite that, they're the exact same thing: 5 Gb/sec ("SuperSpeed").


> You keep dancing around my arguments.

No, not really. Feel free to point out exactly which argument you feel was ignored.

> The issue isn't that things have changed; it's that they've changed in a way that makes things confusing for consumers.

That seems to be the source of your confusion: nothing has changed. Each USB spec is backwards compatible and specifies the same data transfer modes.

And there is no confusion: if you pick up a USB2 data storage device you know beforehand it won't support SuperSpeed. If you pick up a USB3.0 device you know beforehand it won't support SuperSpeed+. If you pick up a USB3.1 device you know beforehand it won't support SuperSpeed+ 2x or 4x.

The whole point of the submission is to call out that M1 macs don't support USB3.1 unlike the new Mac Studio.

The article also clearly states that Apple doesn't actually advertise USB3.1, just USB3.


> Feel free to point out exactly which argument you feel was ignored.

The retroactive renaming of speed+versions. I'm not talking about the Mac.

> If you pick up a USB3.1 device you know beforehand it won't support SuperSpeed+ 2x or 4x.

My whole argument is that this confusion wouldn't be an issue if the USB Consortium had reserved USB 3.1 for 10 Gb/sec speeds exclusively. In other words, this:

    3.0: 5 Gb/s  "SuperSpeed"
    3.1: 10 Gb/s "SuperSpeed+"
    3.2: 20 Gb/s "SuperSpeed++"
That, and that alone (with none of the "Gen" nonsense) would avoid confusion. Then, if I pick up a USB 3.1 device, I would know it's 10 Gb/sec "SuperSpeed+" without having to use a stupid "generation" number. But no, the USB Consortium decided to deprecate 3.0 and 3.1 because all new devices are "3.2 Gen whatever". That's confusion.


Versions are not speeds.

3.2 continues to describe everything in 3.0, which means it continues to describe how to make devices supporting 5 Gbps over USB-A/B.


Well, the argument is that versions not being speeds anymore is the problem and it would've been easier if they were. Like they are in Wi-Fi for example.


If you use the smallest version number that fits your device, then you avoid confusion.


Your vendor should never have said 3.0 or 3.1 or 3.2. They should have said Superspeed.

There’s no point complaining that the engineering spec versioning strategy is confusing, when no consumers should have been exposed to it. The problem is squarely on manufacturers and the tech press.


The Mac Studio is an M1 Mac, so you might want to rephrase that part.


Huh?

> the implementations that supported SuperSpeed transfer rates were dubbed USB 3.1 Gen1, while the implementations that supported SuperSpeed+ transfer rates were dubbed USB 3.1 Gen2.

This is supposedly better than USB 3.0 (original standard), USB 3.1 (new standard, same SuperSpeed as USB 3.0), and USB 3.2 (new standard and also SuperSpeed+).


> This is supposedly better than USB 3.0 (original standard), USB 3.1 (new standard (...)

Not quite.

* USB 3.0 specifies SuperSpeed. No need to go on about Gen X given it's the first one introduced by USB 3, is there?

* USB 3.1 specifies two data transfer modes: Gen1 (the one introduced in USB 3.0) and Gen2 (the fancy new mode just introduced).

* USB 3.2 specifies the Gen1 and Gen2 modes from USB3.1, and adds two additional modes.


USB 3.0 also specifies lower speeds. They didn't need to use "Gen" then, and nothing changed to make them need it after.

Nobody cares if multiple speeds are "introduced by USB3". If 3.0 introduces one speed, and 3.1 introduces a different speed, people can understand that just fine.

Even if you do want to focus on "introduced by USB3", then you just need "3.[generation]" or "3 Gen [generation]". Not "3.[spec revision] Gen [generation]"


This is interesting, because I see your point and this is a good breakdown of the current naming scheme

… but it still seems indefensible. This comment almost reads like satire. I know standards are hard, really hard, but the Superspeed -> Superspeed+ naming in particular is ridiculous. Will this get simplified with USB4?


Wait, am I reading this right?

The old transfer mode was

> Superspeed

But the new mode is different, it’s name is

> Superspeed+

I’m sorry, I take your point in the first paragraph but I can’t find a way to wrap my head around how this system helps anyone.


Found the USB-IF member.


I hope USB4 fixes things.


It does give consumer friendly marketing names (USB4 20Gbps) to the different Gens, but the Gens are there: USB4 Gen 2x1, 2x2, 3x1, 3x2... https://en.wikipedia.org/wiki/USB4#USB_3.x_.E2.80.93_4.x_dat...


Until USB4 2.1 gen3 comes along.


Meh, imma wait for USB4, Gen 3.6, 4x4 with HDMI 2.2.1 support.

/s


On which HDCP won’t work on televisions manufactured earlier than yesterday.


I think you mean tomorrow. Just keep waiting for it. It'll be worth it. I promise.


It's going to be on the blockchain!


Not exactly, the generations have been renamed, so the first two are called USB 3.2 Gen 1x1 and USB 3.2 Gen 2x1 respectively. There is no such thing as USB 3.2 Gen 2.

But yes, they should have just had the version determine the gen, so a 3.0 port is always gen 1, a 3.1 port is always gen 2, and a 3.2 port is always gen 2x2. Or alternatively, skipped the version increment and marketed the new protocol as "USB 3 Gen 2", with no minor version. But now we have devices with "USB 3.1" printed on them even though they can only do 5 Gb/s.


Wait until you read about old scsi standards.


I quit reading halfway through this in frustration even though I really, really need this information.


Clear as mud.


The USB people decided to have new sub versions replace the older standard rather than extend it. USB 3.0 was replaced by USB 3.1, and then again replaced by USB 3.2. If you think of it in a bureaucratic sense, it starts to make sense (something receiving classification A under the Widget Law of 2012, classification B under the Widgets Etc. Law of 2014, and classification Å under the Widgets Standardisation Law of 2018). The problem is that they forgot to inform the general public how their versions work, and the general public also doesn't (and shouldn't need to) care about their weird versioning system.

As an end user, you shouldn't need to care about the generation and amount of lanes that USB provides. There's a logo on the box that has the USB shape, SS, and the speed. For textual representations, there's "SuperSpeed x gbps". If you see SS5 and your computer also has a sticker that says SS5 or SS10 or even SS20, you're good. That's how the USB guys foresaw the whole process going. Obscure use cases that may technically happen (like two gen 1 streams rather than a single gen 2 stream, aka USB 3.2 gen 1x2 rather than USB 3.2 gen 2x1) don't get a logo or a fancy name and you'll probably run into compatibility challenges for such a setup.

USB X.Y Gen ZxQ isn't meant for the general public; it's a technical designation. X.Y specifies which revision of the specification you're implementing against, Z specifies the signalling generation of each lane, and Q specifies the number of lanes available (for X.Y = 3.2 and above).

Like PCIe 4.0 1x16 vs 2x8, it's all designed as designations for domain experts. Use the SuperSpeed naming system like the USB people want you to and you'll probably be a lot less confused. They tried to simplify things (again) by dropping the SuperSpeed in USB 4, calling it simply USB 4 xx Gbps, which is probably for the best.
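
To put the whole decoder ring in one place, here's a rough sketch in Python. The engineering names and nominal link rates are from the USB-IF / Wikipedia tables linked elsewhere in this thread; the marketing strings follow the current "SuperSpeed USB xGbps" / "USB4 xGbps" style, so treat the exact wording as approximate rather than official box copy:

    # Rough decoder ring: engineering name -> (per-lane signalling gen, lanes, nominal link rate, marketing-style name).
    USB_MODES = {
        "USB 3.2 Gen 1x1": (1, 1, "5 Gbps", "SuperSpeed USB 5Gbps"),
        "USB 3.2 Gen 1x2": (1, 2, "10 Gbps", "(no consumer-facing logo)"),
        "USB 3.2 Gen 2x1": (2, 1, "10 Gbps", "SuperSpeed USB 10Gbps"),
        "USB 3.2 Gen 2x2": (2, 2, "20 Gbps", "SuperSpeed USB 20Gbps"),
        "USB4 Gen 2x2": (2, 2, "20 Gbps", "USB4 20Gbps"),
        "USB4 Gen 3x2": (3, 2, "40 Gbps", "USB4 40Gbps"),
    }

    for name, (gen, lanes, rate, marketing) in USB_MODES.items():
        print(f"{name}: Gen {gen} signalling x {lanes} lane(s) = {rate} -> {marketing}")

Note how the spec number on its own tells you nothing about speed; only the Gen ZxQ part (or the marketing name) does.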


The xSpeed naming was even worse. Full Speed isn't actually the maximum (full) speed (that's SuperSpeed#, for some number). High Speed snuck in there at some point. I'm surprised they never did Ludicrous Speed with plaid cables.

The USB4 xx Gbps is good. The thing people care about is "which one goes faster". Numbers make that easy, vague adjectives for ever faster speeds make that hard. And it looks to improve the "not all cables work for all applications" situation that USB 3 ended up with.


Thanks, very well explained. It's starting to make more sense now.


I don't think it's an accident. I think manufacturers pressured the body to ensure they can always claim compatibility with the latest standard. For that purpose, version number became irrelevant and the profile name ("Gen 2") became important.


Same thing happened with HDMI 2.1: all new features were defined as optional, so everyone who only supports the 2.0 feature set can claim HDMI 2.1 compliance.


It does make sense that all new features are optional. E.g. your TV/monitor does not need to support 8K resolution, and a Blu-ray player does not need to support VRR.

As long as the devices can communicate and agree on signal, resolution etc. it's all fine.

If anything is a mess, it's the cables, where you have to check carefully whether high resolutions, Ethernet, or whatever else you need actually passes through that cable.

If anything, it showed that The Verge (I think it was them who published that article) doesn't understand the purpose of the standard and intentionally tried to create controversy where there really isn't one.


Maybe that makes technical sense on paper, but in my experience, all these things quickly add up to it being impossible for a consumer or system integrator to know, beforehand, if something is going to work. I just set up a monitor with a built-in KVM, and it's practically impossible, going by the supported specs (if you can even find them) and whatever is or isn't in the manuals, to know a setup will work without just trying it. You have the monitor, webcam, two computers, keyboard, mouse, and other peripherals, and then you need to juggle HDMI, USB 2.x and 3.x, DisplayPort, and Thunderbolt. That doesn't even take into account the fact that USB 2.x and 3.x are essentially physically incompatible due to interference at the connector site if they're near each other. There's also the myriad ways something can fail to work as expected (4K video only over USB 3.x, or a USB 3.x hub not having USB 2.x backwards compatibility, or whatever else).

So, thousands of dollars of equipment working together cannot possibly be verified before purchase. It's insane.


There’s gotta be some lobbying or consulting firm hired by manufacturers that’s using the same strategy between the two standards.

The consortiums don't seem to realize that short-term profits are coming at the expense of their brand.


There's no need for lobbying by manufacturers since the consortiums are directly controlled by the manufacturers. If the brand loses value, they switch to the next one.


Why would it come at the expense of long-term profits? The EU has given a legal mandate to USB-IF, you literally are not legally allowed to use anyone else's connection standards. What "profit loss" could possibly occur under a literal government-mandated monopoly requirement?

Display standards aren't quite there yet but HDMI is effectively a monopoly in the living-room space and most people use them for mass-market (low-end) monitors that are the most common. Effectively it might as well be a mandate, especially when DisplayPort comes with much shorter cable lengths/etc that require equipment reconfiguration.


My belief is that more individuals would be seeking out Thunderbolt accessories over USB.


When you're the only game in town, brand doesn't matter so much.


display port?


Not just can, they're only certifying "2.1" now or something like that.


Yes, they said that 2.0 doesn't exist anymore:

1. HDMI 2.0 no longer exists, and devices should not claim compliance to v2.0 as it is not referenced any more

2. The features of HDMI 2.0 are now a sub-set of 2.1

3. All the new capabilities and features associated with HDMI 2.1 are optional (this includes FRL, the higher bandwidths, VRR, ALLM and everything else)

4. If a device claims compliance to 2.1 then they need to also state which features the device supports so there is “no confusion” (hmmmm)

https://tftcentral.co.uk/articles/when-hdmi-2-1-isnt-hdmi-2-...


It's absolutely not an accident. Given they've actually changed naming repeatedly, this move can only be interpreted as being a deliberate scheme to swindle consumers.


But then you wouldn't be able to label-engineer your old chips into "the latest version" (of the old feature subset).

Isn't "USB 3.1 Gen 2" the usb 2 performance level as (re)specified in the USB 3.1 document set?

I'm deliberately writing without involving a search engine, to make my reply an authentic sample of the resulting confusion. My memory is probably wrong - and if it isn't, the lack of confidence still serves the purpose of illustrating the confusion.


The generations start with USB 3.0.

But for bonus confusion they don't refer to connection speed. They refer to single lane speed, and a connection can have 1 or 2 lanes.

Gen 1 is 5Gbps, gen 2 is 10Gbps. Gen 2x2, sigh, is 20Gbps total. If they weren't trying to obfuscate things, those would simply be 3.0, 3.1 and 3.2.

There's also Gen 1x2 in theory, but I've never seen mention of anything using it.

And then after USB 3.2 you have USB4[sic] which adds Gen 3 and 3x2.


So it's basically like PCIe now.


Like that, but also they keep renaming things. And when it's only two lanes max and any faster device will support two lanes, there's no reason to put lanes in the name.


As far as I know - 3.1 Gen 2 is what 3.1 would have been called if they named it properly, 3.1 Gen 1 is equivalent to 3.0. (the 3.0 line is 3.2g1/3.1g1/3.0, the 3.1 line is 3.2g2/3.1g2, and 3.2 is 3.2g2x2?)

At this point I'm certain the confusion is intentional


I think it would be fine IF the higher numbers were universal supersets of previous versions of USB. "I don't care what this version is, it's the highest" seems fair. At least the "universal" aspect applies over time.

It's the Apples of the world, who seem to be on a divergent fork with superior and inferior types of functionality, that really disrespect the "universal" moniker... while also leaving ambiguous exactly which USB spec they are even implementing.


Apple has always been 'vague' in their material about what you're getting, exactly. Often a lot of digging, and / or research needs to be done.

Don't get me started on the Pro / Max / Ultra... which one is the best? Which one _should_ be the best, given the moniker of 'maximum'?


I think they should really just use SHA hashes of the standard document, to avoid all these version name issues.


Sadly this might be more user friendly than the current setup.

Even just a bit vector of the capabilities, expressed as a decimal or hex number, would be more comprehensible.


You would then get USB Type 61513 port and USB Type 48757 device, which could only communicate at Type 45151.


Yes you would. But that doesn't mean you can't represent it with 16 nice graphics that clearly show if something is supported or not.


At least you’d know what you were getting.


I think they should use the Chrome/Firefox numbering system. Every time you put out an update that's big enough for users to be affected, increase the integer by one. It's the most intuitive system to humans.


The Chrome/Firefox numbering system is “new number every month”. Most of those updates don’t bring any new notable features.


No, "Essay FINAL (final version 2)(1).docx"


At least Wi-Fi folks seem to have taken a hint with Wi-Fi 6.


They lasted less than a year before putting out Wifi 6E


Didn't last long, haha, the new one is 6E.


That one is at least a little more understandable, same phy, just operates in a different spectrum.


I guess my point is it's meant to be a consumer marketing name - and if I have to explain to them that "6E" has an "E" because it's the "same phy, but operates in a different spectrum" then they've failed to create a rational naming scheme yeah? Don't get me wrong, I get it. But I also get USB 3.2 Gen 2x2

Like ... why "E" of all letters?


extended?


How did they fuck this up already /o\


Then there's the additions made by the Mainboard manufacturers like Asus.

For example, the manual of the Pro WS X570-ACE states that:

"USB 3.2 Gen 1/Gen 2 devices can only be used as data storage only [sic]"

In any case, if you connect a keyboard, mouse or USB hub to them, they'll work.


They're in an arms race with the HDMI Forum to make the most idiotic versioning scheme.


The naming seems strange but the idea is sound, right? Perhaps they should have picked a different marketing name for each of the Gens. Anyway, it's like how we have C++17's map and C++17's unordered_map: different functionality is specified in the same version of the standard.


The Final Season, Part 3


The USB-IF is a group consisting of implementers (IF = Implementers Forum) and obviously it's more profitable for them if they can put "USB 3.2 support" on the box, but keep using the same hardware (newer and more capable => more expensive => lower profit). So in a perverse way - the USB-IF actually has a financial incentive to keep making more confusing standards, because consumer confusion can be exploited for higher profitability.

This is why it's baffling to me that the EU wants to hand this particular forum a legal monopoly on the connector and protocol standards we all have to use in our devices, and that people actually think that's a super good idea that will work out well.

At best they're an incredibly logjammed body; the only reason Lightning exists is because USB-IF couldn't settle on a replacement for USB Micro-B and after years of debate and still no progress Apple finally just went around them (which lit a fire under them to pass USB-C finally). Kinda similar to the way VESA waffled on the Adaptive Sync draft standard for years and years until NVIDIA just finally went around them and implemented their own VRR sync standard, which finally gave the other members the incentive to shit or get off the pot.

Even still, the only thing they could agree on was a "kitchen sink" standard that's ridiculously cumbersome and expensive and complicated - USB-C cables require hand-termination and will simply always be more expensive than an equivalent machine-assembled lightning or micro-B cable [0], and most cables (and for that matter, most devices/connectors/etc) don't support the functional capabilities promised by the USB-C advocates. One cable doesn't do everything, and most devices and cables in fact won't do most of the things that USB-C is capable of delivering. The standard is in fact so complex that most cables get it wrong to a dangerous degree (devices can be destroyed by improperly-implemented cables) as one of Google's engineers discovered [1] when he started systematically testing cables for standards compliance. And the negotiation is more complex and expensive to implement as well, even if you're only doing a low-capability device. There are just fundamental problems with trying to have a standard that covers everything between 1.5 mbit HID devices and 40 gbit PCIe tunneling in the exact same standard and connector and cable (it sounds absurd to even say that).

These standards bodies often don't work very well, and handing them the keys to the kingdom basically guarantees at least some degree of slowed progress and increase in market deadweight. If we mandated that everyone adopt VESA and that HDMI would be illegal to sell in the EU - that would produce similar deadweight loss in the display connection standards as well. Etc etc. IMO while you do want standards - you want at least two standards bodies so that you don't end up in the USB-IF situation where one of them has a legal monopoly.

(not that that's a panacea - by the way - Apple also is advertising HDMI 2.1 and yet they only support 4K60 [2], they are another one of the companies who took advantage of the HDMI Consortium's new standards branding where they do the same thing as USB and HDMI 2.1 can actually be a rubberstamp reapproval of HDMI 2.0b hardware. [3])

I think a lot of people get really wrapped up in the whole "android vs apple" thing and don't really think about what exactly they're doing, they're just happy "their side" won. But in the long term it's going to be far more difficult to actually get improvements passed, because it's legally impossible to produce innovative products that exist outside USB-IF standards if you want to sell in the EU, and USB-IF just doesn't work at all.

[0] https://www.youtube.com/watch?v=Y1Tmtd51clI

[1] https://arstechnica.com/gadgets/2016/02/google-engineer-find...

[2] https://www.apple.com/apple-tv-4k/specs/ (they actually seem to have rolled back the "HDMI 2.1" advertising on a lot of devices but they missed a few! There is, technically, a footnote however.)

[3] https://www.youtube.com/watch?v=qo9Y7AMPn00


> One cable doesn't do everything

Well, seems like because physics™ it's hard to do both 420W charging and 69Gbps data over the same cable. And also people want thinner and cheaper charging cables. It's still better to have a unified connector and different capability cables.

> most cables get it wrong to a dangerous degree

That was the case in the early days. These days it's getting hard to find bad cables.

> negotiation is more complex and expensive to implement as well, even if you're only doing a low-capability device

No need for negotiation in a low capability device. Two resistors. It's not rocket science. Raspberry Pi did screw up and use one resistor but they made that mistake so famous that everyone knows not to repeat it now.


[flagged]


> Even software that is directly ported has trouble because M1 tile memory cache is designed for minimal tasks, not heavy computing, and computationally heavy software has to be re-written from scratch to optimize it for the platform.

Huh? I have no idea what this is supposed to mean. Tile-based memory refers to the GPU, and you don't have to rewrite your GPU software from scratch, that's insane? That's what Metal is for. Whatever this is supposed to mean, I'm pretty sure it's not even wrong.


I think they mean that GPU applications that are not optimized for TBDR may experience reduced performance.


Even then, “M1 tile memory cache is designed for minimal tasks, not heavy computing” is such a stupid thing to write…


> But Apple's M1 based computers are terrible console-like things that can't do what's expected of a computer.

I understand that in an abstract context, and I guess it depends on your day-to-day needs. For me, I was incredibly excited to get an M1, and that lasted about a day until I realized that it was just doing everything I was doing on my Intel Mac, only a lot faster. It doesn't even occur to me that I'm using an M1 because I have had zero speed bumps. The only time I've even been reminded of having an M1 was when the computer told me I needed to install Rosetta to run a program.

My M1 does everything I expect a computer to do. So much so that it's disappeared and I'm just back to "is there a Mac Native version?" for x.

I wonder if the comments were the same back when transitioning from the 68k to the PPC.


> I wonder if the comments were the same back when transitioning from the 68k to the PPC.

Pretty much. There were exceptions, things like early image editing programs that were slow on PPC Macs because of 68k emulation, but for most people it wasn't an issue and anything advertised as Mac-compatible ran just fine. Again, in the worst case emulation corresponded more or less to the performance of a high-end 68040 CPU (without its maths coprocessor, which was an issue sometimes).


“They have to emulate most software.”

Well that is simply not true.

https://isapplesiliconready.com

Notably, MS Office and Adobe’s stuff is compiled for ARM.


All gripes I don’t have, and my m1 max is a kickass laptop, expensive but you get what you pay for. And I have quality. Reliable, clean and fast, both on the software and hardware side.


We've got ~100 engineers in our company and there are essentially two categories of engineers: the ones that are enjoying their M1s and those who are looking forward to moving from Intel to M1.

Admittedly that's still pretty anecdotal. It's not like each one of those engineers has rigorously tested all alternatives.

But still, no complaints and people are loving the battery life and lack of heat.

The dongle situation is unchanged from 2016-onward Intel Macs and has been resolved with the latest MBP models. The dongle situation was absurd but never thaaaat bad; you just had to buy a $70 dock dongle thingy.

Single external monitor support on the lower-end models was lamentable, but also not a dealbreaker in an era of ginormous 4K displays.

Lack of M1 support in software is almost a non-issue. Rosetta emulation is so fast that nobody notices or cares. The only exception has been our Docker-based stuff; took some massaging to support the new arch.


My M1 Mini is the best computer I've ever owned, bar none. Most of that junk you are whining about is irrelevant niche stuff. Nobody GAF if they can replace the internal SSD, or if there's any difference between replacing the SSD and replacing the entire motherboard. It has two USB ports, two USB-C/TB3 ports, HDMI, ethernet, bluetooth, wifi, and a headphone barrel jack, like any other computer. WTF are dongles?


I think the guy was just trolling. Especially that TLB stuff, which is quite speculative and, if true, means memory-hopping, write-heavy, GPU-bound apps can't squeeze everything out of the fastest, niche version of the M1 Ultra. I mean, that's just ridiculous. It's obvious to any programmer that squeezing everything out of a processor requires optimizations, usually for rapidly diminishing returns.

My M1 Max laptop is so good that if it broke today, I'd lament having to wait for a replacement, but wouldn't hesitate. Felt the same way about the M1. And that's coming after three decades of low-level programming work on DOS, Windows and Linux x86 with the best hardware money could buy.


If you've hung around some of the previous M1 threads, I really don't think he is trolling. That's actually how some people feel about it.

I gotta say it's been really wild watching "epistemic closure" in action. There are clearly two different sets of facts and two different realities based on them that have emerged at this point, one where M1 is slower than x86 and requires everything to be emulated and USB doesn't work and you can't use external displays and it's a closed and locked-down system that is NEVER going to get a linux port, and one where it's got massive IPC and great battery life/etc and everything basically Just Works and the linux people are thundering along and have a basically workable system. People are living in like totally different worlds, one where it's just more apple trash and one where it's the best thing since sliced bread and actually better than the x86 alternatives.

I dunno if it's people who just haven't used it, or people who are ideologically opposed to Apple's designs, or just super hate macOS, or what, but even in the benchmark space it's become completely bimodal, you've got Anandtech where it's showing up to 3.5x the IPC at 0.7x the clock of competing designs in SPEC CPU benchmark suites, and you've got Hardware Unboxed where it barely edges out current designs at iso-power even in ARM-native builds of Cinebench.

https://www.anandtech.com/show/17024/apple-m1-max-performanc...

https://www.youtube.com/watch?v=X0bsjUMz3EM&t=726

tbh I've been thinking something similar in general - the synthetic stuff seems amazing, but it does seem like it has some problem actually putting power to the ground in real applications. Cinebench is native, so that shouldn't be a factor, and it should be something that makes great use of that absolutely wild FP32 IPC, so why isn't Apple running away with it anywhere near as much as SPEC shows it should? There's something weird there and I can't quite puzzle it out.


It really is fascinating. No matter how much I say otherwise people always interpret my criticism of the M1 computers in terms of speed. Like I'm denying it's fast and that's the only thing that matters. I'm not and haven't been saying M1 computers are slow. They're obviously, undeniably, fast.

I'm saying they're incomplete in terms of hardware/firmware.

It's a criticism of the hardware not implementing full protocols so that devices work and so that you can change and upgrade your computer. Some people don't want or need to do that, just like I'm not very good with automobile maintenance and don't care about parts availability for my car or that it has all electrical ground points in the engine compartment inaccessible behind plastic. Someone else can deal with that. And that's what Apple computers are. Bad computers that are someone else's problem.


I'd guess people whose workflows made their x86 laptops run hot and loud felt very relieved when switching to dead-quiet-yet-quite-fast M1s. I am one of those people. I travel a lot, and using a $5k Xeon laptop meant bringing a heavy 180 W brick on flights, and lugging it everywhere with me. And suffering from loud fans. Especially when having a Teams or Hangouts call with people sharing video. Or using docker compose. Switching to a machine that can do it all, while being quiet and lasting a, say, AMS-SFO flight without charging, felt great.

As for USB 3 speeds, I mean, oh well. I probably wouldn't notice if the SSD runs at 400 vs. 500 MB/s. Minor speed differences are less noticeable than quality-of-life things like noise, battery life, screen quality, stuff like that. To each their own.


I don't think those final two things are necessarily contradictory. It's very possible to have high theoretical IPC that applications cannot exploit.


That some applications cannot exploit. Because it's also possible that other applications actually do. Cherry-picking applications to discuss performance on a given architecture is not really better than cherry-picking benchmarks.


The apple apologists are in full swing in this comment section. $2000 laptops should do exactly what they claim, full stop.


The fact that I can't connect more than one monitor to my M1-based iMac is a fucking travesty.


It must be to have posted the same comment twice.

Did you do any research into it before buying it? Apple was never misleading about how many monitors it could support. The "Tech Specs" product page says it can power its internal display and one extra external one. Just because you want it to do something doesn't make it a travesty when it can't.


When my current CTO at Merkle Science told me that employees got M1 MacBook Pros, I politely asked for a ThinkPad instead. He obliged, winning me over. I was the only employee using Ubuntu for a while. Today I use my ThinkPad P14S AMD Ryzen 7 5600 in a 4 display setup. It is beautiful and I use it with qtile. While I can work with 2 displays, I'd rather have both be external displays than have to use mismatched displays. I used to use my Macbook 2018 that way at Visa. However I loathed that machine because of how many problems I faced with it and how mental the USB C hell was. Having to use istatmenu to turn on the fans manually at full speed was unacceptable. Having to only use ports on the left or right side was unacceptable.

I don't ever want to use a macbook ever again. I am certain I will have to since many tech Bros think that it is the best experience out there, and the alternative is Windows which is even worse. I hope every company realizes giving your developers the machine or the OS of their choice is not a bad thing. Stop forcing Devs to use X just because your IT is saying so. Let the developers do their jobs in the best possible way.


Were you expecting to be able to? The tech specs say "One external display".


Not sure this is fair to the consumer. Apple was advertising USB-C as one port to rule them all. When the device has two of these ports, why should the user expect not to be able to connect two displays?

Especially since this restriction didn’t apply to the previous models that looked exactly the same and are now advertised as less powerful.

It would be natural to not explicitly look for how many displays it can handle.


Back in 2011 you could use a second iMac as a monitor in Target Display Mode. That went away, so why should the user expect every iMac to support Target Display Mode? Things change unfortunately, and the single-display restriction of the M1 was advertised heavily online and can be a pain point, but it's listed online as a spec.


Good point, though I’d argue more people use two screens on a MacBook than an iMac as a screen, since it already provides a computer.


My initial suspicion would be that the user used bad cables. There are many cables that say they're capable of a speed when they are not in fact capable.


The issue is not a cable thing. I have experienced the same behaviour on both a 2020 Intel MBA and a 2021 M1 Pro; a Samsung T5 SSD rated for 10 Gb/s only connects to either Mac at 5 Gb/s, despite being rated for 10 Gb/s. The same SSD (and cables) connects to Linux & Windows boxen at 10 Gb/s.

Equally, a Crucial X8 SSD rated for the same 10 Gb/s connects to both Macs at its rated speed.

This behaviour has persisted across OS updates since Big Sur.

My current thinking is that there are some undocumented quirks in macOS that scale speeds down with certain USB controllers.


Sounds pretty clearly like Apple thinks certain devices are non-standards compliant. And just because it works at full speed on Linux & Windows doesn't mean it's standards compliant.


Alternatively Apple is not standards compliant. Or it could just be a bug in their driver or controller. Not sure why it makes more sense to assume Apple is doing it right and everyone else wrong.


Why bother reading the article when you can just slam out hot takes?

> Cables used included a certified Thunderbolt 4 model, and the USB-C (data) cables provided with the cases. Again, each was verified by establishing SuperSpeed+ 10 Gb/s connections to an Intel Mac.


Because we all know unnamed brand / model “certified” cables with drives that are also unnamed brand / model could surely not be the problem?

Edit: I did read the entire article. Like others have said here, it doesn't match with my experience. But because nothing is clearly specified as to what was used, the article is useless: nobody can compare to see if their setup is the same or different.


[flagged]


Just because it works on an Intel Mac does not mean the cable isn’t the issue! Happy to demonstrate in a live video along with oscilloscope readings to prove it is in fact the cable to blame.


I would like to see this. Mostly to see what kind of an absolute beast of an oscilloscope can keep up with the data rate.

Also I don't understand why people are disagreeing with you. If a cable is not within spec, it's absolutely possible for it to work in one circumstance (because, for example, the port is more lenient) and completely fail with a port that is more strict.


> Mostly to see what kind of an absolute beast of an oscilloscope can keep up with the data rate.

You can prove a cable faulty without being able to fully observe/measure the entire frequency range at once.

> Also I don't understand why people are disagreeing with you. If a cable is not within spec it's absolutely possible for it to work in one circumstance because for example the port is more lenient and completely fail with a port that is more strict.

I’m kinda baffled too. I thought this was common knowledge among the folks on HN, but maybe not?


What's your theory as to why the cable would be at fault, despite working perfectly well in a different machine?

Seems like the simplest counterpoint would be to demonstrate an M1 Mac transmitting at the full data rate using a different cable.


> Seems like the simplest counterpoint would be to demonstrate a M1 mac transmitting at the full data rate using a different cable.

Which I’d be happy to do assuming I knew the exact specs (brand/model) of the drives OP was using.

If you’d like me to demonstrate that MacOS on Apple Silicon correctly supports USB 3.2 gen 2 devices at 10Gb/s using multiple brands of drives/models, happy to do as well.


I really like (careful, sarcasm) that about USB-C and HDMI. You have a port, you have a cable. But if you're not a sysadmin you have no idea why it does not work as you think it should.

I can't understand why they did not put some kind of capability mark for cables/ports into the spec.


> I can't understand why they did not put some kind of capability mark for cables/ports into the spec.

You're talking about a completely different thing from cables that claim to handle a certain speed but aren't built well enough to actually do so.


If that was the case, the front non-thunderbolt ports would not be faster, but they are.


Getting USB 2.0 speeds strongly suggests that the cables somehow had horrendous signal integrity...

Anyway I grabbed a 3.1 gen2 SSD and plugged it directly into my 16" M1 Pro (Sandisk Extreme), and I got 850 MB/s read/write with his tool.

And that's supposedly one of the exact drives and laptops he tested. So something is going wrong for him, and it's 90% likely to be cables.


It doesn’t seem particularly likely to be the cables when the same cable plus same external drive but with an Intel Mac results in the expected performance numbers.

In your test, did you use a TB4 cable or a USB data cable?


Just because a marginal cable works on one machine doesn't mean it'll work on another.

I used a USB cable, but I retested with an active TB4 cable with the same results. And again with a passive TB3 cable to the same results.


If the cable is wrecking signal integrity, the port could absolutely change things if the ports have slightly different hardware that's generating the signal. RF does weird things.

Just as a very simplistic analog DC example: suppose you have two wires that are both sitting at 3 V, but one can only drive 100 mA while the other can drive 1 A. If you only draw 50 mA they'd look near identical, but as soon as you tried to get close to 100 mA they'd diverge heavily. You could advertise both at 50 mA for sales purposes, but if they were hooked up with a bad, high-resistance cable they'd look very different.
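
To put toy numbers on that (all values below are made up for illustration, not measured from any real port or cable):

    # Hypothetical numbers only: the same lossy cable looks fine at a light load
    # but loses a meaningful fraction of the voltage once real current is drawn.
    SUPPLY_VOLTS = 3.0
    CABLE_OHMS = 0.5  # made-up round-trip cable resistance

    for load_amps in (0.05, 0.10, 1.00):
        drop = load_amps * CABLE_OHMS  # Ohm's law: V = I * R
        print(f"{load_amps:.2f} A load: {drop * 1000:6.1f} mV lost in the cable, "
              f"{SUPPLY_VOLTS - drop:.3f} V left at the device")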


As a less technical anecdote, I bought one of those usb "sound cards" and it had noise when plugged into the front usb ports but worked fine into the back ones on the same PC.

The sound was transmitted digitally and didn't need that much bandwidth, so the only explanation is "dirty" power on the power wires, I think?


Why did they work with the Intel Mac then?


> In normal testing, 5 Gb/s should yield around 500 MB/s

Wow 90% overheads?


5 gigabit/s = 0.625 gigabyte/s = 625 megabyte/s

so about 20% overhead... I would not be dissatisfied with 500 MB/s result on a theoretical 625 MB/s bus.


FWIW it's actually 500 MB/s theoretical (ideal), because USB uses an 8b/10b encoding[0] until 3.1 Gen 2, which switches to a 128b/132b encoding[1].

So you have a 5 Gb/s physical signal, minus the 8b/10b encoding overhead (10 bits on the wire for every 8 bits of data), for a raw data throughput of 500 MB/s.

Once framing and protocol overhead are taken into account, mid-400s effective is probably a good result.

[0] https://en.wikipedia.org/wiki/8b/10b_encoding

[1] https://en.wikipedia.org/wiki/64b/66b_encoding
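
A quick back-of-the-envelope sketch of those numbers, assuming only the nominal link rate and the line encoding; real framing and protocol overhead take a further cut, which is why mid-400s MB/s is a realistic Gen 1 result:

    # Nominal payload bandwidth after line encoding, ignoring framing/protocol overhead.
    def payload_mb_per_s(link_gbps, data_bits, total_bits):
        usable_gbps = link_gbps * data_bits / total_bits
        return usable_gbps * 1000 / 8  # decimal MB/s, the way drive vendors quote speeds

    print(payload_mb_per_s(5, 8, 10))      # USB 3.0 / Gen 1, 8b/10b   -> 500.0
    print(payload_mb_per_s(10, 128, 132))  # Gen 2, 128b/132b          -> ~1212.1
    print(payload_mb_per_s(20, 128, 132))  # Gen 2x2, 128b/132b        -> ~2424.2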


What sucks, however, is that you can transfer 1TB to a USB drive correctly, and then finally have a bit-error that spoils the entire transfer and might even corrupt your drive. Happened to me several times during testing on different machines and with different cables and drives (luckily no real data was lost, but USB is now sort of banned from my office for use with external harddrives). Any sane protocol would be able to deal with such a low error rate, and finish a data transfer without problems.


Use a filesystem that does end-to-end error detection and correction. I use ZFS on all of my USB drives (even "real" SATA and M.2 SSDs via adaptors) and it frequently finds checksum errors (and thanks to the design of ZFS, is able to completely recover without any risk of further corruption).


Isn't any filesystem that runs on top of a SSD or flash drive at risk from the SSD/flash drive controller corrupting the disk or making it inaccessible, when the volume is not cleanly unmounted?

I can remember losing a USB drive that way, even though it was using a journaling filesystem.


Potentially yes, realistically no. This is the old "if the storage controller is lying about flushing to disk, there's not a lot you can do about it" problem.

ZFS does have defensive mechanisms, like doing a read after write to try and be sure that what is written is actually committed to disk, but if the storage controller chooses to serve that out of cache then that could be a lie too. It's the old "trusting trust" predicament: there's no way for hardware higher in the stack to prove that lower levels aren't simply a tower of lies, only instead of viruses it's flushes.

That said, in practice very little hardware is actually a giant tower of lies. Flash drives typically do not have enough of a controller to actually cache anything, thus no real capability to lie about writes/etc. SSDs do, but they also generally obey the expected behavior around flushing actually flushing and not just lying about it.

RAID controllers and similar are the danger zone, because they may have cache and may lie to the CPU about it on the assumption that it actually will eventually be flushed, and that's the dangerous thing.


More importantly ZFS by design ensures that if you unplug early and lose the last few writes, you'll end up in an earlier consistent state.


I think the one you are replying to understands this, but the point is that "the last few writes" may be executed in a different order by the underlying hardware and this might confuse the filesystem.


The question remains: how did I lose an entire ext4 volume after pulling the USB drive from the machine without unmounting it?

Is anyone testing for these kinds of things?


That's great if you only use these drives on your big computers. Sadly if you do want a "universally" portable drive – usable on an Android phone and an iPad and whatnot – you're stuck with horrible basic filesystems like exfat :(


This doesn't actually happen (single-bit errors getting through). USB 3.1 data is protected with https://en.wikipedia.org/wiki/Cyclic_redundancy_check#CRC-32... , the same CRC as the one used in Ethernet.

https://www.rroij.com/open-access/implementation-of-link-lay...

USB 3 uses CRC-16, still more than good enough for single bit flips.
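
As a toy illustration of how a CRC catches a flipped bit, here's a sketch using Python's zlib.crc32, which happens to use the same polynomial as Ethernet's CRC-32. The actual USB link-layer framing is different, so this only shows the principle:

    import zlib

    payload = bytes(range(256)) * 4   # 1 KiB of arbitrary data
    good_crc = zlib.crc32(payload)

    corrupted = bytearray(payload)
    corrupted[100] ^= 0x01            # flip a single bit "in transit"
    bad_crc = zlib.crc32(bytes(corrupted))

    print(f"original CRC32:  {good_crc:#010x}")
    print(f"corrupted CRC32: {bad_crc:#010x}")
    print("single-bit error detected:", good_crc != bad_crc)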


Interesting. Perhaps it was more than one bit-flip then ...

Anyway, it still sucks that I couldn't find the exact reason for the error in the logs.


Are you sure it's a USB problem and not something else? Did you reduce all possible variables - i.e. how scientific were you in determining that USB was the problem?


I think I tested quite thoroughly on very recent Linux versions with Rsync. Of course it is difficult to pinpoint the real culprit with certainty, but USB has no error correction, and that makes it the prime suspect here.


Another bad thing about it is that there is no good way to debug it (afaik).


Bits to bytes means dividing by 8.


I'm old enough to remember parity bits and FEC, so I know that the number of bits per data byte might be 8, but the number of bits required to transmit a single byte might be more than 8, and the number of bytes needed to transmit a frame of data might also be higher than the number of bytes of payload within the frame.

I know next to nothing about modern serial protocols, but nevertheless as a rule of thumb, and absent any other information, I tend to use 10 bits per byte when converting bits per second to bytes per second.

Saves disappointment if nothing else.


You're pretty much on the dot: what you're thinking of is 8b/10b encoding (https://en.wikipedia.org/wiki/8b/10b_encoding) and it's exactly what USB uses historically.

Recent standard revisions (USB 3.1 Gen 2, as well as USB 3.2 Gen 2x2) switch to a more efficient 128b/132b encoding (https://en.wikipedia.org/wiki/64b/66b_encoding), with ~3% encoding overhead rather than the historical 25%.


Just to add for clarity, b is used for bits and B for bytes. Though they are often mixed up.

(... and a byte is not always 8 bits)


I believe the byte changed its meaning like many words did in the past. To a lot of younger developers a byte is synonymous with an octet.


A byte has been synonymous with an octet for as long as I can remember. I found a Wikipedia article [1] that describes the transition of a byte from meaning the number of bits needed to encode a character of text to meaning an octet.

[1] https://en.wikipedia.org/wiki/Units_of_information


It does depend on context though. Transports often use 8b/10b, which is why SATA 6Gb/s is also 600 MB/s.

https://en.wikipedia.org/wiki/8b/10b_encoding

A byte also refers to the smallest addressable unit. In most cases that is 8 bits, but many (I guess mostly niche) architectures have much larger characters/bytes, and that is still very much relevant.


I love your comments! Always learning something new. (this time about SATA)


In C it’s not when CHAR_BIT != 8.


I just had to google "is byte always 8 bits" to see that in a historical context it wasn't always the case.

But probably for all practical purposes in modern era, we can safely assume it's always 8 bits.


In the CS tests for every UK exam board, a byte is always 8 bits


Unless you are a disk drive vendor, in which case you insist it's 10.


Why? Assuming a byte being 10 bits instead of 8 bits would actually make the drive capacity lower in terms of bytes (of course not physically but for marketing), which a vendor probably doesn't want.


I am not sure what the OP is talking about with bytes, but it could be powers of 2 vs. powers of 10 for kB/MB/GB.


Yeah, that's the kibibytes vs kilobytes / gibibytes vs gigabytes difference... Which adds up to ~7% difference in capacity when talking about GiB/GB, and even 12.5% when talking about PiB/PB.

Kibibyte (KiB): 1,024¹ = 1,024

Mebibyte (MiB): 1,024² = 1,048,576

Gibibyte (GiB): 1,024³ = 1,073,741,824

Tebibyte (TiB): 1,024⁴ = 1,099,511,627,776

Pebibyte (PiB): 1,024⁵ = 1,125,899,906,842,624
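
A quick check of those percentages (plain arithmetic, nothing USB-specific):

    # How much bigger each binary prefix is than its decimal counterpart.
    pairs = [("KiB", "kB"), ("MiB", "MB"), ("GiB", "GB"), ("TiB", "TB"), ("PiB", "PB")]
    for power, (binary, decimal) in enumerate(pairs, start=1):
        ratio = 1024 ** power / 1000 ** power
        print(f"{binary} vs {decimal}: {(ratio - 1) * 100:5.2f}% larger")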



