Do you have any more info about ROM patch slots? I have never heard of this before. I assume this is a small amount of r/w memory that is somehow overlaid over specific locations in the ROM?
address1, 4 bytes overlay data
address2, 4 bytes overlay data
etc
The data is overlaid over the specified addresses at runtime. On some chips it's 8 bytes instead of 4. On a typical Broadcom/Cypress chip you have 128 or 256 entries.
By the time the chip has been on the market for 2-3 years and is still getting firmware updates, ~98% of the entries are used by existing firmware, so only 5-10 free ones are left by the time the chip is considered "obsolete".
Case in point: the Broadcom/Cypress BCM43455 chip on the Raspberry Pi is almost out of patch entries. Broadcom have switched to their usual tactic of stalling for years on known, reproducible bug reports.
> Case in point: the Broadcom/Cypress BCM43455 chip on the Raspberry Pi is almost out of patch entries. Broadcom have switched to their usual tactic of stalling for years on known, reproducible bug reports.
And it's still really buggy. I had to write a service on the RPi, and the only way to reliably connect was to restart Bluetooth before every attempt.
Such is the sad world of Bluetooth. The dirty secret of this industry is that this, hacky as it seems, is the de facto standard practice in most cases.
So given these data points, isn't it reasonable for Apple to refuse to play along with this broken tune and just roll out their own dialect of a wireless protocol? Why, if not in the name of scarcely honored "standards", drag suppliers through an endless contractual game, when you could direct your own capacity toward the quality standards that suit you?
I don’t follow the leap. The grandparent’s point was about the poor quality and terrible long-term support of Broadcom chips. How does that translate into issues with the standard itself?
Nobody would complain about Apple creating their own radio chips (which they seem to be planning for 5G/6G). Apple creating their own standard protocols is an issue, though.
Well, if the implementations of the standard are such a garbage fire, what's the point of chasing them? Just to check the "standards compliant" box while likely delivering an abysmal UX and poor interoperability?
I fixated on Apple because they're often picked on for going their own way, but then again, what's the point of doing otherwise? What good is a common ground if it's just a pipe dream?
Just because a chip is shitty doesn't mean it's worthless. In practice, Bluetooth is quite interoperable, and reliable enough for many use cases (especially the common, better-tested ones).
Breaking compatibility with that ecosystem out of spite is not conducive to getting adoption for a better product.
Well, the original posts report some frankly tragic scenarios - so bad that they "reboot to initialize" just to stay sane - in some pretty ubiquitous devices. Or not?
"Reboot to initialize" is ugly as hell and very brittle, but it's good enough for most I/O devices like keyboards or headphones. If the kernel is able to properly reinitialize the chip with all of its old association information, it might even be indistinguishable from a few hundred ms of interference. (Rebooting on errors is in fact quite common for all kinds of hardware in high-radiation environments, and is a pretty standard kernel technique for working around buggy hardware.)
Now, of course, multiple nines of uptime would be very nice to have (and open up new use-cases), but 2-3 nines is still a lot better than 0.
You're arguing that it's not enough to make a correct implementation, but that it's also important to break compatibility with incorrect implementations.
I'm arguing that a better implementation that is compatible with current protocols is strictly better than a better implementation that is not Bluetooth-compatible.
If you make a piece of hardware that is good (e.g. doesn't randomly crash and need to be rebooted by the kernel), why is it a bad thing for it to try to connect to some flaky BT headset?
Easy to say, and I can’t know for sure exactly what factors impacted Broadcom’s decisions here, but I can tell you that chip manufacturers are under extreme pressure to keep costs down, which means they may under-spec systems at times. Also, with the long design cycles involved in chip design, the patch capabilities may have been decided years in advance, before anyone realized how much capacity would be needed.
In general I agree with your comment, though it’s a lot easier to say this in hindsight.