Isn't hyperconverged just when you put compute/storage/network/whatever in the same box(es) and virtualize it all together? There's no inherent scale to that; it's just a structural thing.
Why is it important that storage and compute be in the same box? Isn't it desirable for there to be a "service boundary" between block storage, file storage, object storage, etc., and the consumers?
If that boundary is desirable, why would it matter where the storage resources are located? Wouldn't that be an implementation detail?
Hyperconverged is all about software-defined storage and compute. You can create those service boundaries all on one cluster, pooling like nodes together into one big mesh of compute and storage. The precursors were things like EMC and NetApp storage clusters, which typically had 2-4 "compute nodes" attached to a rack full of direct-attached storage. That created a massive problem come upgrade time; the term "forklift upgrade" [1] exists for a reason. With HCI you can add and replace single boxes as needed.
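To make the scale-out point concrete, here's a toy Python sketch of that model (purely hypothetical names, not any vendor's API): every node contributes both compute and storage to one pool, so you grow or shrink the cluster one box at a time instead of replacing the whole array.

    # Hypothetical toy model of an HCI cluster: each node brings both
    # compute and storage, and the cluster is just the pooled total.
    class Node:
        def __init__(self, cores, storage_tb):
            self.cores = cores
            self.storage_tb = storage_tb

    class Cluster:
        def __init__(self):
            self.nodes = []

        def add_node(self, node):
            # Scale out incrementally: one more box grows both pools.
            self.nodes.append(node)

        def retire_node(self, node):
            # Replace single boxes as needed, no forklift required.
            self.nodes.remove(node)

        def capacity(self):
            return (sum(n.cores for n in self.nodes),
                    sum(n.storage_tb for n in self.nodes))

    cluster = Cluster()
    for _ in range(4):
        cluster.add_node(Node(cores=32, storage_tb=20))
    print(cluster.capacity())  # (128, 80): one pooled mesh of compute + storage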
Also, latency is not an implementation detail for a lot of data-intensive workloads. We're talking 1-10ms for network storage versus 250-500µs for a local NVMe SSD.
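To put rough numbers on that: for a workload issuing dependent, synchronous I/Os (each operation waits on the previous one), latency directly caps throughput, since a single stream can do at most 1/latency operations per second. A quick back-of-the-envelope in Python, using the figures above (assumed, not measured):

    # A single synchronous I/O stream can't exceed 1 / latency ops per second,
    # because each operation must complete before the next one is issued.
    def max_sync_iops(latency_s):
        return 1.0 / latency_s

    print(max_sync_iops(1e-3))     # network storage at 1ms:  1,000 ops/s
    print(max_sync_iops(250e-6))   # local NVMe at 250µs:     4,000 ops/s

A 4-40x gap like that is why where the disks physically sit stops being an implementation detail for latency-sensitive workloads.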
Yeah, that happens too much. It's just marketing. When MDM (mobile device management) expanded to also cover Windows and Mac, most vendors loved rebranding it as UEM (unified endpoint management), as if it were a totally new thing and theirs were so much better than the competitors'. But it's the same stuff applied more widely, and even vendors still calling it MDM support those platforms.
I hate it when marketing people throw up hot air like this.