Hacker News

We're really going with a protocol that involves constant polling (rather than interrupts) as being ahead of its time? :)



The USB hardware "polls" by acting as a master. It's a master-slave protocol which is just fine for simplicity and bus utilization. The polling is abstracted from the software with the concept of hardware endpoints (mailboxes, really). How are you going to implement interrupts over a serial bus, unless it's multi-master (very complicated implementations, slow)? You don't have direct interrupt lines like in PCI.
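To make the mailbox idea concrete, here is a toy sketch (all names made up, nothing like real host-controller code) of a host that owns the schedule: it sends an IN poll to each interrupt endpoint at that endpoint's requested interval, and the device either hands over queued data or NAKs.

```python
from collections import deque

class InterruptEndpoint:
    """Toy model of a device-side interrupt IN endpoint (a mailbox)."""
    def __init__(self, interval_ms):
        self.interval_ms = interval_ms   # polling interval requested by device
        self.mailbox = deque()           # data the device has queued for the host

    def on_in_token(self):
        """Host polls us: return queued data, or NAK if the mailbox is empty."""
        return self.mailbox.popleft() if self.mailbox else "NAK"

class Host:
    """Toy host controller: the single bus master, polling on a fixed schedule."""
    def __init__(self):
        self.endpoints = []

    def run_frames(self, n_frames):
        log = []
        for frame in range(n_frames):          # one iteration ~ one 1 ms frame
            for ep in self.endpoints:
                if frame % ep.interval_ms == 0:
                    log.append((frame, ep.on_in_token()))
        return log

host = Host()
kbd = InterruptEndpoint(interval_ms=8)   # e.g. a keyboard polled every 8 ms
host.endpoints.append(kbd)
kbd.mailbox.append("key A down")          # device has an event pending
log = host.run_frames(17)
print(log)  # first poll delivers the event; later polls are NAKed
```

The point of the sketch: the device never initiates anything, so "interrupt transfer" in USB really means "guaranteed polling rate", and software just sees data appear in the mailbox.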


I don't have a strong opinion on the topic (I never really bothered to look under the hood, but from a purely user perspective USB has been quite serviceable for me so far), but I think you're begging the question:

>You don't have direct interrupt lines like in PCI.

Surely the people who designed USB could have opted to add an interrupt signal if they had deemed it necessary. Sure, it would've added one more signal, but given that the original USB had only 4 pins, that doesn't seem overwhelming.

That being said, I agree with you that dismissing polling wholesale without digging deeper is quite silly. It can be a problem when it increases latency or wastes CPU cycles, but as far as I know latency isn't much of an issue in most uses of USB (even for things like keyboards, mice, and controllers it's usually dwarfed by the latency of the video pipeline), and I doubt anybody handles the USB phy in software, so CPU usage shouldn't be a worry.


The whole point of USB is reduced wiring. A side-channel interrupt line would not have made sense.

"Polling" doesn't involve the host CPU and adds no latency. It is done by the USB controller, which sends poll packets to the device. This all follows from a philosophical decision in USB's design: it is master/slave and doesn't support anything initiated asynchronously by the slave.

The master can choose to ignore certain slaves on the bus if it desires, which isn't always possible with an asynchronous slave. Recall that many devices dangle off a single USB chain.


> it can be a problem when it increases latency or wastes CPU cycles

I think it wastes power. On a setup with interrupts, you could clock gate the host controller and if the device needs to say something it can raise the interrupt line and wake up the host. I don't think there's a way to clock gate a USB1 controller and then do e.g. wake on LAN.


PCIe does interrupts over a single serial lane just fine.


Conventional PCI can also do in-band interrupts (the mechanism is exactly the same), but many devices do not support it.

The reason is that PCI is multi-master by design and PCIe uses duplex packet-switched serial links. There is no master that decides who gets to use the bus at a given moment (for parallel PCI there is an arbiter, but it is a relatively simple circuit that is conceptually part of the bus itself, not of any particular device connected to it).
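An in-band interrupt on PCIe (MSI) is just a posted memory write the device initiates to a special address programmed by the OS, with the interrupt vector in the data. Here is a toy sketch of that idea; the address window and class names are made-up illustrations, not a real chipset model.

```python
class MSITarget:
    """Toy interrupt controller: writes into a magic address window raise vectors."""
    MSI_WINDOW = 0xFEE00000  # assumption: an x86 LAPIC-style MSI address window

    def __init__(self):
        self.pending = []

    def mem_write(self, addr, data):
        # An ordinary memory write that lands in the window becomes an interrupt.
        if (addr & 0xFFF00000) == self.MSI_WINDOW:
            self.pending.append(data & 0xFF)  # low byte carries the vector

class Device:
    """Toy PCIe device: the OS programs it with an MSI address/data pair."""
    def __init__(self, msi_addr, msi_data):
        self.msi_addr, self.msi_data = msi_addr, msi_data

    def raise_interrupt(self, bus):
        # No dedicated interrupt wire: the device *initiates* a posted write.
        bus.mem_write(self.msi_addr, self.msi_data)

bus = MSITarget()
nic = Device(msi_addr=0xFEE00000, msi_data=0x41)  # vector 0x41, hypothetical
nic.raise_interrupt(bus)
print(bus.pending)
```

This is exactly the capability USB's strict master/slave design rules out: a USB device has no way to initiate a transaction, so there is nothing analogous to the device-initiated write above.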



