
I'm trying to relate this back to computer systems, if that's even possible — say, comparing it to scheduling block requests from multiple processes to a block device. If the LIFO discipline maximizes welfare, I assume welfare in such a system would be average response time, where response time = queue time + service time of a block request. When the block device is saturated and starts queuing, I guess one benefit would be that the block requests with the smallest waiting time get served first, improving responsiveness — but unless some kind of deadline is added, you might get a long tail where certain block requests never get serviced.
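That trade-off is easy to see in a toy single-server simulation (a hypothetical sketch of the block-device scenario above, not the paper's model): with the same arrival times and service times, non-preemptive FIFO and LIFO end up with the same average wait, but LIFO trades a much lower typical wait for a much heavier tail, exactly the starvation worry without deadlines.

```python
import random
import statistics

def simulate(discipline, n_jobs=5000, arrival_rate=0.95, service_rate=1.0, seed=42):
    """Toy single-server queue (sketch): serve jobs FIFO or LIFO,
    return the per-job waiting times. Rates are illustrative, chosen
    so the server is near saturation (utilization ~0.95)."""
    rng = random.Random(seed)
    # Pre-generate Poisson arrival times.
    t, arrivals = 0.0, []
    for _ in range(n_jobs):
        t += rng.expovariate(arrival_rate)
        arrivals.append(t)
    queue, waits = [], []   # queue holds arrival times of pending jobs
    clock, i = 0.0, 0
    while len(waits) < n_jobs:
        # Admit every job that has arrived by the current clock.
        while i < n_jobs and arrivals[i] <= clock:
            queue.append(arrivals[i])
            i += 1
        if not queue:
            clock = arrivals[i]   # server idle: jump to next arrival
            continue
        # FIFO takes the oldest pending job, LIFO the newest.
        job = queue.pop(0) if discipline == "fifo" else queue.pop()
        waits.append(clock - job)
        clock += rng.expovariate(service_rate)  # service time
    return waits

fifo = simulate("fifo")
lifo = simulate("lifo")
print("mean wait:   fifo %.1f  lifo %.1f" % (statistics.mean(fifo), statistics.mean(lifo)))
print("median wait: fifo %.1f  lifo %.1f" % (statistics.median(fifo), statistics.median(lifo)))
print("max wait:    fifo %.1f  lifo %.1f" % (max(fifo), max(lifo)))
```

On any seed you should see the means match (the queue-length process is identical for both work-conserving disciplines), the LIFO median come out far lower, and the LIFO max come out far higher — the long tail the comment above worries about.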

But since the paper assumes there's an opening time, it's perhaps not applicable to the block-device example I gave above. A more comparable example might be a traffic spike to a web application after some announcement, and how an HTTP framework/library might 'choose' which requests to service. My understanding is that most frameworks/libraries just implicitly delegate to the OS process scheduler.



Also, for those who enjoyed this paper: I think you will also like CMU Prof. Mor Harchol-Balter's book[1] on queuing theory as applied to designing and analyzing the performance of computer systems.

Personally, I don't know of any other book that covers the application of queuing theory to computer systems so well.

[1] http://www.cs.cmu.edu/~harchol/PerformanceModeling/book.html



