Hacker News
Ignore the Kingman at your own peril (taborsky.cz)
30 points by mrsnuggles on June 22, 2021 | 10 comments



This would be a nice working theory, but it is not really supported by the paper the author references. Not least, this line in the paper seems at odds with the reality of IT teams in my experience.

> The fact that an exponential distribution arises in this case is a reflexion of the fact that the queue discipline is of a 'first come, first served' type.

In my experience, teams rarely work on a FIFO style queue. They are constantly triaging.

This model would be more applicable to a fast-food restaurant.

Edit to add: I'm not actually sure how the author gets from the paper to the formula presented at the start of the article. My background is not in math so perhaps there's a missing link I'm not aware of. It would be helpful if that was explained.


It looks like the author simply linked to the wrong paper. The linked note is The Single Server Queue in Heavy Traffic by Kingman (1960). However, Kingman's formula[1] was actually defined in On Queues in Heavy Traffic by Kingman (1962)[0]. I cannot find a copy of the latter, but based on the preview they appear to be different (though related) works.

[0] https://www.jstor.org/stable/2984229

[1] https://en.wikipedia.org/wiki/Kingman%27s_formula


Constantly triaging to make sure that the things people care about are at the top of the backlog, yes, but in my experience generally not deleting tickets that are "not right now".

What that means is that while the wait time for a hot ticket can be small, there will be other tickets that wait years to be looked at. Overall the average wait time curve might not be quite exponential, but it'll look similar.


> teams rarely work on a FIFO style queue. They are constantly triaging

True, but when you're busy enough, "constantly triaging" itself adds to the workload.


I interpreted it more as a symptom of Agile planning. Marketing wants it right now, but it might actually go into the backlog because we're already in our sprint. Then two weeks later you might have more important things to deal with, so it sinks lower in the JIRA pile, and so on.


Off-topic to most of the points the article is making, but I'm happy to finally have some mathematical basis to my observation that saturating IO on a Linux system slows it down exponentially (or at least very non-linearly).
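Kingman's formula makes that non-linearity concrete: expected wait scales with ρ/(1−ρ), so it blows up as utilization ρ approaches 1. A minimal sketch of the approximation (my own illustration, not code from the article; variability coefficients set to 1 for simplicity):

```python
def kingman_wait(rho, ca2, cs2, mean_service_time):
    """Kingman's G/G/1 approximation for mean time spent waiting in queue:
    E[W] ~ (rho / (1 - rho)) * ((ca2 + cs2) / 2) * mean_service_time,
    where rho is utilization and ca2/cs2 are the squared coefficients of
    variation of inter-arrival and service times."""
    return (rho / (1 - rho)) * ((ca2 + cs2) / 2) * mean_service_time

# Wait time explodes as utilization approaches 1:
for rho in (0.5, 0.9, 0.99):
    print(f"utilization {rho:.2f} -> wait ~{kingman_wait(rho, 1.0, 1.0, 1.0):.1f}x service time")
# -> roughly 1x at 50%, 9x at 90%, and 99x at 99% utilization
```

Going from 90% to 99% busy multiplies the expected wait by about eleven, which matches the "very non-linear" feel of a saturated disk.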


This makes me want to throw the computer out of the window. Copying a large file between two partitions on the same SSD makes the computer absolutely unusable: mouse freezing, sound freezing.


I've found that setting resource limits via cgroups for the whole session makes the slowdowns much more manageable. It's definitely not ideal; I'd prefer a more elaborate system where each launched app could use only up to 90% of system resources, with some 5% always reserved for things like Xorg, the WM and such (that's doable, I just didn't bother). But it does work.


That brought back memories from ROTC. The military has a talent for making anything mind-numbing, and they ruined a couple of things they were big on for me: dynamic programming and queueing theory ("parallel service theory", with the "servers" in these calculations being the forces and weapon systems).


Looks very similar to Little’s Law[1] which is another broadly useful queuing theory result.

[1] https://en.m.wikipedia.org/wiki/Little's_law
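For what it's worth, Little's law is just L = λW: the average number of items in a system equals the arrival rate times the average time each item spends in it. A toy illustration (numbers made up, not from the article):

```python
# Little's law: L = lambda * W
# Average items in system = arrival rate * average time in system.
arrivals_per_day = 4.0       # e.g. tickets arriving per day
avg_days_in_system = 10.0    # average time a ticket stays open
avg_open_tickets = arrivals_per_day * avg_days_in_system
print(avg_open_tickets)  # -> 40.0
```

It holds for any stable system regardless of the arrival or service distributions, which is what makes it so broadly useful alongside Kingman's formula.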




