
Not sure what this empirical analysis adds to answer the initial question. Sure, modern computers are fast and modern operating systems are designed to reduce latency.

At the same time, the “best practices” of not using mutexes and malloc on real-time threads are there for a reason: they can trigger a system call, which adds considerable latency (as the measurements show). Because real-time is all about deterministic latency, this is undesirable.
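The usual alternative to mutexes and malloc on the real-time thread is preallocated, lock-free communication. A minimal sketch (my own illustration, not from the article or the comment): a single-producer/single-consumer ring buffer where all storage is allocated up front and push/pop never block or call into the allocator.

```cpp
#include <atomic>
#include <array>
#include <cstddef>
#include <optional>

// Hypothetical SPSC ring buffer: a common way to hand data to a real-time
// thread without taking a mutex or calling malloc on that thread.
// All storage is preallocated; push/pop are wait-free.
template <typename T, std::size_t N>
class SpscRing {
    std::array<T, N> buf_{};
    std::atomic<std::size_t> head_{0};  // advanced by the consumer
    std::atomic<std::size_t> tail_{0};  // advanced by the producer
public:
    // Called from the non-real-time thread.
    bool push(const T& v) {
        auto t = tail_.load(std::memory_order_relaxed);
        auto next = (t + 1) % N;
        if (next == head_.load(std::memory_order_acquire))
            return false;  // full: fail fast instead of blocking
        buf_[t] = v;
        tail_.store(next, std::memory_order_release);
        return true;
    }
    // Safe to call from the real-time thread: no locks, no allocation.
    std::optional<T> pop() {
        auto h = head_.load(std::memory_order_relaxed);
        if (h == tail_.load(std::memory_order_acquire))
            return std::nullopt;  // empty
        T v = buf_[h];
        head_.store((h + 1) % N, std::memory_order_release);
        return v;
    }
};
```

The point of the design is exactly the determinism argument above: both operations take a bounded number of steps regardless of what the rest of the system is doing.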




> Not sure what this empirical analysis adds to answer the initial question.

Why would you discourage someone from doing this? You thinking you already knew the answer doesn't help the OP at all.


I did not mean to discourage people from making their own evaluations. Maybe I did not word it correctly.

The article starts with questioning best practices for implementing real-time processing threads. The statistical analysis presented in the article is based on a single workload and a single machine, which in my opinion does not help to answer this question.


Oh, ok, I understand.

I agree that the empirical technique could be better, but the experiment is neat even if slightly flawed.

If the author wanted to show how much memory allocation can be done in an audio thread, they should have constructed an audio workload where they could tune the allocation and memory access patterns, and then determined exactly how much is too much. But that wasn't really what happened: as they admit at the start, they got nerd sniped and started measuring DOOM's memory allocations (also fun). We should set the original motivation aside rather than use a completely different experiment to justify something that wasn't tested.
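The tunable experiment described above could be sketched roughly like this (my own illustration; the function name, sizes, and block count are all assumptions, not from the article): simulate an audio callback that performs a configurable number of malloc/free pairs and record the worst-case callback duration, then sweep the knob against the block deadline (e.g. about 1.45 ms for 64 frames at 44.1 kHz).

```cpp
#include <algorithm>
#include <chrono>
#include <cstdlib>
#include <vector>

// Hypothetical benchmark: run `blocks` simulated audio callbacks, each doing
// `allocs_per_block` malloc/free pairs of varied sizes, and return the
// worst-case callback duration in microseconds. Real-time cares about the
// worst case, not the average, so that is what we keep.
double worst_case_us(int allocs_per_block, int blocks = 1000) {
    using clock = std::chrono::steady_clock;
    double worst = 0.0;
    for (int b = 0; b < blocks; ++b) {
        auto t0 = clock::now();
        std::vector<void*> ptrs;
        ptrs.reserve(allocs_per_block);  // keep the harness itself allocation-quiet
        for (int i = 0; i < allocs_per_block; ++i)
            ptrs.push_back(std::malloc(64 + (i % 512)));  // varied request sizes
        for (void* p : ptrs)
            std::free(p);
        double us = std::chrono::duration<double, std::micro>(clock::now() - t0).count();
        worst = std::max(worst, us);
    }
    return worst;
}
```

Comparing `worst_case_us(k)` for increasing `k` against the block deadline would give the "how much is too much" answer directly, on the machine and workload under test.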



