
Obviously it is possible to write threaded C++ programs. And there are experienced people like you who can do it well, with enough discipline, good design, and so on. But I think the point of the article is that for most programmers it is very easy to make mistakes and shoot themselves in the foot with threads and shared memory. It could even be something like using a 3rd party library whose initialization context can't be shared, but a mistake was made and it did end up being shared by accident, say passed to some worker threads.
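
To make that concrete, here is a minimal sketch of how the accidental sharing can look; LibraryContext is a hypothetical stand-in for a 3rd party handle that is only safe to use from one thread:

    #include <thread>
    #include <vector>

    // Hypothetical stand-in for a 3rd-party context that is only safe
    // to use from the thread that created it.
    struct LibraryContext {
        int state = 0;
        void process(int item) { state += item; }  // unsynchronized mutation
    };

    int main() {
        LibraryContext ctx;                 // created once on the main thread
        std::vector<std::thread> workers;
        for (int i = 0; i < 4; ++i) {
            // Bug: every worker captures the *same* context by reference,
            // so the unsynchronized writes race against each other.
            workers.emplace_back([&ctx, i] { ctx.process(i); });
        }
        for (auto& t : workers) t.join();
        // The fix is one LibraryContext per worker (or a documented lock).
    }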

> What leads to problems in threaded C++ programs is unprotected, shared state.

That's one of the main problems, but I don't think people start by saying "we'll just have this unprotected shared state and hope for the best"; that shared state ends up being shared by accident or as a bug. I've seen enough of those and they're not fun to debug (the hardware environment is slightly different, say cache sizes are a bit off, making it more likely to happen at a customer's site, for example, on a Wednesday evening at 9pm but never during QA testing).
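
For contrast, a minimal sketch of what deliberately protected shared state looks like, with an illustrative counter guarded by a mutex (and an atomic as the lock-free alternative):

    #include <atomic>
    #include <mutex>
    #include <thread>
    #include <vector>

    // Illustrative shared state, protected explicitly rather than by accident.
    std::mutex counter_mutex;
    long counter = 0;                    // guarded by counter_mutex
    std::atomic<long> atomic_counter{0}; // or: make the state atomic

    int main() {
        std::vector<std::thread> workers;
        for (int i = 0; i < 4; ++i) {
            workers.emplace_back([] {
                for (int j = 0; j < 100000; ++j) {
                    {
                        std::lock_guard<std::mutex> lock(counter_mutex);
                        ++counter;
                    }
                    ++atomic_counter;    // lock-free alternative
                }
            });
        }
        for (auto& t : workers) t.join();
        // Both counters end up at 400000; drop the lock_guard and the plain
        // counter becomes the kind of race that only shows up on some
        // machines, some evenings.
    }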

Another place I've seen bugs is mixing non-blocking (select / epoll / etc.) based callbacks with threads. It's pretty easy to get tangled there. Throw signal handlers in and now it is very easy to end up with a spaghetti mess.
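
One well-known way to keep the signal-handler part untangled (not something the comment spells out, just one option) is the self-pipe trick: the handler only writes a byte to a pipe, and the event loop picks it up like any other descriptor. A rough Linux sketch, error handling omitted:

    #include <csignal>
    #include <cstdio>
    #include <poll.h>
    #include <unistd.h>

    static int sig_pipe[2];  // [0] read end for the loop, [1] write end for the handler

    // Only async-signal-safe calls are allowed in a handler; write() is one.
    extern "C" void on_sigint(int) {
        char byte = 1;
        (void)write(sig_pipe[1], &byte, 1);
    }

    int main() {
        pipe(sig_pipe);
        std::signal(SIGINT, on_sigint);

        pollfd fds[1] = {{sig_pipe[0], POLLIN, 0}};
        for (;;) {
            int n = poll(fds, 1, -1);     // in real code: all the other fds too
            if (n > 0 && (fds[0].revents & POLLIN)) {
                char byte;
                read(sig_pipe[0], &byte, 1);
                std::puts("signal handled in the event loop, not in the handler");
                break;
            }
        }
    }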

Even worse, there is a difference between "we'll start with a clean, sane threaded design from the start" and "we'll add a bit of threading here in this corner for extra performance". That second case is much worse and can result in subtle and tricky bugs. Sometimes it is not easy to determine whether code is re-entrant in a large code-base.
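
As a hedged illustration of why that question is hard, here is the sort of helper that sits quietly in a large code-base; format_id is a hypothetical name:

    #include <cstdio>
    #include <thread>

    // Fine for years in single-threaded code, silently broken once some
    // corner of the program starts calling it from a second thread.
    const char* format_id(int id) {
        static char buffer[32];             // shared, hidden state
        std::snprintf(buffer, sizeof buffer, "id-%d", id);
        return buffer;                      // not re-entrant, not thread-safe
    }

    int main() {
        std::thread t([] { std::printf("%s\n", format_id(1)); });
        std::printf("%s\n", format_id(2));  // may race with the other call
        t.join();
    }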

Interestingly, and somewhat tongue in cheek, someone (I think Joe Armstrong, but I may be wrong) said to try to think about your programming environment as an operating system. It is 2017; most sane operating systems have processes with isolated heaps, startup supervision (so services can be started / stopped as groups), preemption-based concurrency (processes don't have to explicitly yield), and so on. So everyone agrees that's sane and normal, and would say that putting their latest production release on Windows 3.1 would not be a good idea. Why do we then do it with our programming environment? A bunch of C++ threads sharing memory is a bit like that Windows 3.1 environment, where the word processor crashes because the calculator or a game overwrote its memory.




> So everyone agrees that's sane and normal, and would say that putting their latest production release on Windows 3.1 would not be a good idea. Why do we then do it with our programming environment?

Because of people: convincing devs to adopt new ways is an uphill battle unless they have tried them themselves.

Joe Duffy's keynote at RustConf just went live, and one of the reasons for Midori's adoption failure was the difficulty of convincing Windows kernel devs of better ways of programming, in spite of them having Midori running in front of them.


I think the OP's attitude is still correct. Threaded programs are hard. They take experience, discipline, and good design. There is no band-aid for that. The technology is helping with things like thread-local storage, built-in actor models, borrow checkers, etc., but those only help the design aspect. It still takes discipline, like knowing not to mix async callbacks with/across threads. You are basically writing your own little kernel, and writing a kernel is hard, but once you know the rules it can be done.
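
As one small example of that kind of help, C++11's thread_local gives each thread its own copy of a variable, which removes one class of accidental sharing without any locking (it is, of course, no substitute for the design and discipline mentioned above):

    #include <cstdio>
    #include <thread>

    // Each thread gets its own counter; no locking, no sharing.
    thread_local int per_thread_counter = 0;

    void work() {
        for (int i = 0; i < 1000; ++i) ++per_thread_counter;
        // Each thread prints 1000, regardless of what the others do.
        std::printf("%d\n", per_thread_counter);
    }

    int main() {
        std::thread a(work), b(work);
        a.join();
        b.join();
    }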


I think rdtsc's point was that writing your own kernel in the present day is a terrible mistake unless you genuinely need something not available otherwise.


> So everyone agrees that's sane and normal, and would say that putting their latest production release on Windows 3.1 would not be a good idea.

And yet unikernels are here and used.


thank you



