Why Ruby App Servers Break on MacOS High Sierra (phusion.nl)
259 points by mef on Oct 19, 2017 | 127 comments

Greg Parker, who works on the Objective-C runtime, has a blog post that goes into more detail: http://www.sealiesoftware.com/blog/archive/2017/6/5/Objectiv...

So, basically Objective-C doesn't like the RAII pattern if the resource is a process? Ouch.

I got the gist of his summary but his writing is a bit... awkward. Is English not his first language?

C has this same problem. Most of the C standard library is unsafe to use between fork and exec of a multithreaded program. This usually includes malloc and printf.

How so? After fork() you are guaranteed to only have one remaining thread in the child process, and the parent process doesn't care if there was a fork() or not.

malloc() and printf(), for example, have mutexes internal to their implementation. Suppose the parent process has two threads. One is about to do a fork, and the other is in the middle of a malloc(). The fork occurs. The parent process continues on as normal. In the child process, there is only one thread -- a clone of the forking thread, but the memory state of the child process is a full clone of the parent (except for the return value of fork(), of course).

The single thread in the child calls malloc(). But the malloc mutex is already held, because in the parent process, a thread was executing there. Unfortunately, since that thread does not exist in the child, it will never be able to release the mutex. The thread in the child is deadlocked.
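This failure mode is easy to reproduce in any runtime that mixes threads with fork(). A minimal Python sketch of the same deadlock (the threading.Lock stands in for malloc's internal mutex; a blocking acquire in the child would hang forever, so the child uses a non-blocking attempt to prove the point):

```python
import os
import threading
import time

lock = threading.Lock()   # stands in for malloc's internal mutex

def holder():
    with lock:            # held across the fork by a thread that
        time.sleep(2)     # will not exist in the child

threading.Thread(target=holder).start()
while not lock.locked():  # wait until the lock is actually taken
    time.sleep(0.01)

pid = os.fork()
if pid == 0:
    # The child copied the lock in its held state; no surviving thread
    # can ever release it, so a blocking acquire would hang forever.
    still_held = not lock.acquire(blocking=False)
    os._exit(0 if still_held else 1)
else:
    _, status = os.waitpid(pid, 0)
    child_saw_deadlock = (os.WEXITSTATUS(status) == 0)
```

The child exits cleanly only because it probes the lock without blocking; a real malloc() call in its place would simply hang.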

Well, I haven't thought about locked mutexes. It makes sense.

> Is English not his first language?

As far as I know it is.

This issue has been addressed on ruby-head


Right; this article is almost a week old at this point and some progress has been made in addressing this. In particular, I found the characterization that other app servers are not taking action on this problem to be untrue and in poor taste.

This kind of talking up of one's own product and saying "we're first; we're best!" rather than working within the ecosystem to help fix a shared issue confirms my feeling that Passenger is a bit of an outlier in the Ruby community.

It's been addressed, but it's a hack. To be safe, apps must stop all other threads before forking.

The only correct fix I can see is to have pre- and post- fork hooks in every library, which make that guarantee.
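For what it's worth, this is exactly the shape of pthread_atfork(), and Python exposes the same hook pair as os.register_at_fork() (3.7+). A sketch of how a library with an internal lock could make that guarantee (the lock name is illustrative, not any real library's):

```python
import os
import threading

log_lock = threading.Lock()   # a hypothetical library-internal lock

# Take the lock before fork() and release it on both sides afterwards,
# so the child can never inherit it in a held state.
os.register_at_fork(
    before=log_lock.acquire,
    after_in_parent=log_lock.release,
    after_in_child=log_lock.release,
)

pid = os.fork()
if pid == 0:
    with log_lock:            # safe: the after_in_child hook released it
        pass
    os._exit(0)
else:
    _, status = os.waitpid(pid, 0)
    child_ok = (os.WEXITSTATUS(status) == 0)
```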

In Puma, we emit a warning if we detect any additional threads running prior to fork.
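A check along those lines is straightforward to sketch; here is roughly what it could look like in Python (the function name is made up, and this is not Puma's actual implementation):

```python
import threading
import time
import warnings

def warn_if_forking_unsafe():
    """Count threads other than the current one and warn if any exist;
    roughly the kind of pre-fork check described above."""
    extra = [t for t in threading.enumerate()
             if t is not threading.current_thread()]
    if extra:
        warnings.warn("fork() requested with %d other thread(s) running"
                      % len(extra))
    return len(extra)

base = warn_if_forking_unsafe()                  # whatever is already alive
t = threading.Thread(target=time.sleep, args=(0.2,))
t.start()
count = warn_if_forking_unsafe()                 # now at least one more
t.join()
```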

> This cryptic error

Well....I don't think I've seen many programmer-to-programmer errors that are less cryptic than the one described in the article. It's actually quite amazing how much explanation you sometimes get from Cocoa!

It's only cryptic if you don't read it, right?!?

I think it's mostly unclear to non-Apple developers.

Python suffers from the same issue but the response there has largely been to stop using modules that use objc.

I was thinking about Python when reading this; the real culprit is not objc but the third-party modules.

If I understand correctly, this problem is caused when you call initialize in another thread, and while that is running, you fork.

It affects these app servers because this is triggered at "require" (or "import") time. This is madness; no module should run code just because you import it. OK, I have been guilty of that, too. But if absolutely necessary, keep it to a minimum. It breaks all kinds of things when module imports are not side-effect-free.

Launching a thread and (indirectly) taking a mutex is definitely not something you should do on module import!
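In Python terms, the anti-pattern looks like this: a hypothetical module body that starts a background worker as a side effect of being imported.

```python
import threading
import time

def _flush_loop():
    time.sleep(0.2)       # placeholder for real background work

# Anti-pattern: this runs as a side effect of merely importing the module,
# so any later fork() in the application inherits a live-thread hazard.
_worker = threading.Thread(target=_flush_loop, daemon=True)
_worker.start()
started_at_import = _worker.is_alive()

# The safer shape is an explicit init()/start() function that the
# application calls itself, after any forking has already happened.
```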

As far as I understood, the module isn't launching threads at import time. On the contrary, the problem is that since pg is linked against macOS's Kerberos and LDAP frameworks, as soon as the Objective-C runtime realizes that the +initialize method of NSPlaceholderDictionary was called after fork(), it crashes:

One of the rules that Apple defined is that you may not call Foundation class initializers after forking.

In that case, the new behavior doesn't make sense. You could call `fork` as the very first thing in a program, and then do all the stuff in the subprocess, and I wouldn't expect it to make a big difference.

Maybe there is an internal `mark_exec_as_called()` function that you could exploit, if you are forking without exec...

Well, something must be spawning a thread first. From the Obj-C source¹, the comment on performForkChildInitialize() (the point where this crash happens) says

> Exception: processes that are single-threaded when fork() is called have no restrictions on +initialize in the child. Examples: sshd and httpd.

¹ https://opensource.apple.com/source/objc4/objc4-723/runtime/...

This is so incredibly Apple :)

The breakage, I mean. To clarify a bit, for better or for worse, this is what Microsoft does, totally different psychology: https://blogs.msdn.microsoft.com/oldnewthing/20031223-00/?p=...

So Unix people had this function called `gets` that was defined like this:

    char *gets(char *at);
In the early days, if someone wanted a string you could do:

    x = gets(sbrk(0));
    brk(x + strlen(x) + 1);
And this is perfectly safe, but it is perhaps the only safe way to use `gets`. See, most people wanted to write:

    char buf[99];
    x = gets(buf);
And this is not safe because `gets` doesn't know that `buf` only has room for 99 bytes.

The API has a choice:

a) They can make it harder to do the wrong thing; make `gets` generate errors, warnings, crash if you use it, etc. This requires people fix their programs. That's what GNU did for `gets` and it's what Apple is doing here.

b) They can change the rules: It's possible to modify the ABI and require markers or a mechanism to detect where the edge of the buffer is. This requires people recompile their programs. I think Zeta-C had "fat pointers" so gets should be safe there[1]

c) They can work around: If you have enough resources, you can look at the programs using this API and figure out what their buffer sizes are and hardcode that into the API provider. Microsoft has famously done this for SimCity[2]

That's it. They can't really do anything else: The function is difficult to use right and programmers do the easiest most-obvious thing they can. oh i need to get a string, so i'll use gets... but i need two strings so....

Anyway, I'm not aware of any good rules for choosing which way to go: Less total effort seems like a good metric, but this is very hard to estimate when you live on an island and don't know how other people use your software.

Memory corruption is serious though: It's often very easy to turn into a security hole, so I generally advocate doing something. All of the people just disabling this security feature make me nervous. I wonder how many of them run websites that I use...

[1]: http://www.bitsavers.org/bits/TI/Explorer/zeta-c/

[2]: https://news.ycombinator.com/item?id=2281932

Wow, somebody remembers Zeta-C! Yes, you're right, 'gets' was safe in Zeta-C. (I'm the author.)

Modern brk() can return -1 to indicate there's insufficient space above the current break address to allocate the requested additional space. I assume "in the early days" brk() could also fail to allocate space (after all the space available is not infinite) so "perfectly safe" comes with caveats.

Surely that gets(sbrk(0)) will not work since sbrk(0) returns a pointer to unmapped memory. Maybe you wanted sbrk(BUFSIZ)?

Well, no actually: That still limits you to a gets of BUFSIZ.

Just as we often grow the stack with page faults, you could grow the heap with page faults. Modern UNIX doesn't, because this is another one of those "almost certainly a mistake" situations, but:

    void grow(int sig) { sbrk(PAGESZ); }
    signal(SIGSEGV, grow);
should work.

fork() is not gets and you don't f*ck with process creation. If a programmer commits an error that deadlocks or corrupts data by using fork() unsafely they really should have known what they were doing in the first place.

I honestly prefer an early crash to random memory corruption

Apple isn’t breaking old/existing binaries, the new behavior only applies to binaries compiled against the 10.13 SDK.

is that so much better? I dread every new release of macos because it always breaks some part of my toolchain (either gcc and friends, or valgrind or whatever). It wasn't so bad when a new os came every few years. now they break stuff every year. six months since I bought a new machine and my gdb still isn't working quite right.

Apple is not responsible for every piece of third party software, especially niche tools like gcc, gdb and valgrind. It is up to the maintainers of those tools to keep up with new macOS releases. Beta releases of macOS are made available early to developers for exactly this reason.

> It is up to the maintainers of those tools to keep up with new macOS releases.

Why should stable tools which have been working for decades constantly have to change their source code to keep working on Apple's OS, a piece of software whose primary task, no less, is running the software the user wants to run?

Apple should make sure the user's software runs on its OS, not the other way around. Stop apologizing for what is essentially the world's wealthiest corporation being lazy and putting the maintenance cost of its own OS on everyone else.

Backwards compatibility has a high cost over the long term - if you're not supporting it, you're free to make your software better at your own whim. That's a strong position to be in.

I would say the primary purpose of macos isn't to cater for everyone, it's to make using a computer a pleasurable experience for the non technical. It's about bringing computing to the masses.

If you're in a masochist niche, good for you, but you're better off on Linux.

For me, macOS is pleasurable for the most part, and I'm technical enough to work around any problems I face. On Windows/Linux I have to do that for features that aren't even nearly as advanced.

EDIT: It would be great if downvoters took the time to indicate which part specifically they think is wrong. I've edited the post to remove unnecessary antagonizing parts. Remember that this is merely my opinion.

> features that aren't even nearly advanced

Windows, while not UNIX-based (which can be an actual problem for technically-inclined users, e.g. developers), is vastly more advanced than macOS, especially for business users. Generally, for mainstream users, Windows and macOS are both intuitive after enough time spent using them; what's off-putting is to switch from one to the other (e.g. shortcuts, etc.) Window management (I mean the apps' windows, not the OS) is notoriously bad on OSX these days compared to most other Desktop Environments.

Likewise for Linux: certainly the least user-friendly, but second to none for advanced stuff (e.g. Deep Learning, or 'edgy' virtualization, for instance involving GPU pass-through).

Don't get this the wrong way, macOS is OK for the masses, but on par with Windows 10 nowadays.

This is a personal opinion based on using all three OS daily, and from observing friends/family using any of them. As long as you make the right choice, there's no 'better' OS, just different pros and cons that suit each user more or less.

Most notably, I now have to troubleshoot my mother's workflow on OSX/iOS (simple stuff, mostly related to printing and sharing pictures/scans), it wasn't so a few years ago. As of 2017 I personally have a much simpler experience out of the box on Android+Windows.

You’re getting downvotes because you’re making statements that

a) show complete ignorance about modern OS design & technology, and

b) conflate how “advanced” an OS is with you & your mother’s UI preferences.

I am sorry but I fail to understand your arguments. I assure you that I am humbly trying to understand and would like this digression to move past sarcasm. I'm guilty of the first strike in that regard, but I feel it's getting in the way of the discussion at this point.

a) I am willing to accept your statement (I am no OS expert, I have no formal CS training; I'm just a developer of rather high-level software and I've only been tinkering at home with computers for a short couple of decades); however please note that I was merely opposing the parent post's implication that macOS is the most advanced OS.

I suppose this is perhaps a matter of perspective: I define and judge "advanced" here not from a CS standpoint but rather from a real-world pragmatic standpoint: does it do the job, for whom, and how well? I observe that macOS isn't dominant in business nor in server rooms of any kind, and that Linux is pretty much the only relevant solution for most cutting-edge computing projects. Please help me understand how macOS has more "advanced features", as stated by the poster I was replying to. I sincerely fail to see what macOS has on Linux or Windows nowadays. I, for one, can't do anything better on it.

b) I see your argument as slightly derogatory, but let's move past that. Surely you understood that using anecdotal arguments, invoking my mother of all users (!), had the evident purpose of downplaying my opinion to just that: an opinion, not a scientific judgment about the advancement of an OS; thereby implying that the parent post I was replying to had no more grounds than mere subjective opinions to make its statements. At least, none that I could find. There is no conflating of anything, but perhaps that was due to bad wording on my part, in which case I understand the negative reaction (but I stand by my opinions: I vastly prefer Windows 10 UX to macOS as of 2017, and I should perhaps add that I was a 100% Mac user from 2008 to 2016, with the notable exception of casual gaming, which I've since quit and which does not even factor into my current opinion).

I'd gladly hear answers about the respective advancement of each major desktop OS because I'm truly interested in the matter, if only from a dev perspective (and obviously as a consumer/user).

> I define and judge "advanced" here not from a CS standpoint but rather from a real-world pragmatic standpoint

Which is completely subjective and also not what “advanced” is usually used in reference to when it comes to OSes. Perhaps you meant “intuitive”?

Either way it’s subjective so you could have just distilled both your posts (and points) to this:

“Personally, I don’t like it.”

That’s fine. You do you. No harm no foul. Would have saved everyone the essays & you typing them.

Related: you... overwrite. You’re incredibly verbose for the amount of data you’re delivering. That can come across as patronizing or condescending. To use a $5 word: you bloviate.

I don’t say this to belittle; it’s merely feedback & trying to help. Tone can be hard in text.

I upvoted you for pointing out a relevant problem in my communication. I agree and do/will try to improve. Thanks for the feedback.

(In this case I wanted to be as formal as possible to convey respect. English isn't my mother tongue so I may tend to overdo it).

c) complaining about downvotes

Ha, obviously. : ) I know that. But I think I was careful enough to word my edit as "please explain"; nowhere do I complain about the downvoting itself. I accept it, period. I am genuinely interested to know which part of my opinion is flawed.

It never helps does it?

When shit breaks in the new version, people should stop upgrading to the latest macos!

Apple has found that they don't need to put effort into API backwards compatibility because people are still freaking upgrading. It costs money to ensure backwards compatibility, so not doing it is desirable. The end user has to put pressure on Apple to do it.

You missed the fact that fixing problems and not stacking technical debt for the sake of "backwards compatibility" is also desirable.

Any particular reason you decided to quote backwards compatibility, as if it's not a real thing?

Quotes are not only used to denote non-real things -- if anything that's a quite recent (couple decades) trend.


Well, with the advent of irony it really took off.

Developers need to support Apple as much as they deserve, and really do they deserve much?

On hardware: the Mac Pro (not innovative? How about just making one workstation for video production that is valuable) and the Touch Bar. On software: Final Cut Pro for the past 4 years, and the crashes I see in my apps. Then macOS's move to iOS touch gestures on the touchpad, which drives me crazy since I use it in dock mode. And the one macOS/OS X thing that is just incredible: the OPTIONAL case sensitivity. If you haven't had to work with Adobe products on a Mac this might not seem like such a big deal, but ugh.

My mind is blown every time I have to use macOS for some job. Sure, they have an optionally case-sensitive command line that works almost like a Unix (see: optional case sensitivity, in 2017), but it really is just a mess unless your workflow is like everyone else's. No flexibility if you're working with a team.

You have to break things occasionally or you end up in backwards-compatibility hell where every bug is a feature that is absolutely essential to somebody's workflow.


Well, it depends on your point of view.

Microsoft, and Linux (just the kernel I'm talking about here), have both decided the point of an OS is to run programs, so with each update they both make heroic efforts to not break userspace, often adding code just to make sure old programs that did weird things don't break.

Apple have decided to go a different route, and leave a trail of programs just a few years old that are forever unrunnable, as they won't even distribute old copies of their OSes. However, it seems many users are willing to take that choice, as we can see from their success.

> they won't even distribute old copies of their OSes

They do, actually; if you bought a copy it should be available in the App Store. They even sell physical installers for old operating systems: https://www.apple.com/shop/product/MC573Z/A/mac-os-x-106-sno...

That was the policy for several years, but this year they changed it - the Sierra installer is no longer in the App Store after you upgrade.

gcc isn't exactly a "niche tool".

It isn't the official compiler on Apple platforms either; that ship sailed a long time ago.

Google has also shown the door to it on their OSes.

Lovers of BSD licenses will miss FSF software in a couple of years.

The really sad part is that as their freedom decreases they seem to just accept it as a new normal rather than as a cost that they chose to pay, and the people who know or even sometimes just remember things differently end up ignored as if they were ranting about some crazy impossibility.

Let's take a look at an alternate reality where Clang didn't exist, and GCC was the only compiler worth using–wait, we don't have to do that. Just look at Bash. It's neglected and stuck on 3.2.57 forever because Apple doesn't want to deal with GPLv3. Do you prefer that to the BSD-licensed solution that LLVM is, where it's still open source and actively maintained, rather than left by the wayside because Apple just refused to play nice?

Bash is doing just fine; it's not stuck anywhere (4.4.12 on my machine). Mac users are neglected and stuck, but Bash isn't.

> Do you prefer that to the BSD-licensed solution that LLVM is, where it's still open source and actively maintained, rather than left by the wayside because Apple just refused to play nice?

I'd rather have the time, money and energy go into a free software commons. Anyone contributing to LLVM is enabling proprietary software. No thanks.

> I'd rather have the time, money and energy go into a free software commons. Anyone contributing to LLVM is enabling proprietary software.

Am I clear in understanding that your statement is "people contributing to this piece of free software are not contributing to a free software commons"?

Because that makes no sense to me, and it's hard to avoid interpreting the dismissal as pure zealotry.

They are also contributing to proprietary software that uses their software.

Being against that is a dogmatic approach that accepts no contributions to the cause that also help other causes, because only the cause may win.

I'm not really for it, but I understand where RMS and his allies are coming from. Proprietary software makers tend to view their competitors, free or otherwise, as an existential threat, so it is "fair" for the free software folk to view them via the same lens.

The part you are overlooking is the contributions that OEMs stop making, because thanks to clang they no longer have to.

Not everyone contributes back to LLVM, thanks to the license.

Just like the *BSDs hardly see the same contributions from all those embedded devices that Linux enjoys.

People using BSD licensed code have an incentive to contribute changes that aren't "secret sauce" to make maintaining their forks easier. Sony has pushed code they wrote for the PS4 back upstream to make keeping current with upstream easier, for example. Juniper still pushes patches upstream, and they've got a much larger team working on Junos than Sony does on Orbis.

FOSS religion is only going to fade as we get further from 90s Microsoft.

obvious troll is obvious.

specially with that username. lol. thanks for the laugh.

How many people who buy an Apple computer need to install and run a C compiler?

EDIT: BTW, it is trivial to install clang without Xcode on a pristine Mac. Just typing cc in Terminal will pop up an installer that will download and install clang (and possibly other tools)

Not everything needs to be used by the end-user directly. I bet gcc is used countless times just to build OSX anyway.

Clang is used to build macOS and 99% of macOS/iOS applications, not gcc, and Apple maintains clang for you.

Ah my bad, I did not know that.

OS X uses clang.


This crosses into personal attack, which is a bannable offense on HN. Would you please read https://news.ycombinator.com/newsguidelines.html and post only civil, substantive comments from now on?

If I ruled MS or was a big boss of Raymond, I would ask him to show a big screen-dimming warning before running these programs, like “This application uses unsafe methods of interacting with your computer and works only because we decided to support it, spending two weeks to make it work. Still no safety guarantees, btw. Please ask Vendor, Inc. to read http://doclink on how to do his shit the right way.” [Okay] [Damn, FTS] [Send email for me]

No chances I make it to the top of MS though.

As he nicely explains in some other blog posts, sometimes Vendor, Inc. went out of business in the 80's but the apps are still in use. So you'd just annoy the poor dweeb from Human Resources that has to use the OS and app provided by his employer, the only job he could find in Podunk :)

It is very different, but I wouldn’t argue better. It’s led to a lot of their problems with Windows APIs.

that is exactly what I dislike about Microsoft. The right answer is to detect those dodgy applications and disable them, and notify the user that the program is "bad", thus providing a severe disincentive to software authors to ever do such things again.

That mechanism also exists, notably in Vista and 7 there were a number of older applications where Windows would display a warning prior to running it that the application has known incompatibilities and may not run correctly.

However, a user who just upgraded Windows and half their programs stop working will simply not upgrade. In many cases such dodgy applications are also never fixed because the vendor doesn't exist anymore, doesn't care, or the flaw is in some library they use and they don't really want to invest that much time to change it.

Besides, detecting such behaviour is pretty much the same as fixing it with a compat shim, since you have to do more or less the same work. So from MS's perspective the benefit of working around the buggy behaviour is much more prominent than trying to discipline developers by making users suffer.

Why should users pay for developers' poor choices? This is a great way to piss off people who would blame Microsoft.

There isn't a great answer to this that satisfies everyone.

Huh? That is kind of the deal isn't it - you pick your developer and if that developer makes bad choices, you get a bad product.

If we don't push back against bad practices by developers we never improve the quality of the ecosystem.

In this specific instance, the user has a choice - don't upgrade.

> the user has a choice - don't upgrade

That's how you get everyone stuck on Windows XP. There's no good choice for the user here–either you're on an old OS that's missing features or security updates, or you can't run software that used to work.

And thus we end up, over time, with an OS that is a pile of hacks upon hacks to work around broken apps, severely limiting its ability to provide a clean, sensible, performant set of OS services.

I regularly read The Old New Thing and just shake my head in wonder at how much more MS could have accomplished without that baggage.

I would much rather users were occasionally forced to upgrade their buggy software.

> I would much rather users were occasionally forced to upgrade their buggy software.

Yes, that's the ideal solution, but backwards compatibility overall is a huge can of worms that doesn't really have a good solution. In this case, it might not be possible to upgrade the software (e.g. it's unmaintained, or was contracted out, etc.)

Note that nowadays there aren't many of such compat hacks in the actual Windows codebase. Most applications can be coaxed to work by just shimming a few API calls and those shims can be kept separately and don't impact other software that doesn't need them. The simplest one of those would be the one that simply pretends to be an older Windows version for broken version checks to still work. There are others where HeapAlloc simply allocates a little more to prevent a buffer overflow, etc.

The OS architecture no longer suffers from such things.

That’s a very rosy outlook that doesn’t match with the few Windows devs I’ve spoken to.

It’s better now - because they moved to abstracting a lot of that stuff as you said - but the number of man hours spent on it and the knock on effects of the overall system design is non-trivial.

Plus, if they’d not had that policy they could have been where they are now in the early 2000s

Kind of like OSX was.

I think you fail to understand just how involved, expensive, and business critical some of this dodgy software actually is. Some of this crap software costs millions of dollars and takes years to upgrade.

Ignoring even the security implications, you can't simply not upgrade either as software doesn't exist in a vacuum. All the other software/hardware involved might need newer versions or bug fixes. Microsoft doesn't even support newer Intel chipsets in versions of Windows older than 10 -- that's only even possible because 10 is highly compatible with older versions of their OS.

Apple does this too. Run "strings" on various libraries in Apple's OSes and you will find plenty of instances of bundle identifiers of non-Apple apps.

Interesting discussion. If these were user-space threads, like FreeBSD ~20 years ago, there'd be no problem. When fork() is called, the whole user-space threads package would be forked, along with all the threads.

So the obvious question is whether it's fundamental that with kernel threads the fork() system call doesn't clone all the other threads in the process? Yes, that's not how it's done, but could Apple choose to implement a new fork_all() system call? I imagine it wouldn't be easy - you'd need to pause all the running threads while you copied state, but is there a reason it's actually not possible?

Is this what you want most of the time?

If you're just going fork() && exec() then why would you copy all that state just to run some subprogram?

Is that what you want any of the time?

This prefork implementation is silly. Either do prefork after initialisation so you can take the benefits of COW, or don't bother: Just use SO_REUSEPORT and run 10 copies of your server. This distributes the TCP traffic and provides you an excellent way to upgrade-in-place (just roll up the new version as you roll-down old ones).
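For reference, the SO_REUSEPORT approach looks like this in Python (requires a platform that supports the option, e.g. Linux ≥ 3.9 or macOS; on Linux the kernel load-balances incoming connections across the listeners):

```python
import socket

def listener(port):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Every listener sets SO_REUSEPORT before bind(), which is what
    # allows multiple sockets to bind the same address:port.
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
    s.bind(("127.0.0.1", port))
    s.listen()
    return s

a = listener(0)                   # kernel picks a free port
port = a.getsockname()[1]
b = listener(port)                # second bind to the same port succeeds
same_port = (b.getsockname()[1] == port)
```

In the "run 10 copies" scheme, each server process would create its own such listener instead of inheriting one from a forking parent.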

I agree - most of the time, the current fork() implementation is what you want, so it's the right choice. The conventional wisdom is that you shouldn't spawn threads and then fork(), and so long as you stick to this, you're fine.

Today though, everyone loves threads. It's hard to know precisely what happens in some low-level library, so it's hard to be sure that you don't have any threads left running after initialization. This is the subject of the OP.

My question though is broader - if we chose, could we add a version of fork() that did clone all the threads? I'm not entirely sure what it would be used for, but I'm sure there would be uses. Likely some of those would be for increased security, as processes provide stronger isolation.

> I'm not entirely sure what it would be used for, but I'm sure there would be uses.

I suspect strongly there aren't.

One of the biggest problems you have to contend with is around mutexes (and the like) and their state. If you copy them, you can double-book an external resource (like storage or something), and if you don't, you almost certainly deadlock.

Another is any thread waiting on a resource (like reading a file descriptor) or writing to the disk (do you write twice?) and so on.

Because that's what everyone who uses fork wants: memory access. Otherwise they would use posix_spawn or something.

Thinking about it some more, one problem would be what to do with threads that have just called a system call and are waiting for a response. To have both parent and child process system calls complete without errors would require cloning the kernel state, and that's not trivial. In some cases, there's probably no good answer, short of causing one of the two system calls to return an error, and that in itself would confuse a lot of user-space software. I suppose you could return something similar to EAGAIN, but you'd be dependent on user-space handling that cleanly. Maybe you could hide this under libc?

So this is a worse-is-better design decision driven by a previous worse-is-better design decision. If system calls were interruptible, then the problem of threads in system calls would go away.

EINTR? I suspect some calls are defined to never return it, but it may be just de facto, not strict rule.

Edit: simply signalling all threads with SIGFORK, waiting for a handler to return, and freezing each thread at an SA_RESTART point feels like the right way to do it. Why did POSIX decide to fork() only one thread by default at all?

Because the alternatives are worse.

Solaris has forkall(2) and forkallx(2) system calls.

Do developers just call whatever function seems to work without reading the docs? It doesn't work for low level programming.

    ~ man fork
    CAVEATS: There are limits to what you can do in the child process. To be totally safe you should restrict yourself to only executing async-signal safe operations until such time as one of the exec functions is called. All APIs, including global data symbols, in any framework or library should be assumed to be unsafe after a fork() unless explicitly documented...

My suspicion is that this behavior is left over from early implementations of Ruby where threading wasn’t as common and so wasn’t as large of a concern.

It's silly to point at this documentation as if it's something people will read in this case. People, or specifically web developers, are not calling fork(). What they're doing is using a supported language, a web app server, and a database access library, and enabling the preloading option. As seen in the bug report in Ruby, this comes down to coordination between those projects. Each one is correct on its own. Putting them together as a web developer shouldn't involve analysing the syscalls you're going to make, either.

Yes. Programming by coincidence is the norm. Seems to work, must be right.

Welcome to the stackoverflow era of programming, where technical specs don't matter and frameworks are life.

This has been around since well before stack overflow

Here are a plethora of examples from the Windows 95 days: http://ptgmedia.pearsoncmg.com/images/9780321440303/samplech...

fork() fundamentally does not make sense as the de-facto method of starting a new process. Why aren't people using posix_spawn() by default?

Because that doesn't share memory.

You can share memory in other ways, via mmap or SysV shared memory.

You could imagine an implementation of the Ruby interpreter which kept all the expensive and shareable state (bytecode and class metadata, I suppose) in a shared memory segment, and used spawning to create worker processes with their own process heaps, but sharing that segment.

It would be a lot of work to implement. It would probably be easier just to get Ruby to use threads properly.

> bytecode and class metadata

There's a bit more to it than that, as programs can run code and load data prefork. Pretty important for things like large config files.

The implementation of fork() is defined by the POSIX standard, and Apple should respect it. There are tons of apps that have been written using fork(); are you going to change all of them to use posix_spawn()?

The POSIX standard says you can't call anything but async-signal-safe functions after fork() in a threaded program. Application developers should respect that.

for other fun low-level High Sierra issues, see the PostgreSQL msync() thread: https://www.postgresql.org/message-id/flat/13746.1506974083%...

Interesting. Mapping 64k at a time seems a bit excessive. If I call mmap it's to map everything I'm going to need, but maybe I am missing context.

PostgreSQL still supports 32-bit platforms, and heap and index files can get large enough that it's not feasible to mmap them into a 32-bit process. As for the specific number of 64k, though, I don't know why they chose that; writing 8 pages to disk at a time (especially sequentially) doesn't really take advantage of modern hardware well.

The discussion on the Ruby core team issue tracker is also very informative: https://bugs.ruby-lang.org/issues/14009

I've hit similar issues with uwsgi in recent memory (though pre high-sierra), where an OS upgrade caused it to start segfaulting when using the `requests` lib inside CoreFoundation somewhere (though of course entirely unrelated to the new forking changes).

Maybe this? Though the resolution was to disable uwsgi proxying globally... https://stackoverflow.com/questions/35650520/uwsgi-segmentat...

What does Linux do right now?

If you fork after launching threads, your program is in an unexpected state, so it may corrupt data or crash.

The same happens on macOS, it's just that Apple added a contract that you can't initialize ObjC classes after forking.

Why do people run applications servers on macOS?

Maybe they're developers running a local copy of the projects they're working on?

The underlying problem also affects Linux.

Apple is probably tired of bug reports to their software that really stem from some other place.

I've mostly seen it for managing multiple Macs in a classroom environment. Not sure I've seen it used heavily in production to run web servers though. Was curious and was able to find sites that do let you rent Macs that you remote into, interesting setup since you know exactly the hardware you're getting into.

LOL. macOS breaks fork() to avoid state inconsistency in threaded applications. How about pthread_atfork() semantics? But, as usual, Apple heavy-hands userspace and breaks things. Nothing new to see here, move on.

What, you don't like the fact that Apple sucks for breaking userspace (as usual), or that pthread_atfork()-type approaches should be in every programmer's toolbox?

POSIX has deprecated pthread_atfork because it is unworkable. In particular, atfork handlers can only call AS-safe functions, which means they're useless.

As you say, except I explicitly noted '..type approaches' and '..semantics'. If a library designer does things in a way that makes you doubtful of its state, then don't fork. If you do fork, block signals and exec. It doesn't remove the race, but it does help your peace of mind (I did what I could).

Apple still sucks BTW. Heavy handed nonsense. Let developers deal with the consequences of their actions.

I agree the bug should be fixed. But why not just use Docker, then run Rails as if it's on Ubuntu/Linux on your Mac? It's miserable having Windows/Mac/etc.-specific issues.

Docker grows to >8GB. I have 13GB free, 486GB used. My free space fluctuates by as much as 20GB depending on how much RAM I'm using. (And by "I'm" I mean "Chrome.")

I've never felt the need to install docker on my local dev environment. It's great for production, and I'm sure it's great for people who can afford the disk space. But when space became tight, Docker was the first to get the axe. I haven't missed it yet.

I'm just curious: if you develop directly on the Mac instead of using Docker, how do you catch all the platform-specific issues before deploying to a container production environment? Isn't the whole point of using Docker to minimize the differences between production and development?

Deploy to a container test environment first? I hope no one is going from development -> production.

I love this geek-porn stuff, and the Phusion guys never fail to deliver it ;)

But my question is: is this really that important? I mostly use macOS for development, and I don't feel that losing the preforking model has that much impact on the development cycle.

It is important for dev-prod parity. You will want to test whether your code is compatible with preforking, which you will likely use in production.

You are not affected, but there are people in the world besides yourself who do things differently than you.

