Note: Niklaus Wirth's solution to the same problem was much better: P-code. He made an idealized assembler that anyone could port to any machine. His compiler and standard library targeted it. It kept all the design advantages of Pascal with even more implementation simplicity than C. It got ported to something like 70 architectures/machines.
Now, for OS's. Let's start with Burroughs MCP. The Burroughs OS was written in a high-level language (ALGOL variant), supported interface checks for all function calls, bounds-checked arrays, protected the stack, had code vs data checking, used virtual memory, and so on. That's awesome and might have given hackers a fight!
Later on, MULTICS tried to make a computer as reliable as a utility with a layered, ring-protected design, implementation in PL/I to reduce language-related defects, a stack that grew away from return addresses to blunt overflows, no support for null-terminated strings (C's favorite), and more. It was indeed very reliable, easy to use, and seemed easy to maintain. You'd have to ask a Multician to be sure.
So, the OS's were comprehensible, used languages that made reliability/security easier, had interface/array/stack protections of various sorts, consistent design, and all kinds of features. The problem? Mainframes were expensive. The minicomputers Thompson and Ritchie had were affordable, but their proprietary OS's were along the lines of DOS. You can't do great language or OS architecture on a PDP-11 because it's barely a computer. It would still be useful, they thought, if it had just enough of a real language and OS to do useful work.
So, they designed a language and OS where simplicity dominated everything. They took out almost all the features that improved safety, security, and maintenance, and used a monolithic style for the kernel. Even the inefficient way UNIX apps share data was influenced by hardware constraints. The hardware limitations are also why users had to look for executables in /bin or /usr/bin for decades: the original machine ran out of space on one disk, so they mounted another to hold the rest of the executables. All that crap is still there because fixing it might break apps and then require fixing them too. Curious: did you think these were clever design decisions, rather than "we can't do better without running out of memory or buying a real computer, so let's just (insert long-term design problem here)"?
The overall philosophy is described in Gabriel's Worse is Better essay:
As Gabriel noted, UNIX's simplicity, source availability, and ability to run on cheap hardware made it spread like a virus. At some point, network effects took over: so many people and so much software used it that sheer momentum fed on itself. Proprietary UNIXes, GNU, and Linux added more momentum. After much turd polishing, it's gotten pretty usable and reliable in practice while getting into all sorts of things. One look underneath shows what it really is, though, with not much hope of it getting better in any fundamental way:
So, aside from not knowing history, there seems to be no reason to debate the origin of the bad design in C and UNIX at this point, apart from the merits of the overall UNIX architecture versus others. The weaknesses of C and UNIX were deliberately put there by the authors to work around the hardware limitations of their PDP-11. As those limitations disappeared, the weaknesses stayed in the system, because FOSS typically won't fix apps to eliminate OS crud any quicker than IBM or Microsoft will. Countless hours of productivity, piles of money, and much peace of mind were lost over the decades to these bad design decisions in the form of crashes and hacks.
Using a UNIX is fine if you've determined it's the best option in a cost-benefit analysis, but let's be honest about where the costs are and why they're there. As for the why: it started on a hunk of garbage. That's it. Over time, when it could have been fixed, developers were just too lazy to fix it, along with the apps depending on those bad decisions. They still are. So, band-aids everywhere it is! :)