Hacker News
On C popularity vs. Python, size of stdlib, package managers in languages (flounder.online)
21 points by urlwolf 4 months ago | hide | past | favorite | 18 comments



"1. It compiles on lots of architectures. Behold what Justine Tunney is doing :) Portable C is actually real!"

There is no other language that runs on as many platforms as C, that much is true. Whether Justine Tunney's Portable C, as much of an accomplishment as it is, is a practical solution is a different question.

"2. You don't have to audit code from a lib, nor trust it by default"

Now you have to audit the LLM generated code - all of it - and you have gained exactly nothing. It's even worse, because you can assume your library code has been tried and tested by others even if you don't trust them, whereas your LLM generated code is fresh and new and probably full of quirks and missing corner cases.

"3. You don't have to fear that updating packages in your system (or virtual env, or container) would break your 'batteries included' code"

True, but you don't benefit from updates that fix bugs and security vulnerabilities that others have found for you.

"4. You can do formal verification (this I know nothing about, just learning now on your prompting!)"

You can, but how practical is it? Have you ever done it?

"5. It's far easier to deploy"

This is a valid point but I also think it depends a lot on your environment.


> Now you have to audit the LLM generated code - all of it - and you have gained exactly nothing. It's even worse, because you can assume your library code has been tried and tested by others

You're right to raise the first part as a concern and to observe that, all things considered, you probably haven't gained anything on net. (You might actually be losing.) But you're wrong about the last part.

What we've seen in practice is that, as with the free-rider problem, there's a dearth of folks performing the work that people assume everyone else is doing, and a glut of people relying on that false assumption. And it's worth considering how many of those fixes are _only_ necessary because of the rampant code sharing.

Monocultures have downsides. A single vulnerability in a single package ends up exposing people to greater risk because the package is trying to be all things to all people and therefore contains functionality that only a fraction of its dependents actually wanted/needed. Because there's a monoculture, though, everyone is impacted. This is the counter to your #3: when you aren't following the same path others are on, you don't get their fixes, but you also aren't impacted by bugs that others are introducing in the first place. Anyone who's ever stared at server logs for an application that isn't Wordpress can see this principle in play when you consider how much protection simply not being on Wordpress provides for thwarting would-be attackers.


Aren’t most security vulnerabilities in C or C++ code? Seems a bit rich to point to C’s write-your-own approach as being a security benefit. I’d want to see some data on that before I arrive at that conclusion.


> Aren’t most security vulnerabilities in C or C++ code?

I haven't seen any data that supports that idea, but I'm thinking it would depend on how you define "most". Do you mean in absolute terms? Then sure, because most code in use is C or C++ code. In relative terms? I have no idea.

But it's become fashionable to shit all over C these days as if C itself is a security vulnerability. I reject that idea. C certainly makes it easy to write insecure code. It's the flip side of the strengths of the language. However, there's nothing about C that prevents writing secure code in it (with the caveat that no code, regardless of language, can be considered 100% secure). That's done all the time.


> However, there's nothing about C that prevents writing secure code in it

C doesn't prevent you from writing secure code, but it sure as hell makes it hard.

I believe it was Bryan Cantrill who made the pointed observation that the main issue is that C code doesn't compose.

I can write a perfectly correct library. You can write a perfectly correct library. When somebody else brings those two correct libraries together, though, the result can be terribly broken.

This is where the GC languages, and Ada and Rust, kick C's and C++'s asses.


There is this reasonably small C library used by literally the entire world that you may want to look into, the Linux kernel.

You may also note that in the general case even your beloved Rust and GC languages need to drop down to C the moment they want to interoperate with anything else.


> There is this reasonably small C library used by literally the entire world that you may want to look into, the Linux kernel.

WTF? The Linux kernel is gigantic. And is a prime example of non-composable fail.

Most functions fail in bizarre ways if you have re-entrancy. Out of memory is handled in myriad different ways--if it's handled at all. System calls can fail in a zillion ways with very little ability to recover correctly. I can go on and on.


It was sarcasm...

Also, the reason something can fail for any reason whatsoever is that the Linux kernel can't just decide to shut down your computer when a cosmic ray flips a bit while the CPU is reading from RAM.

The reason you can write your small library at all is because of the work the kernel has already done for you.

Also, the reason for my comment in general was that the Linux kernel is, in my opinion, the most used library on the planet, and it was and still is predominantly written in C.

Sometimes pretty APIs are not what you need. You need APIs that have done, and will continue to do, what needs to be done for decades.


> However, there's nothing about C that prevents writing secure code in it

But it makes it exceptionally hard! Personally, I always try to use C++ over C for that reason (and many others). I also understand why some people prefer Rust.


> But it makes it exceptionally hard!

I suppose that depends on what you mean by "hard". I don't think it makes it exceptionally hard at all, but it does mean you have to actually think about the design decisions you're making. I also consider that to be a good thing -- we should be carefully considering our design decisions.


> I suppose that depends on what you mean by "hard"

Compared to all other languages (except for assembly :)

> but it does mean you have to actually think about the design decisions you're making.

I don't believe that. How does thinking about passing the correct number of bytes to realloc() or memmove() help you think about design decisions? It rather takes away mental energy for no apparent gain.


The "C is a shark" metaphor is really apt; I like it. Keeping that one for future use.

"Write it yourself" ... I'm not so sure. It will come back in a limited way; stuff like leftpad is obviously stupid. But ChatGPT is not good at tackling very large problems.

I think GPT requires both breadth (large context size) and depth (smarts) to create good programs, organized well, with consistent behavior. Some code is very local: it's easy to write yourself because it touches only a small bit around it. Competitive programming problems are like this, and GPT is good at those. The jury is still out on its ability to do anything further.

But a lot of code is far more global. It takes the work of a skilled programmer to take all of the small local pieces under an API and organize them to produce a cohesive whole. This large-scale organization of software is and always will be excruciating in a language like C.

C is a shark, yes. A shark is a hundred million years old; it's fast, stealthy, and sleeps with one eye open. But try getting 20 sharks to collaborate in a hunt, like orcas do. It's simply impossible.


I think the author is right that if writing small pieces of straightforward functionality is easier with coding assistants, then on the margin, people will use fewer dependencies.

I definitely think C is the wrong language to do it in. A strong type system is an amazing counterbalance to LLM-generated mistakes.


The idea of GPT or code generators supplanting the need for libraries and languages with large standard libraries is likely jumping ahead with optimism.

But … I think the idea is relevant and, security/QA issues aside, this is a real hard-to-see shift that might be brought on by AI: shifting of the equilibria and practicalities around what parts of the craft and profession are worth caring about and which are best left to the computer/AI.

Dependencies vs “write it yourself” (with an AI). Syntax/API design for readability/comprehension vs for computer/AI parse-ability and thoroughness. Rewrite something to be better vs train an AI on its usage and move on.


> [Packages] Then the cracks started to show. A security vulnerability here. […]

> But C? You can actually perhaps write a solution in C as fast as one in python 'batteries included' now just because of those code generators.

… sure, it might be fast and easy to pull some code out of <AI/ML of the day>. But the AI doesn't do security fixes for you. And it's trained on people's imperfect code, potentially including vulnerabilities.

Whether this is a winning trade…


On modern "language package managers": it's better to think of them in terms of what they are—orthogonal version control systems. You start with your main, base-level VCS (most commonly Git these days, and therefore not just a VCS, but a DVCS), and then you have this secondary hack of a version control system layered on top—or rather, interwoven directly into the source tree itself. With these secondary SCMs, you're manually writing the VCS metadata (version strings) using your text editor into a file or files that live right beside your source code (instead of a hidden directory that your project is completely agnostic to, like .git). This only exists because people don't really want to completely buy in to distributed version control after all (they still prefer to defer to the network for things they think "shouldn't be in version control[sic]") and in some instances they also don't think the base-level VCS has a rich enough (i.e. semantic) understanding of the objects it's managing.

One of the interesting and unfortunate effects of this is that people don't notice it; because they think of these as package managers first (and as version control systems not at all), they're never graded against the standards that a version control system ought to meet.

(Additionally, if you start talking about disregarding conventional wisdom and checking your dependencies into version control[1][2], people become totally irrational and respond as if you're insisting they stop using package managers. You can keep using your current package manager and still check the dependencies in. The only thing you're changing is that when you run `npm update` or whatever, there's no entry in .gitignore that stops Git from picking up the changes. Again, people begin responding totally irrationally if you suggest that they should just let their primary VCS track the changes, as if you've instead just told them to untether themselves from the ISS and go out on their own. Nothing about "check your dependencies into version control" requires you stop tracking upstream, or anything else—it just means to check your dependencies into version control!)

1. <https://www.forrestthewoods.com/blog/dependencies-belong-in-...>

2. <https://news.ycombinator.com/item?id=38425042>


> It will be abused by junior programmers and probably the code quality of humanity on average WILL go down. But that doesn't mean that you (as someone who has written code for decades) cannot use it as an extra pair of hands, knowing full well of the limitations.

Not just juniors, but anyone who just wants to do less programming in general, seniors included. If you just hit generate in the prompt and still don't understand the code yourself, the same criticism applies.

> 1. It compiles on lots of architectures. Behold what Justine Tunney is doing :) Portable C is actually real!

Very portable C/C++-level security vulnerabilities, all in a single binary, executable on any platform. Fantastic! /s


"In the good old days physicists repeated each other's experiments, just to be sure. Today they stick to FORTRAN, so that they can share each other's programs, bugs included" — Edsger W. Dijkstra




