
Can't say I got that feeling at all.

Smalltalk is a tight design. So is Lisp. So is Forth. So is APL. C's design just feels like a pile of... stuff. Arrays are almost but not quite the same as pointers. You can pass functions but not return them. There's a random grab-bag of control flow keywords, too many operators with their own precedence rules, and too many keywords given over to a smorgasbord of different integer types with arbitrary rules for which expressions magically upconvert to other ones.




> You can pass functions but not return them.

  #include <stdio.h>

  char *f(void) {
    return "Yes, you can...";
  }

  char *(*g(void))(void) {
    return f;
  }

  int main() {
    puts(g()());
    return 0;
  }


Technically passing and returning function pointers is very different from passing and returning functions. This has implications for things like closures.


Since lmm said "You can pass functions", they probably meant "function pointers", since that's the only thing that could be referred to as a function and passed around in C.

EDIT: At least last I heard. The example I gave is C89, and I know C99 and some GNU extensions. I don't know if a new standard introduced something else that could be referred to as a function that can be passed in but not returned.

A quick google for "c lambda" and "c closure" as well as looking for those keywords in the Wikipedia articles of C11 and C18 turned up nothing, so I guess not.


GNU C can have lambda expressions[1], and Clang even has closures[2].

1: http://walfield.org/blog/2010/08/25/lambdas-in-c.html

2: https://en.wikipedia.org/wiki/Blocks_(C_language_extension)


Note that the first example relies on multiple GCC-only features, one of which is the atrocity that is nested functions, and even then technically invokes undefined behavior.


GNU C has "downward funarg" closures, like Pascal. They have the same representation as function pointers, which requires the compiled code to generate trampolines (little pieces of machine code that find a PC-relative environment pointer) on the stack. Invoking them requires executable stacks (a linker-level executable option often now turned off by default in major GNU/Linux distros).


> Arrays are almost but not quite the same as pointers.

There's a big misunderstanding here. Arrays and pointers are entirely different things. An array is a chunk of elements that lie next to each other in memory. A pointer is just a pointer.

The only thing that C does that causes confusion is pointer decay. If you use an array in an expression context, other than the sizeof operator, that expression evaluates to a pointer to the first element of the array.

And while it confuses the heck out of many people who never learned it properly, it's very useful. For example, string literals are arrays, too! Would you prefer to write printf(&"Hello, World\n"[0])?

That's it, in a nutshell.


C makes more sense if you read Ken Thompson's B manual.

https://www.bell-labs.com/usr/dmr/www/kbman.html


Part of me wonders if a lot of that baggage comes from C's background compared to Lisp's. C's origins were in operating a PDP-11 (via B on the PDP-7), and I wonder if the goals and ideas behind each language contributed to why they are the way they are.


FYI: 2am possible rambling on design and software

Funny, I got the same feeling from Smalltalk and Lisp.

I own both "Common Lisp: The Language" and "Smalltalk-80: The Language and its Implementation", and while there are many ways those languages could be described as 'tight' (tightly-coupled, perhaps), at no point can you look at the C language and say "this could be smaller" without removing significant functionality. Ok, perhaps there are some things around array/pointer syntax, etc., but the room for removing things from the language is very small.

LISP and Smalltalk are both 'kitchen-sink' languages. As I understand it (i.e. unless I misread something or skipped a page), for an implementation to be a proper spec-conforming instance of Smalltalk-80, a screen and graphics server is required. Indeed, Smalltalk-80 requires a very specific model of graphics display that is no longer appropriate for the time. Steele's Lisp has a number of functions that one could strip out and nobody would care or notice very much.

On the other hand, all of the C that is there serves a purpose.

Perhaps the only thing in your list that does feel like a tight design in addition to C is FORTH. But FORTH puts the burden on the programmer to remember what is on the stack at any given time. It has some beauty, indeed, but all of the abstractions seem inherently leaky. I haven't programmed in FORTH, however, so I can't really talk more about how that plays out in practice.

If the "there is nothing else to remove" does not resonate with you, then I think the perspective of the OP, and myself, and others, when we call C a "small"/"tight" language, is that essentially, C was born out of necessity to implement a system. Conversely, the 'batteries included' aspect of Smalltalk and Lisp more or less presumes the existence of an operating system to run on. It feels like the designers often did not know where to stop adding things.

Most of the library functions in C can be implemented very trivially in raw C. Indeed, much of K&R is just reinventing the library 'from scratch'; there is no need to pull out assembly, or to make any assumptions about the machine other than "the C language exists". Whereas a lot of the libraries of Smalltalk and Lisp seem bound to the machine. Not to harp on too much about the graphics subsystem of Smalltalk, but you couldn't really talk about implementing it without knowing the specifics of the machine. And while much of Lisp originally could be implemented in itself, Common Lisp kind of turned that into a bit of a joke: half the time when using it, it is easier and faster to reimplement something than to find out whether it exists.

Apologies if this is repetitive or does not make much sense.


I agree with you, but perhaps you are reading “tight” slightly differently than the way the original poster intended it?

To me, ANSI C is “tight” in the sense that it is made up of a small set of features, which can be used together to get a lot done. But the design of the features, as they relate to each other, can feel somewhat inelegant. Those different features aren’t unified by a Simple Big Idea in the way that they are in Lisp or Smalltalk.

Lisp and Smalltalk, then, have “tight” designs (everything is an s-expression/everything is an object) which result in minimal, consistent semantics. But they also have kitchen sink standard libraries that can be challenging to learn.

(Although to be fair, Smalltalk (and maybe Common Lisp to a lesser extent) was envisioned as effectively your whole OS, and arguably it is a “tighter” OS + dev environment than Unix + C...)

FWIW, I am learning Scheme because it seems to be “tight” in both senses.


It sounds like you're talking about the standard library rather than the language? The examples I gave are very small languages where you really can't remove anything, whereas in C quite a lot of the language is rarely used, redundant, or bodged: the comma operator surprises people, for and while do overlapping things, braces are mandatory for some constructs but not for others, and null and void* are horrible special cases.

Standard libraries are a different matter, but I'm not too impressed by C there either; it's not truly minimal, but it doesn't cover enough to let you write cross-platform code either. Threading was not part of the language spec until C11, and so you're completely reliant on the platform to specify how threads work with... everything. Networking isn't specified. GUI is still completely platform-dependent. The C library only seems like a baseline because of the dominance of Unix and C (e.g. most platforms will support BSD-style sockets these days).

I'm actually most impressed by the Java standard library; it's not pretty, but 20+ years on you can still write useful cross-platform applications using only the Java 1.0 standard library. But really the right approach is what Rust and Haskell are doing: keep the actual standard library very small, but also distribute a "platform" that bundles together a useful baseline set of userspace libraries (that is, libraries that are just ordinary code written in the language).


> Steele's Lisp, has a number of functions that one could strip out and nobody would care or notice very much.

Common Lisp is kind of a language and a library. Parts of the library could be optional. Some later standardization efforts (EuLisp, R6RS, ...) tried to define a Lisp-like language in layers/modules/libs.


I think one's love of / comfort with C depends on one's path into software development. It was my first language (aside from Basic). It has its quirks, and you can do some really bad/dangerous things when writing large apps.

That said, given my comfort level and tendency to be more explicit rather than tricky, my C coding ends up surprising me a week or so later when I go back and realize "even at 2am" I did the right thing.

It's also a fun language to use to mess with junior programmers.


but you can't pass functions in C, you can only pass pointers-to-functions... and you can return them too.

the piles of stuff that C contains are the stuff of 20th century von Neumann architectures, the stuff that defines the undefined behaviors, and that's what has made C indispensable.

I'm not saying C's perfect, but you can't swap it out without replacing what it did and does.


Hi, I vouched for this comment, so people can see it. You're shadowbanned, it seems. I took a glance at your comments and I can't really see a reason for it, many of your comments are borderline but I've seen worse here by 'top rated' commenters.


That's the same with languages that have first-class functions. The code part is still passed by reference, not by copying the machine code around.


Of course, almost every high-level language that compiles to native code ends up doing something that looks like sugared C for many of their features.


That first class functions are passed by reference can be verified at the high level; e.g. with the eq function in Common Lisp.


UB is what makes C fast versus what PL/I variants and Fortran optimisers were already able to do without such tricks.


UB is not what makes C fast. A good (and fast) program does not contain undefined behaviour, at least ideally.

The only speed advantage of Fortran, to my knowledge, comes from pointer aliasing information that C compilers have a harder time inferring. But that's more a consequence of the programming domain. Fortran is not a systems programming language. Fortran has specialized data structures for scientific computing built in (I think??). It's an apples and oranges comparison.


Fran Allen begs to differ.

Plus the experience of all of us who had access to early C compilers on CP/M and home micros.

IBM already had an LLVM-like toolchain for their RISC research with PL.8 and a respective OS.

Only later did they switch to UNIX due to the market they were after.

Surviving mainframes are still written in their systems languages.

As for Fran Allen point of view:

"Oh, it was quite a while ago. I kind of stopped when C came out. That was a big blow. We were making so much good progress on optimizations and transformations. We were getting rid of just one nice problem after another. When C came out, at one of the SIGPLAN compiler conferences, there was a debate between Steve Johnson from Bell Labs, who was supporting C, and one of our people, Bill Harrison, who was working on a project that I had at that time supporting automatic optimization...The nubbin of the debate was Steve's defense of not having to build optimizers anymore because the programmer would take care of it. That it was really a programmer's issue.... Seibel: Do you think C is a reasonable language if they had restricted its use to operating-system kernels? Allen: Oh, yeah. That would have been fine. And, in fact, you need to have something like that, something where experts can really fine-tune without big bottlenecks because those are key problems to solve. By 1960, we had a long list of amazing languages: Lisp, APL, Fortran, COBOL, Algol 60. These are higher-level than C. We have seriously regressed, since C developed. C has destroyed our ability to advance the state of the art in automatic optimization, automatic parallelization, automatic mapping of a high-level language to the machine. This is one of the reasons compilers are ... basically not taught much anymore in the colleges and universities."

-- Fran Allen interview, Excerpted from: Peter Seibel. Coders at Work: Reflections on the Craft of Programming


Why do you keep sidestepping logical arguments with irrelevant quotes? Correct programs don't contain UB (even the fast ones), so UB is not what makes C fast.

Not checking for UB (aka invalid configurations) at runtime is what makes C (and any other language) fast - or more precisely, it's what allows compilers to emit efficient code.

Some languages / compilers rule out UB statically through the type system or other means, but that comes with tradeoffs that might, or might not, be worth making in your domain.


Because C guys keep spreading the false argument that C was some kind of language sent by god to solve all performance issues, while those of us in the trenches when the language sprang into existence know it was never the case.

Not to mention the fact that other programming languages, in the mainframe and workstation space, were already starting to reap the benefits of whole-program optimizers.





