
C is FAR less obscure than most of the languages you mention, ignoring things newbies generally don't need to worry about like standards, compiler implementations, memory alignment, virtual memory, etc.

I was able to learn C reasonably well as a child precisely because of how straightforward it is. You can reserve a block of memory. You can free it. You can get the address of a value. You can get the value from an address. It works in an extremely straightforward way, unlike most languages. To a child, programming in C feels like the computer is doing exactly what it was told to do and nothing else. It feels like you understand what is going on, even though your understanding is a very simplified model of what is actually going on.

Modern languages like C#, JavaScript, Ruby... they are much harder to understand than C. The first time I saw a closure, I was completely dumbfounded. This doesn't make sense, a function is accessing a local variable of another function that has since been popped off the stack! What sorcery is this?? Even today things often confuse me: when I saw C# async/await for the first time I just couldn't understand how it could possibly work until I spent some time reading about it, and I had been programming for decades by then, including a decade in C#.

> This doesn't make sense, a function is accessing a local variable of another function that has since been popped off the stack! What sorcery is this??

This is an issue with learning a new language when you've already got your mind molded to a different one.

JS programmers have the exact opposite issue when learning C. Why can't we do this? Why must we worry about scopes for the stack? Wtf is the stack? I wonder what happens if I try to use this variabSEGMENTATION FAULT.

These languages generally are dealt with differently in one's mind, and trying to apply the mental model from language 1 to language 2 usually makes it feel harder than it really is.

I really could say "it feels like the computer is doing exactly what you told it to do" about the other languages you listed, too. A crucial part is that you don't actually need to understand what the computer is doing under the hood to use it, especially in a higher-level language. C is more like "you were forced to talk to the computer in its own language", whereas with JS/C#/etc. you don't have to know what addresses and memory are. You just tell the computer what tasks you want done, and how to model them, and it does them.

So I'm very skeptical that the other modern languages you listed are harder to understand than C. They're just different.

The only problem is, I've used C# and JS way more in my life than C, and C wasn't even my first language (QBASIC was).

So unless learning C somehow causes permanent disability I will have to respectfully disagree.

Right, my point is that it's highly subjective and depends on how you approach things. I know programmers who have learned C later and they've always found the whole having-to-deal-with-memory thing an unnecessary distraction that they spend way too much time wrangling with, even after spending lots of time with C. I'm one of these; I have done lots of C programming but find languages like Python to be easier to learn.

C sort of does cause a disability (not really a disability, this is a useful skill!) in the sense that now you're actually thinking about how a language feature works in the hardware, which is not something you necessarily need to do for higher level languages. The moment you ignore all this it's likely that the language will make sense again. Of course, like I said, this is all subjective. The solution to "I can't use async/await because I don't understand how they work" is basically that you pretend that they're magic, and try to figure out the internals on the side. You need not understand how things work under the hood to use them, though it is an extremely valuable skill to know what C#/JS/etc end up doing at the lower level.

It's a sort of "ignorance is bliss" situation.

It varies from person to person though. It's not specific to programming either. When I was a physics student I found it easy to define layers of abstraction in my mind and work within one layer, even if I didn't understand another -- you ignore how the other layer works and assume that it is magic (while simultaneously working to understand it). One of my friends could not do this -- he had to understand rigorously how the theorems worked all the way down. As you can imagine, this required a lot of effort. It meant that he always had a thorough understanding of the topic at hand, but it often meant that he spent a lot more time understanding something and had no middle ground to fall back on.

For me, this has carried over to programming. If I'm dealing with a language like C or C++ I will try to understand how everything works on the metal. When I first learned about C99 VLAs I spent an hour mucking with a debugger and inspecting the stack. When I realized that C++ had function-local initialized statics I spent many hours digging through the C++ spec and implementations to understand how it worked (http://manishearth.github.io/blog/2015/06/26/adventures-in-s...). These are things at the same abstraction level as C/C++ which I feel I must understand to be able to use them.

But when I'm dealing with languages like JS, I can operate perfectly fine without knowing how they work. Indeed, when I first learned about generators, I was able to use them for quite a while before I learned how they work. I still want to know how things work (and generally dig into it), but it doesn't block me from proceeding if I don't.

This is not how everyone approaches languages. Like I said, it is subjective.

> This doesn't make sense, a function is accessing a local variable of another function that has since been popped off the stack!

That's only weird to you because C doesn't do it.

For someone starting in one of those languages, it simply makes sense that I can use any variable in scope. (Stack allocation? What is this sorcery you speak of?)

So you're saying that my problem is having basic computer knowledge?

Because stack allocation has nothing to do with C. C is not explicit about doing it (except for alloca), just like most other languages, and other languages use the stack just like C does, including C# with its closures and async/await. C# even has stackalloc.

Your problem is knowing how most C implementations work and trying to extrapolate that knowledge to other languages.

In C, a local variable's lifetime is limited to the current function call. The natural implementation of C is to track both function calls and variables on a stack structure with statically determined offsets. Thus, C has stack-allocated variables.

Since a Clojure variable's lifetime is not limited to the current function call, C's implementation details aren't relevant. Stack-allocated variables are a bizarre notion.

For some reason (I didn't mention Clojure, which I know nothing about) you are making the incorrect assumption that my confusion with certain features comes from trying to apply how one language works to a different language.

In reality it comes from trying to apply how that same language works in normal cases (in the absence of those features) to features that require special compiler magic to work.

Yes, if variable lifetime is as you describe in Clojure, then that sentence doesn't make sense for Clojure.

As I've already told someone else, it is unlikely that learning C at some point in your life (not as my first language, not as the language I use professionally, not even in my top 3 most used languages) permanently cripples your ability to understand other languages.

Yes, but stack allocation is completely transparent in all languages except C. Well, almost: except in low-level languages generally. The point still stands.

In any non-systems language you don't need to know about this stack sorcery you're talking about (and knowing it doesn't help you with anything).

How is stack allocation less transparent in C than in other languages? I don't understand this idea that the stack has something to do with C.

You can mess with the stack in C, and you can also mess with the stack in C#. But you don't have to, and normally you don't. You use it in a completely transparent manner, not caring where the compiler decides to put your data.

> A function that accesses a local variable of a function that has since been popped off the stack

Here is where it's less transparent. In a language like JavaScript there is no concept of a stack, so what's in the quote doesn't make any sense.

In the C language there is no "concept of a stack" either, because the stack is not a language-level feature in most languages; it is a mechanism of how your code is compiled and executed. And JavaScript engines, when executing JS, do use a stack. You can see a stack trace for your code in Chrome...

When it comes to stack allocation, in C this is the compiler's decision and in JS this is the execution engine's decision.

In JS, that quote makes perfect sense, and the answer is: the engine checks what can be safely put on the stack, and the stuff that is closed over is stored on the heap to avoid losing it when the function returns.

> I was able to learn C reasonably well as a child precisely because of how straightforward it is. You can reserve a block of memory. You can free it. You can get the address of a value. You can get the value from an address.

And every function parameter is passed by value. In many 'advanced' languages, some things are passed by value, others by reference, depending on the type, sometimes automatically, sometimes manually; it took me years to figure that out.

Same thing for loops.

And I still have nightmares about some languages' memory models: what is allocated where? What kind of allocation was performed on the object that this function returned: do I have to free it manually, or will it disappear when I leave the scope? Which scope? Is it on the stack, or is it the GC that takes care of it? When does it take care of it? Should I call it manually?

With the amount of undefined behavior that C contains, I find it hard to imagine that anyone would call it a straightforward language. Thinking in terms of the underlying hardware and memory can be misleading when one is confronted with things like signed integer overflow and type-based alias analysis (TBAA).
