
IMHO there are many obscure things about Rust for newbies, and its syntax is also less readable than that of Ada. This gives me high confidence that it has a bright future.

I'm not being sarcastic here; Rust seems to be a great language (definitely on my list to learn), I'm really just observing. Looking at the success and history of programming languages, it seems to me that many if not most professional programmers want some obscurity, a desire often disguised as an apparent need for brevity, which in reality plays no role with modern editors.

Maybe this makes sense; after all, it's better to have a language that an incompetent boss or a marketing department cannot just pick up easily to 'roll their own solution'.






What evidence do you see that points to obscurity or partial obscurity leading to success (or adoption, I can't tell which you're claiming)?

Not evidence, only subjective gut feelings, and I had adoption in mind. But I'm hesitant to answer this question, because it will invariably lead to language flame wars. Anyway, it seems to me that Algol 68, Pascal, and Ada are less obscure than C and C++, but the latter are clearly more popular and have proven their worth over a long time.

At the same time, implicit-block-style procedural languages like Visual Basic and REALbasic/Xojo are popular, but they do not get much love from professional programmers, even though they are maybe the least obscure among all imperative mainstream languages. Lisps are also an interesting case: S-expressions are among the least complicated and easiest-to-grasp syntaxes, and Common Lisp offers practically everything a programmer could ever wish for (including multiple dispatch), but it never became very popular.

But maybe I'm wrong. For example, advanced Lisp macrology is definitely obscure, so that should count in favor of Lisp being more popular according to my 'hypothesis'. Haskell syntax is fairly easy, but monads are certainly obscure, yet Haskell does not seem very popular. Likewise, Python does not seem to be very obscure, certainly not syntax-wise, and it's still very popular.

So I don't know, perhaps my assessment was premature. It almost certainly was. ;-) I've just always had this impression that many programmers prefer a certain amount of special trickery and shortcuts over clarity. But perhaps it's the availability of libraries that counts most in the end.


C is FAR less obscure than most of the languages you mention, ignoring things newbies generally don't need to worry about like standards, compiler implementations, memory alignment, virtual memory, etc.

I was able to learn C reasonably well as a child precisely because of how straightforward it is. You can reserve a block of memory. You can free it. You can get the address of a value. You can get the value from an address. It works in an extremely straightforward way, unlike most languages. To a child, programming in C feels like the computer is doing exactly what it was told to do and nothing else. It feels like you understand what is going on, even though your understanding is a very simplified model of what is actually going on.
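For comparison, here's a rough sketch of those same four operations spelled in Rust (since that's the language under discussion); the mapping to malloc/free is approximate, not exact:

    fn main() {
        // Reserve a block of memory (Box allocates on the heap, roughly malloc).
        let block: Box<[u8; 64]> = Box::new([0u8; 64]);
        // Free it (dropping the Box releases the allocation, roughly free).
        drop(block);

        // Get the address of a value.
        let x = 42;
        let address_of_x = &x;
        // Get the value from an address.
        println!("{}", *address_of_x); // prints 42
    }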

Modern languages like C#, JavaScript, Ruby... they are much harder to understand than C. The first time I saw a closure, I was completely dumbfounded. This doesn't make sense, a function is accessing a local variable of another function that has since been popped off the stack! What sorcery is this?? Even today things often confuse me; when I saw C# async/await for the first time I just couldn't understand how it could possibly work until I spent some time reading about it, and I had been programming for decades by then, including a decade in C#.
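As it happens, Rust, the language this thread is about, makes that mechanism explicit in the syntax: the closure takes ownership of what it captures, so nothing dangles after the enclosing function returns. A rough sketch:

    // A closure that outlives the function whose local variable it captures.
    fn make_counter() -> impl FnMut() -> i32 {
        let mut count = 0;   // local variable of make_counter
        move || {            // `move` transfers `count` into the closure
            count += 1;
            count
        }
    }

    fn main() {
        let mut next = make_counter();      // make_counter has already returned here
        println!("{} {}", next(), next()); // prints "1 2"
    }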


> This doesn't make sense, a function is accessing a local variable of another function that has since been popped off the stack! What sorcery is this??

This is an issue with learning a new language when you've already got your mind molded to a different one.

JS programmers have the exact opposite issue when learning C. Why can't we do this? Why must we worry about scopes for the stack? Wtf is the stack? I wonder what happens if I try to use this variabSEGMENTATION FAULT.

These languages generally are dealt with differently in one's mind, and trying to apply the mental model from language 1 to language 2 usually makes it feel harder than it really is.

I really could say "it feels like the computer is doing exactly what you told it to do" about the other languages you listed, too. A crucial part is that you don't actually need to understand what the computer is doing under the hood to use it, especially in a higher-level language. C is more like "you were forced to talk to the computer in its own language", whereas with JS/C#/etc you don't have to know what addresses and memory are. You just tell the computer what tasks you want done, and how to model them, and it does them.

So I'm very skeptical that the other modern languages you listed are harder to understand than C. They're just different.


The only problem is, I've used C# and JS way more in my life than C, and C wasn't even my first language (QBASIC was).

So unless learning C somehow causes permanent disability, I will have to respectfully disagree.


Right, my point is that it's highly subjective and depends on how you approach things. I know programmers who have learned C later and they've always found the whole having-to-deal-with-memory thing an unnecessary distraction that they spend way too much time wrangling with, even after spending lots of time with C. I'm one of these; I have done lots of C programming but find languages like Python to be easier to learn.

C sort of does cause a disability (not really a disability, this is a useful skill!) in the sense that now you're actually thinking about how a language feature works in the hardware, which is not something you necessarily need to do for higher level languages. The moment you ignore all this it's likely that the language will make sense again. Of course, like I said, this is all subjective. The solution to "I can't use async/await because I don't understand how they work" is basically that you pretend that they're magic, and try to figure out the internals on the side. You need not understand how things work under the hood to use them, though it is an extremely valuable skill to know what C#/JS/etc end up doing at the lower level.

It's a sort of "ignorance is bliss" situation.

It varies from person to person though. It's not specific to programming either. When I was a physics student I personally could easily define layers of abstraction in my mind and work within one layer, even if I didn't understand another -- you ignore how the other layer works and assume that it is magic (while simultaneously working to understand it). One of my friends could not do this -- he had to understand rigorously how the theorems worked all the way down. As you can imagine, this required a lot of effort. It meant that he always had a thorough understanding of the topic at hand, but it often meant that he spent a lot more time understanding something and had no middle ground to fall back on.

For me, this has carried over to programming. If I'm dealing with a language like C or C++ I will try to understand how everything works on the metal. When I first learned about C99 VLAs I spent an hour mucking with a debugger and inspecting the stack. When I realized that C++ had function-local initialized statics I spent many hours digging through C++ spec/implementations to understand how it worked (http://manishearth.github.io/blog/2015/06/26/adventures-in-s...). These are things at the same abstraction level as C/++ which I must understand to be able to use them.

But when I'm dealing with languages like JS, I can operate perfectly fine without knowing how they work. Indeed, when I first learned about generators, I was able to use them for quite a while before I learned how they work. I still want to know how things work (and generally dig into it), but it doesn't block me from proceeding if I don't.

This is not how everyone approaches languages. Like I said, it is subjective.


> This doesn't make sense, a function is accessing a local variable of another function that has since been popped off the stack!

That's only weird to you because C doesn't do it.

For someone starting in one of those languages, it simply makes sense that you can use any variable in scope. (Stack allocation? What is this sorcery you speak of?)


So you're saying that my problem is having basic computer knowledge?

Because stack allocation has nothing to do with C specifically. C is not explicit about doing it (except for alloca), just like most other languages, and other languages use the stack just like C does, including C# with its closures and async/await. C# even has stackalloc.


Your problem is knowing how most C implementations work and trying to extrapolate that knowledge to other languages.

In C, a variable's lifetime is limited to the current function call. The natural implementation of C is to track both function calls and variables on a stack structure with statically determined offsets. Thus, C has stack-allocated variables.

Since a Clojure variable's lifetime is not limited to the current function call, C's implementation details aren't relevant. Stack-allocated variables are a bizarre notion.
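Rust, incidentally, turns that C rule into a compile-time check rather than an implementation detail; a minimal sketch (Rust, not C, so take the analogy loosely):

    fn get_local() -> i32 {
        let x = 5;
        let p = &x;  // taking the address of a local is fine within the call
        *p           // only the value can escape; returning `p` itself would not
                     // compile, because `x` does not live past this call
    }

    fn main() {
        println!("{}", get_local()); // prints 5
    }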


For some reason (as I didn't mention Clojure, which I know nothing about) you are making the incorrect assumption that my confusion with certain features comes from trying to apply how another language works to a different language.

In reality it comes from trying to apply how that language works in normal cases (in the absence of those features), and from the fact that those features require special compiler magic to work.

Yes, if variable lifetime in Clojure is as you describe, then that sentence doesn't make sense for Clojure.

As I've already told someone else, it is unlikely that learning C at some point in your life (not as my first language, not as the language I use professionally, not even in my top 3 most used languages) permanently cripples your ability to understand other languages.


Yes, but stack allocation is completely transparent in all languages except C. Well, almost: except maybe all low-level languages, but the point still stands.

In any non-systems language you don't need to know about this stack sorcery you're talking about (and knowing it doesn't help you with anything).


How is stack allocation less transparent in C than in other languages? I don't understand this idea that the stack has something to do with C.

You can mess with the stack in C, you can also mess with the stack in C#. But you don't have to, and normally you don't. You use it in a completely transparent manner, not caring where the compiler decides to put your data.


> A function that accesses a local variable of a function that has since been popped off the stack

Here is where it's less transparent. In a language like JavaScript there is no concept of a stack, so what's in the quote doesn't make any sense.


In the C language there is no "concept of a stack" either, because the stack is not a language-level feature in most languages; it is a mechanism relevant to how your code is compiled and executed. And JavaScript engines, when executing JS, do use the stack. You can see a stack trace for your code in Chrome...

When it comes to stack allocation, in C this is the compiler's decision and in JS this is the execution engine's decision.

In JS, that quote makes perfect sense, and the answer is: the engine checks what can be safely put on the stack, and the stuff that is closed over is stored on the heap to avoid losing it when the function returns.
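Rust makes roughly the same decision visible instead of automatic: whatever a closure needs to keep alive is either moved into it or put on the heap explicitly. A rough analogue (this is not a description of any particular JS engine's internals):

    use std::cell::RefCell;
    use std::rc::Rc;

    fn main() {
        // Heap-allocate the closed-over value so the closure and the caller
        // can both keep using it after any given stack frame is gone.
        let counter = Rc::new(RefCell::new(0));

        let c = Rc::clone(&counter);
        let bump = move || *c.borrow_mut() += 1;

        bump();
        bump();
        println!("{}", *counter.borrow()); // prints 2
    }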


> I was able to learn C reasonably well as a child precisely because of how straightforward it is. You can reserve a block of memory. You can free it. You can get the address of a value. You can get the value from an address.

And every function parameter is passed by value. In many 'advanced' languages, some things are passed by value, others by reference, depending on the type, sometimes automatically, sometimes manually; it took me years to figure that out.

Same thing for loops.

And I still have nightmares about some languages' memory models: what is allocated where? What kind of allocation was performed on the object this function returned? Do I have to free it manually, or will it disappear when I leave the scope? Which scope? Is it on the stack, or does the GC take care of it? When does it take care of it? Should I call it manually?
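Rust at least puts those choices in the function signature rather than tying them to the type or the runtime; a minimal sketch of the by-value / by-reference distinction:

    fn takes_by_value(v: Vec<i32>) -> usize {
        v.len() // `v` is moved in; the caller gives up ownership (and any cleanup duty)
    }

    fn takes_by_reference(v: &[i32]) -> usize {
        v.len() // the caller only lends the data and keeps ownership
    }

    fn main() {
        let nums = vec![1, 2, 3];
        println!("{}", takes_by_reference(&nums)); // nums is still usable afterwards
        println!("{}", takes_by_value(nums));      // nums is consumed here
    }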


With the amount of undefined behavior that C contains, I find it hard to imagine that anyone would call it a straightforward language. Thinking in terms of the underlying hardware and memory can be misleading when one is confronted with things like signed integer overflow and TBAA (type-based alias analysis).

A big problem is that many of us who used to be in the Algol/Pascal camp had this utopian idea that engineers care about quality and would choose tools that provide such support.

Wirth has a nice paper where he writes about the error of not doing any PR and thinking "build it and they will come" would just work.

https://www.inf.ethz.ch/personal/wirth/Articles/Modula-Obero...

Check the "Conclusions and Reflections" chapter.


I share your gut feeling. It sometimes appears as if technologies that lack the opportunity for some to advance to magician status also lack a certain type of following.

That doesn't mean all technologies that have the necessary properties to allow some people to appear superhuman necessarily become popular. It just makes these technologies stickier if they manage to somehow stay afloat for long enough.

I'm not sure whether or not these obscure sides of technologies are generally a bad thing. If a language provides opportunity for those who really understand it very deeply to come up with much better solutions, then that is a good thing. I would say this is the case with maths for instance, or rather with particular fields within maths.

But there are other, much less favorable examples where it feels like these supposedly smart solutions shouldn't be necessary in the first place, and getting even trivial things consistently right requires memorizing book-length lists of rules and exceptions. CSS and C++ come to mind.


Actually Haskell strives to be unpopular. Their community has a nice motto for it, but at the moment it escapes me.


"Avoid success at all costs."

The fact that not every language on the planet is an implementation of Actionscript 3

I don't think the difference in difficulty between a non-obscured production programming language and a highly obscured one falls within the range of what a boss or marketing department can handle anyway.

I feel that what the general populace would require, if you wish to see programming languages more widespread, would be an ease of interface akin to a quality WYSIWYG editor, which companies have been working on for a while. I would love to use such an editor myself, allocating my new cognitive savings to problem solving instead.

The gulf between WYSIWYG and the least-obscured / fewest-constructs / best-ergonomics / best-developer-UX / etc. language still seems rather immense, even if you're thinking about Scratch. In Scratch you're dragging and dropping statements of logic, whereas in a WYSIWYG editor you're dragging and dropping elements of the page as you actually want it to look. The latter is much more declarative, and within range of what small-business people want. I would argue that dragging and dropping for loops in Scratch really only helps those who have keyboard difficulties. It's still a for loop.

I'd also add as a side point that obscurantism doesn't always mean bad ergonomics or bad UX. Sometimes it protects cognitive resources, and other times, when you are forced to dig, the bet for cognitive savings fails.


What's syntactically obscure about Rust?

I only knew (and liked) C reasonably well before Rust. And nothing felt obscure when I started learning it.

I can only remember not getting what `||` meant (in the context of defining a closure without args). The positional meaning of `||` and `&&` is the only thing I can recall right now that can be considered syntactically obscure (for C developers at least). They should have gone with literal `and`/`or` for the operators IMHO.


I suppose it can get a little silly when you mix closures without arguments and the logical 'or' operator, e.g.:

    // prints 'true'
    println!("{}", true||(||true)()||(||true)());
Of course you wouldn't actually do this unless you're being intentionally obscure. Another favorite of mine:

    // prints '()'
    println!("{:?}", (||||||||||())()()()()());
But I can't recall ever running into such silly code; in practice the closure syntax and logical or do not lead to confusion (imho, ymmv).

> in practice the closure syntax and logical or do not lead to confusion (imho, ymmv).

That's true. But put yourself in the mind of a C developer looking at Rust code for the first time:

     if a || b {
       println!("True");
     }
Cool.

Then:

     thread::spawn(|| println!("Hello from a thread!"));
What? What is the logical or doing there?

----

IIRC, there are also cases where you have to write `& &` instead of `&&` to not confuse the compiler. That's a design/practical issue.

Both those issues would have been avoided if literal `and`/`or` were used.

I find it interesting that the only thing that momentarily confused me, as a C developer, about Rust syntax was caused by the Rust authors not wanting to deviate too much syntactically from C.



