Why it’s necessary to shoot yourself in the foot (g-w1.github.io)
116 points by todsacerdoti on July 13, 2023 | 84 comments


Another example is taking the Advanced Programming course and being exposed to monads without having done a ton of FP without monads.

The “You could have invented monads, and maybe you already did” approach: monads do solve a problem, but if you’re presented with the solution before being exposed to the problem, you’d be like “This is magical, complicated, unnecessary. Why would I ever want the problem that this solves?”
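To make the problem concrete, here is a minimal sketch in Rust (the function names are invented for illustration): `Option` behaves monad-like, and `and_then` is its bind. You only appreciate the combinator after writing the manual absence-checking it replaces:

```rust
// A lookup that can fail: the first character of a string may not exist.
fn first_char(s: &str) -> Option<char> {
    s.chars().next()
}

// Without monadic combinators, every fallible step needs its own match.
fn shouty_initial(s: &str) -> Option<char> {
    match first_char(s) {
        None => None,
        Some(c) => c.to_uppercase().next(),
    }
}

// `and_then` threads the "might be absent" bookkeeping for you,
// so the happy path reads linearly.
fn shouty_initial_chained(s: &str) -> Option<char> {
    first_char(s).and_then(|c| c.to_uppercase().next())
}

fn main() {
    assert_eq!(shouty_initial("hello"), Some('H'));
    assert_eq!(shouty_initial_chained("hello"), Some('H'));
    assert_eq!(shouty_initial_chained(""), None);
    println!("ok");
}
```

Only after the `match` version has sprawled across a codebase does the chained version stop looking magical and start looking necessary.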


> Why would I ever want the problem that this solves?

This frustrates me a lot about our current education system, from kindergarten to post-grad.

For instance, my 6th-grade son's book introduces LCM/HCF out of the blue, something like "this is LCM and here's how you calculate it". Not a single motivating example.
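A motivating example is easy to supply. As a sketch (the bus scenario is invented for illustration), LCM answers "when do two repeating events next coincide?":

```rust
// Two buses leave the depot together, one every 12 minutes and one
// every 18. When are they next at the depot at the same time?
// Answer: lcm(12, 18) = 36 minutes.

// Euclid's algorithm for the greatest common divisor (HCF).
fn gcd(a: u64, b: u64) -> u64 {
    if b == 0 { a } else { gcd(b, a % b) }
}

// lcm(a, b) = a * b / gcd(a, b); divide first to reduce overflow risk.
fn lcm(a: u64, b: u64) -> u64 {
    a / gcd(a, b) * b
}

fn main() {
    assert_eq!(gcd(12, 18), 6);
    assert_eq!(lcm(12, 18), 36);
    println!("the buses meet again after {} minutes", lcm(12, 18));
}
```

Start with the bus question and the pupil wants LCM before they know its name.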

Or take design patterns. I learnt about two dozen of them in college and promptly forgot them within a week. I finally began to appreciate them after encountering the problems at work.

Imagine if carpentry were taught by asking pupils to cram about 150 tools over 3 years. Or if, instead, they started out by saying "Let's build a chair", discovering the need for one tool at a time and mastering each as they went along. They would also learn that they should look for a tool whenever they encounter a new kind of task.


“Why would I ever want the problem that this solves?” is the best way I’ve seen that mood captured. Brilliantly put and vital context for teaching.


Related: "If math is the aspirin, how do you create the headache?" https://blog.mrmeyer.com/2015/if-math-is-the-aspirin-then-ho...

I think this is also why I didn't see the point of the design patterns book.


Kubernetes is another good example.

If you don't already have all the problems that Kubernetes solves and the complexity that goes with those problems, adding Kubernetes prematurely gives you problems you didn't have by introducing complexity you weren't (yet) subscribed to.

  A: "What the hell is a PersistentVolumeClaim? Why can't I just write to disk?"
  B: "But what if you don't have a disk at the edge? How does this scale?"
  A: "What edge?! What scale?! I have a disk!"


Most developers I've worked with who complained about containerisation and Kubernetes either never had to deploy code or only had to deploy it to one or two manually set up VPSs. They'd usually be the most vocal about how it's too complex, or how "we aren't Google"; but precisely because we're not Google, who can afford multiple DevOps departments, this stuff makes things easier, since the loudmouths also never want to pick up the slack to get infra shit done.

/rant

The ones that have experience with devops tend to appreciate Kubernetes.


I get your point but something like this is good enough for a ton of projects:

    git pull && docker-compose restart


Or if you're super fancy:

  docker pull ghcr.io/NAMESPACE/IMAGE_NAME:latest
  docker-compose up -d SERVICE_NAME  # `restart` alone won't pick up the newly pulled image


This also works pretty well:

  ssh <server>
  cd <directory>
  git pull
  tsc
  pm2 restart all


Or rsync && ln -sfn

No restart or compile because php don't care.


I have about two years of experience with DevOps and Kubernetes.

I manage my personal infrastructure with Terraform.

I still prefer to keep services running with docker-compose when I can.


PV vs PVC still confuses me. And I still don't know/care about edge computing. I mostly wanted infra as code and multiple copies of my app running for resilience. And easy upgrading software versions. And matching dev envs.


That is exactly why I like to learn and teach things in chronological order.

Why study long-obsolete tech?

Because it helps you better understand why the current tech is what it is, and it even makes you think about possible alternatives, which in turn fosters innovation much more than merely religiously learning the current technology.


Definitely a great read, addressing the mammoth in the room.

> When teaching programming, we should let people make these mistakes, and then show them the tools to correct them.

This was one of my pain points when I was in college studying CS. It killed me to have the teacher just dictate the programming lecture without showing what problems the material solves and/or how difficult our lives would be if we didn't get it right. I still recall the interfaces lecture where the teacher kept telling us to declare some sort of class with some methods but without implementation!! Any class could then benefit from this! How? Why?!

I wasn't able to get this interface thing until I graduated and worked on a personal C# project with WPF, where I designed a basic architecture of plugins/addons loaded at runtime. Those plugins all had to expose some common functions that could be called by the host app, and only then did I fully understand interfaces and how/why/when to use them. At that point, I wished I could go back to college and explain this concept to the students!
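That plugin moment can be sketched in a few lines. Here is a hypothetical version in Rust (the trait and plugin names are made up); the host code only knows the shared interface, never the concrete types:

```rust
// The common functions every plugin must expose (a hypothetical interface).
trait Plugin {
    fn name(&self) -> &str;
    fn run(&self) -> String;
}

// Two concrete plugins the host knows nothing about at compile time.
struct Spellcheck;
impl Plugin for Spellcheck {
    fn name(&self) -> &str { "spellcheck" }
    fn run(&self) -> String { "checked spelling".to_string() }
}

struct Formatter;
impl Plugin for Formatter {
    fn name(&self) -> &str { "formatter" }
    fn run(&self) -> String { "formatted document".to_string() }
}

fn main() {
    // The host app works against the interface, not the concrete types,
    // so new plugins can be added without touching this loop.
    let plugins: Vec<Box<dyn Plugin>> = vec![Box::new(Spellcheck), Box::new(Formatter)];
    for p in &plugins {
        println!("{}: {}", p.name(), p.run());
    }
}
```

The interface earns its keep exactly when the host must call code it has never seen.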

I believe the reason why college teachers struggle to explain such concepts in a better way is their lack of experience in the field.


I'd only done imperative programming in middle school (Turbo Pascal, yay), and when I got more curious about programming and started looking into Java, classes vs objects vs interfaces gave me headaches for weeks.

It was only when I picked up a couple of online courses like CS50 from Harvard and CS106 from Stanford (quite outdated now) that things started to make more sense. Sometimes these things take time, and looking at them from different perspectives, even different programming languages/paradigms.

E.g. I started to appreciate the use of interfaces more when I picked up Go after Java; in Go they're implemented implicitly, and that's when I realised they're actually really useful and not just a nuisance.


Plug for one of the best algorithm textbooks, Skiena's "The Algorithm Design Manual". It clearly motivates algorithms with specific problems that needed them and how a good algorithm improves over a naive approach in real-worldish situations. Highly recommended.


“What’s the secret of success? Right decisions. How do you make right decisions? Experience. How do you get experience? Wrong decisions.” – John Wayne


That sounds smart, but in reality it is quite possible to become very good at identifying wrong decisions without developing the capacity to identify good ones.

This happens to ... a friend of mine.


Sounds like story time :)


It turns out you can learn from other people's experiences. This is a valuable shortcut, because it turns out some of those wrong decisions are fatal which makes obtaining experience in those decisions the hard way a bad idea.

Now, granted, too many people / organisations / whole fucking countries aren't even learning from their own experiences, but you should really strive to learn from others too.


> Human beings, who are almost unique in having the ability to learn from the experience of others, are also remarkable for their apparent disinclination to do so.

-- Douglas Adams, Last Chance to See

:-)


He didn't say (though obviously implied) the wrong decisions were yours. I take a broader meaning, basically you can learn from wrong decisions no matter who makes them.

But I take your point.


Just came here to have you check out this -- https://news.ycombinator.com/item?id=36706893


FWIW I'm not difficult to find - tialaramex everywhere is just me again - so it's probably not appropriate to reply to unrelated comments on HN to ask me to look at stuff.

However, responding here since you did, I think you missed the whole point of that sub-thread. People aren't telling you that Bjarne's terrible I/O streams functionality is great so you should use that. They're praising the much more normal looking std::format and related features which have been adopted from fmt.

As a result assessing Bjarne's I/O streams and discovering that they're terrible is not a meaningful reaction to that praise. We know Bjarne's feature is bad, we're talking about a different, much better, feature.


> People aren't telling you that Bjarne's terrible I/O streams functionality is great so you should use that.

Well, std::format is what you actually told me I was a fool and a noob for refusing to use under the pretense of being pragmatic. ("you are deliberately choosing to use poor tools and calling this "pragmatism" because it sounds better than admitting you're bad at this and you don't even want to improve")

So I've finally put in the work and made std::format run on MSVC because that's the only compiler where std::format is available for me (gcc-12 and clang-14 don't include it on my Debian VM).

And guess what, simple file with main() and std::format("Hello {}\n", 42) + fwrite() takes 1.4 seconds to compile there (no linking, no optimizations) compared to 0.16s for compiling a cpp file with the equivalent snprintf/fwrite call.

0.16s is still a lot, but MSVC takes 0.12s to compile a bare "return 0". Subtracting that, it's a somewhat reasonable overhead of 0.04s for including and using stdio.h / snprintf, versus 1.28s, literally 32x that amount, for the std::format version: realistically a nerve-wracking time to wait for a single file to compile.

I'm not sure how to take you seriously anymore. It's hard to imagine you've actually used this yourself, because paying more than a second to compile a single print statement is not OK. I wish people like you would be a little less loud and more considerate when judging people who are actually trying to get something done.

The code under test:

    #include <format>
    #include <string>
    #include <stdio.h>

    int main()
    {
       std::string x = std::format("Hello {}\n", 42);

       fwrite(x.data(), 1, x.size(), stdout);

       return 0;
    }
    $ cl /version
    Microsoft (R) C/C++ Optimizing Compiler Version 19.35.32216.1 for x64

Here is a godbolt where gcc-13 with fmtlib setup (as a library) is tested against stdio.h printf(). It's 600-1200ms for std::print(...) vs 60-120ms for printf(...) and just a few ms less for "return 0". (timings are highly variable on godbolt). Again, an overhead of probably some 20x-100x compared to stdio.h printf when we subtract the time for compiling "return 0".

https://godbolt.org/z/Pa4jh8qbz


> paying more than a second to compile a single print statement is not OK.

I should expect most people have more than one. Have you ever seen a benchmark which tried just calling some library feature once and then pronounced it slow? Like, hey, your new hash table is slow, I called insert on it with one value and that took ages / How about the next time? / What do you mean next time, it's slow, I'm never using it again.

Perhaps this all points to a process issue, that your development process is... let's say unusual and that's driven you to make some perverse trade offs because you aren't able to imagine a way to do what you do with the technology everybody else is using. Maybe they're not the crazy ones ?

https://twitter.com/magdraws/status/1551612747569299458

The good news, though probably in the distant future for you, is that C++ standard library modularisation should dramatically speed up compilation since the compiler won't need to laboriously discover all the facts about foo.h over, and over, and over again as it is mentioned in each place it's used.

But you're right, I don't use any of this stuff in anger, I write Rust. I admire fmt because I think it's a remarkable achievement to have a type checked formatter as a library function.


> I should expect most people have more than one.

Oh, I have many seconds, and I'm already spending many of them, to the point that I'm not willing to just give them out for free. In a project with dozens or hundreds of files I'm not willing to introduce a central dependency that slows down each of them by an additional second.

> The good news, though probably in the distant future for you, is that C++ standard library modularisation should dramatically speed up compilation since the compiler won't need to laboriously discover all the facts about foo.h

I'll believe it when I see it. Have you tested it? Currently we can't use modules at work since we're not on the newest compiler.


As a parent, I like to put it a different way:

"The good thing about making mistakes, is that you get to know which mistakes are worth making."


Parenting is all about letting your children experience enough mistakes while avoiding all the fatal ones.


Or you can do an apparently shocking thing to programmers: learn from others’ mistakes. Of course this would require admitting other people aren’t complete idiots and may actually know a thing or two.


I studied film. Film, similar to programming, is a field where there is so much to learn, and every project might be completely new and follow its own rules and circumstances.

I had many colleagues in university who had to shoot themselves in the foot. A normal part of learning. But as we were always working together, everybody had the chance to see the others doing so.

Now, there were some people who saw others shoot themselves in the foot in a particular way, only to go and do the same thing on their own project. Don't be that type of person. Be the person who observes others shoot themselves in the foot and wonders how to prevent yourself from doing the same in the future (and accepts that for some things you can't).

Software engineering is very similar. Sure, maybe you are the type of person who can't grok a footgun unless they got shot by it. But you can try not to be that type of person. Making mistakes is important, but make them at home or in toy projects; don't make them in a decade-long effort by a ten-person team holding the data of a million people. And if you are in such a project, it is part of your job to learn from other similar projects and the ways they shot themselves in the foot.


Film (production) grad here as well. A good film school program is a space where teams of people get to make all their mistakes and figure out what they're about while making horrendously flawed pieces. Everyone exits with a slightly different baggage of experience.

I would add one thought here - it's not just about seeing others make mistakes, or about making the mistakes yourself. It's also about seeing how mistakes can be made across bigger teams or time-frames. Well-intended innocent creative choices at one point in the process can cause serious trouble later. Or two independent choices can create unforeseen problems. That's the sort of experience that's least visible, and that's what experienced mentors can provide.

So, I would argue that it's best to have a healthy mix - you want to see others fail, you want to fail yourself, but then you also want to have mentorship by others who have made these failures themselves. Seeing someone else make a mistake and then making that same mistake yourself can be a very valuable and humbling experience.


Yes and no.

I do agree with you that the profession doesn’t lack its fair share of big egos and self taught arrogant geniuses.

But there is a difficult equilibrium to find between this and the other side of the coin which is the cargo cult of inherited best practices and over engineering.

Most programmers (including sometimes myself) have a hard time choosing the right tool for the right job. But learning to pick the right tool is hard.

Even learning this from others is hard: I've happily learned from a lot of specialists how to use a specific tool in depth, but I can count on one hand the number of people who suggested a better tool for the task.


Yeah, that was my reaction to the article too: of course learning from your own mistakes leaves the most lasting impression, but you don't necessarily have to make all the mistakes yourself (e.g. you don't have to get in a car crash first to learn to drive safely).


Because you can probably extrapolate from your experience of learning how to ride a bike, or to walk, so you don't need to crash a car. Probably everyone here has had the experience that the faster things collide with you, the more it hurts.


That's the issue with a lot of educational resources: they don't explain the problems that are to be solved. For OOP, for example, I would like an iteratively built resource that presents a problem, then a first solution, then the new problems that solution introduces, and so on.


I think that is the most obvious optimization of your time.

However I also think that most people need to experience the pain and frustration of hitting themselves with their own decisions a number of times, until they develop enough humbleness to see the benefits of using other's experience instead.


Nothing drives the point home like emotional pain. Of course, if it can be avoided, do learn from those who made the mistakes, but sometimes the best teacher is doing it yourself.


Let's celebrate that civil engineering isn't as arrogant as software engineering then. If a bridge collapses, all the world learns from it. If Microsoft fucks up their token authorization, it is just another software error that cannot be helped.


I'm pretty sure civil engineering is based on mostly immutable knowledge of the laws of physics, which allows best practices not to change every year, or, when they do change, mandatory training is required for everyone.

I’m also pretty sure that students in civil engineering are making virtual bridges collapse every single day.

It's also true in other engineering branches: we have had basically the same plane designs for decades, and for good reason.

It’s not really something you can do when rules and best practices are going left and right every couple of months.

And don't get me wrong, I'm convinced that the peak of program engineering is in the past. But the industry keeps forcing new things on developers.


What I refer to here is the fact that every early bridge collapse informed the practice of the whole field afterwards¹.

Meanwhile in programming you will literally have someone hash their passwords using md5 in 2023.

Maybe the difference is liability, and possibly ending up in jail when things break. "Software error" has become a common excuse these days. If your company's enormous unethical fuckup is exposed, just blame it on software error. Everybody understands: software errors happen and there is nothing we can do to prevent them from happening.

Only this is not true. We know how to test code. We know about formally verified code. In safety critical domains (aviation, automotive, industrial, banking, ...) we have been doing this forever.

I am not saying that every garden shed should have to follow the same rules as an interstate bridge, but if our web services fail spectacularly on a regular basis, maybe we should look towards software engineering practices that prevent things from failing that often?

¹ After that most bridge collapses have been due to organisational failures, like a lack of maintenance or a lack of oversight or enforcement of the established rules


  By three methods we may learn wisdom: First, by reflection, which is noblest; 
  Second, by imitation, which is easiest; and third by experience, which is the 
  bitterest.


How do you know people walked the walk and didn't just talk the talk? It's an ass-in-seat-times-years world.

And honest reflection on failure is punished in hierarchies.


Exactly. You don't have to redo all the mistakes others made, otherwise there would never be any progress from generation to generation.


This doesn’t work when most people are self teaching.

It's the job (and value) of a teacher to help you understand not only how but why we do it like this.

Since we are more or less thrown into this beautiful "life-long learning" career without professional teachers, it's way harder to understand the why without exceptionally great mentors. And my pet theory is that this is at the root of cargo cults.


So what you are saying is that the solution is to make software engineering a profession that requires an actual education and certification?

Before civil engineering became a profession all kinds of people would build bridges and all kind of bridges would collapse. The solution wasn't to accept that bridges collapse, the solution was to not allow just anybody to build a bridge and to create standards that bridges had to adhere to.

Also, as a self-taught programmer who is regularly shocked by what he finds in the field, done by supposed professionals: being self-taught has nothing to do with it, because I can also read about someone else's experiences and mindset. Every self-taught programmer will read books on programming written by people who program. Even self-taught programmers will avoid goto statements like the plague, even if they don't know the original reason for that rule.

The problem in software engineering is one of culture. We program shit and move on; if it breaks, it is not our fault. Very few people feel ownership and/or responsibility for the code. Granted: very few companies are willing to pay the money needed to allow for such a process, except if they are in a field where it is required by law (automotive, aviation, ...).

What we need is a clearer liability structure for software errors. If a company's software is investigated after a privacy leak, and the company is on the hook for each individual whose data it mistreated because the software was not up to standard, companies will think twice about cheaping out on software engineering. And if a company gave its software engineers everything needed to make a safe product and they still didn't, the engineers are on the hook.

I'm not saying everything needs to follow that level/standard, but as of now a software error is treated like an unavoidable act of God that nobody could have foreseen or prevented.


Even with self-teaching, you look up forums/tutorials/etc. They aren't so direct, but many past mistakes are wrapped up in those pieces of advice. I imagine there are at least 9 past mistakes by others incorporated into the lessons, advice, and comments on a topic for each mistake you make yourself.


"To damage or impede one's own plans, progress, or actions through foolish actions or words."

Learning C before Rust so you better understand Rust isn't "foolish action" nor does it "impede one's own plans".

I grow tired of authors twisting and abusing the meaning of common English idioms so that they can say "well, actually, sometimes you want to shoot yourself in the foot".


The shooting yourself in the foot part is doing memory unsafe things in C where you get a segmentation fault that Rust would have stopped you from doing.


I think what he means isn't the act of learning C first, but rather having the segfault.


So, the foolish action was intentionally creating the segfault?


I liked this article so much.

But I want to add a note on your "some opinions" section's

> You should learn Zig (or C) before you learn Rust

part.

I learned C before Rust, but vectors are not used in C; they belong to C++. And although I know C, Rust is still very confusing to me. In my opinion, it would be better to learn C++ before Rust.


I learned C a very long time ago, but I'm not at all confident that it would be better, starting now, to learn C first before I learned Rust. I think Rust's design is a cleaner foundation.

Example: (destructive) move assignment is the correct thing. If you introduce that to begin with, showing Copy as an optimization, I think it makes more sense than the way it's often taught, which assumes people are familiar with languages that default to (or only have) copy assignment.
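That teaching order can be shown in a few lines (the example values are arbitrary): move is the default, and Copy is the opt-in layered on top.

```rust
fn main() {
    // Move is the default: assigning a String transfers ownership,
    // and the old binding is dead at compile time, not at runtime.
    let a = String::from("owned buffer");
    let b = a; // `a` is moved; using `a` after this line would not compile

    // Copy is the optimization: for plain types like i32, assignment
    // duplicates the value and both bindings stay usable.
    let x: i32 = 5;
    let y = x;

    assert_eq!(b, "owned buffer");
    assert_eq!(x + y, 10);
    println!("ok");
}
```

Introduced this way, Copy reads as "cheap enough to duplicate automatically" rather than move reading as a strange restriction on assignment.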

Another example: Rust doesn't have boolean coercion. Boolean coercion is so rife that you find it in otherwise seemingly principled languages, but the consequences are quite awful; in Rust you can cheerfully teach the entire language without this concern, because the coercion doesn't exist. "false" is neither true nor false, it's a string; 0 is neither false nor true, it's an integer. Many modern languages avoid C's extreme coercions (you can't write "Ten" + 5 or 'a' + 1 in such languages; in a few of them it's a concatenation operation, which I don't love either, but at least it's not some weird coercion), but they still insist "false" is truthy because it seemed easier and C did it. Rust does not.
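A small illustration of the point; the commented-out lines are the ones the compiler rejects:

```rust
fn main() {
    let s = "false";
    let n = 0;

    // if s { ... }  // does not compile: expected `bool`, found `&str`
    // if n { ... }  // does not compile: expected `bool`, found integer

    // You say what you mean instead:
    assert!(s == "false");  // a string comparison
    assert!(n == 0);        // an integer comparison
    assert!(!s.is_empty()); // emptiness is an explicit question
    println!("ok");
}
```

There is no truthiness rule to memorize because conditions must already be `bool`.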


What do you mean that there are no vectors in C?

You mean by default?


std::vector is a C++ thing right? There is no vector standard in C as far as I know.


A more general name for this kind of thing is a growable array. The choice to name this "vector" is regrettable given the other things that word means, and the Rust name Vec is at least an improvement on that.

std::vector is famously the only C++ standard library container which isn't crap, but the fun thing is that it's still a bit crap anyway. Like a lot of C++ standard library features it could be better, but that would be an ABI break, so too bad: either use somebody's custom replacement which is better, or put up with the good-but-not-great std::vector.
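For readers who haven't met the concept, a small sketch of growable-array behavior using Rust's Vec (the exact capacity jumps are implementation details, not guarantees): contiguous storage that reallocates as elements are appended, giving amortized O(1) push.

```rust
fn main() {
    let mut v: Vec<i32> = Vec::new();
    let mut reallocations = 0;
    let mut last_cap = v.capacity();

    for i in 0..1000 {
        v.push(i);
        // capacity changes only when the backing buffer was regrown
        if v.capacity() != last_cap {
            reallocations += 1;
            last_cap = v.capacity();
        }
    }

    assert_eq!(v.len(), 1000);
    assert!(v.capacity() >= 1000);
    // Geometric growth means only a handful of regrows for 1000 pushes.
    assert!(reallocations < 20);
    println!("{} reallocations for {} pushes", reallocations, v.len());
}
```

This is the same trick behind std::vector, ArrayList, and friends; only the names differ.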


Well, any newer language just seamlessly rolls that functionality into regular arrays and calls them lists; Java and C# call it an ArrayList. I'm not sure if there's a standard naming convention for it.

What really disqualifies C for anything practical these days is the strings, in that it has zero native support for them; they're not a first-class citizen as they damn well should be, they're a complete illegal alien. string.h is such a trainwreck from a UX perspective that I despise every minute I ever had to work with it. C++ at least has some out-of-the-box support for them, even if it's very barebones and clunky af.

Aside from the educational perspective on how incredibly limited and poorly designed languages used to be, there's no reason we should still be building our cars out of wood with stone wheels, which is what using C (and C++ to some degree, for that matter) feels like. For learning how processors and computers work at the lowest level, going with something like assembly on a microcontroller makes more sense anyway.


C++17's std::string_view is finally the table-stakes bare minimum type, but notice that it's a library feature, so it's not actually part of the core language. std::string_view is approximately &[u8]; that is, it's a non-owning reference to some bytes: it knows where the bytes are, and it knows how many of them there are. This means it's a fat pointer. In 1975 this was perhaps too expensive to be reasonable; it's 2023, we have enough registers, use a fat pointer and suck it up.
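The "fat pointer" claim is easy to check concretely; a quick sketch in Rust, where &str and &[u8] have the same shape as std::string_view:

```rust
use std::mem::size_of;

fn main() {
    // A thin pointer to a sized type is one machine word.
    assert_eq!(size_of::<&u8>(), size_of::<usize>());

    // References to unsized data are fat pointers: address + length,
    // i.e. two machine words.
    assert_eq!(size_of::<&str>(), 2 * size_of::<usize>());
    assert_eq!(size_of::<&[u8]>(), 2 * size_of::<usize>());

    let s: &str = "hello";
    assert_eq!(s.len(), 5); // the length travels with the reference
    println!("ok");
}
```

Carrying the length with the pointer is exactly what C's NUL-terminated char* gave up.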

I would like more, indeed a lot more, but in a resolutely bit banging language I will put up with &[u8] and that's what std::string_view gives you in practice. The out-of-box string literal in C++ is unacceptable, as is std::string (an owning, growable string) used for read-only purposes. None of this stuff should have survived into C++ 11, and yet today you will still see it used.


Not having first class "strings" (which are a cloudy concept) is exactly the right call for a systems programming language that is concerned with the harder truth about in-memory structures.

If you want to "join together" string data and printf/snprintf isn't convenient enough, go ahead and use a different tool. But don't act like computers can natively compute the concatenation of two string buffers and offer the result into your expecting hands. At a lower level, everything needs to be stored in memory somewhere, and if "just somewhere in a heap allocation" is good enough for your purposes, use a different language, but don't expect the best performance or simplicity.


Tbf, anything but bytes is a cloudy concept and not accepting that is just making things harder for yourself for literally no reason. If you have native arrays, you can have native strings, end of discussion.

C doesn't have strings because there wasn't a well-accepted library for them at the Precambrian time it was being designed. Otherwise it would have, and not out of some ideological fit-for-purpose nonsense. It was supposed to be a high-level language, as high as they could go with systems being as crap as they were back then.


> Tbf, anything but bytes is a cloudy concept and not accepting that is just making things harder for yourself for literally no reason.

What I mean is that strings are an abstract concept (roughly, a sequence of characters that can be indexed and possibly mutated) with many possible implementations and runtime characteristics. IMO, for C it wouldn't make much sense to prefer one over the other, other than the obvious statically allocated fixed size array where the compiler can already do all the work.

> If you have native arrays, you can have native strings, end of discussion.

C does have native arrays (of fixed size) and so does have native strings (of fixed size). It even has convenient syntax for instantiating these strings -- string literals. There is even syntax to concatenate them (at compile time) by simple juxtaposition in the source code.

> C doesn't have strings is because there wasn't a well accepted library

Nope, the reason is that "strings" (even if you assume their representation is strictly a tuple of size + pointer to contiguous character buffer) need storage of size that is unknown at compile time.

Storage needs to be managed, and the storage management can be either manual or done automatically by the language runtime. C opted to not have any of the latter except for stack variables and global variables (lifetime of the process) -- and neither of those supports dynamically sized allocations (excluding VLAs here).
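The storage distinction described here can be sketched in Rust, as one point of comparison: a fixed-size string literal needs no management, while a dynamically sized string needs an owner that grows and frees its heap buffer (the job malloc/realloc/free leave to you in C).

```rust
fn main() {
    // Fixed size, known at compile time, baked into the binary.
    let fixed: &'static str = "literal";

    // Unknown final size: heap storage that something must manage.
    // Here the owning String type regrows its buffer as needed.
    let mut dynamic = String::from("grows at runtime: ");
    for i in 0..3 {
        dynamic.push_str(&i.to_string()); // may reallocate behind the scenes
    }

    assert_eq!(fixed.len(), 7);
    assert_eq!(dynamic, "grows at runtime: 012");
    println!("ok");
} // `dynamic`'s buffer is freed here, deterministically
```

C's choice was to expose that management rather than automate it, which is the whole disagreement in this subthread.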

If strings were simply an issue of library or syntax support, all the strings you could envision would exist by now. But even the best string libraries (for C) are a bad joke and do almost nothing to make string processing more ergonomic.

Honestly, I don't miss any of that; almost all of my string needs are well covered by printf/snprintf. Once I worked on my own text editor, and for it developed a text rope data structure that kept track of line endings and the number of Unicode leader bytes etc., and supported efficient string operations at almost arbitrary sizes (easily gigabytes). Most languages probably don't give you this with their built-in "string" type.

What helps though, is to pick your tools according to the job. There is a lot of scripting work or web work that the language is simply not suited for. However for systems programming it's doing fine, giving you a lot of control while automating the most common headaches (register allocation, function calls, computation of data member offsets, etc).

> It was supposed to be a high level language, as high as they could go at the time anyway.

That is patently false. It was supposed to be a language that would let them program systems more conveniently, and it was based on prior art. (There are certain figures on HN that I expect to immediately pop up and add that the art was already quite a bit more advanced and "high level" at the time, and that C wasn't innovating on any front.)


> What I mean is that strings are an abstract concept (roughly, a sequence of characters that can be indexed and possibly mutated) with many possible implementations and runtime characteristics.

Well, alright, maybe a string is more like a big int or big decimal than a long, but all of these are commonly used enough that they should be part of just about any language. Just treat it as a vector/arraylist that auto-expands to the new length when it goes over the max, and that's that. Maybe having hidden mallocs introduces unclear pitfalls, but frankly so do raw pointers and everything else C does anyway, so it's not like it would be much worse in that regard.

> C does have native arrays (of fixed size) and so does have native strings (of fixed size). It even has convenient syntax for instantiating these strings -- string literals. There is even syntax to concatenate them (at compile time) by simple juxtaposition in the source code.

Yeaaah, that's almost like a preprocessor thing though, isn't it? Unless there's a separate type I've never seen, they're still just a char* that's made at compile time.

> text rope data structure that kept track of the line endings and the number of unicode code leader bytes etc, and supported efficient string operations at almost arbitrary sizes (easily gigabytes). Most languages probably don't give you this with their built-in "string" type.

Well, there certainly are esoteric cases where having full control has major performance benefits, but one could always fall back to a char array when it's actually needed and still have normal strings for 99.9% of daily use.

And well, Microsoft shipped the blazingly fast VS Code that's somehow made in JavaScript and runs on the molasses that is Electron, while outperforming Sublime Text, which is native C++. If done right, the VM/compiler should be smart enough to optimize these repetitive things according to best known practices instead of having us do it by hand over and over... and probably messing up half the time.

> If strings were simply an issue of library or syntax support, all the strings you could envision would exist by now. But even the best string libraries (for C) are a bad joke and do almost nothing to make string processing more ergonomic.

"No way to fix this" says only language in existence where this is consistently terrible.

Well aside from assembly, but that one gets a pass since calling integers words is a special kind of madness that's not to be messed with.


> Well alright maybe a string is more like a big int or big decimal than a long, but all of these are commonly used enough that they should be part of just about any language.

There is no way to fit what you describe in C. A string needs storage and lifetime management -- not only do you have to create new strings, you also have to delete strings that become unreferenced. There is no way to wrap a nice syntax around this in C to just make temporary strings that get automatically cleaned up. You would have to introduce a dependency on a global heap allocator, and introduce reference counting or similar machinery, and C is simply not about doing that.

And with a more structured approach, that missing syntax doesn't hurt that much. It can feel good to know what lives where and how the storage is managed. If you don't like it, go look someplace else. But don't critique C for concentrating on more basic and essential abstractions.
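To make concrete what "storage and lifetime management" means here, a minimal sketch of a heap-backed growable string in C (names like `Str` are illustrative, not from any library; error handling omitted for brevity). Every allocation is explicit, and nothing frees the storage for you:

```c
#include <stdlib.h>
#include <string.h>

/* Minimal growable string: each Str owns heap storage that the
   caller must release explicitly -- there is no scope-based cleanup. */
typedef struct {
    char  *data;
    size_t len;
    size_t cap;
} Str;

Str str_new(void) {
    Str s = { malloc(16), 0, 16 };
    s.data[0] = '\0';
    return s;
}

void str_append(Str *s, const char *text) {
    size_t n = strlen(text);
    while (s->len + n + 1 > s->cap) {
        s->cap *= 2;                        /* amortized O(1) growth */
        s->data = realloc(s->data, s->cap);
    }
    memcpy(s->data + s->len, text, n + 1);  /* copy including the NUL */
    s->len += n;
}

void str_free(Str *s) {                     /* the step C can't automate */
    free(s->data);
    s->data = NULL;
    s->len = s->cap = 0;
}
```

Every `str_new` must be paired with a `str_free` by hand; that manual pairing is exactly the lifetime burden the comment describes.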

> Unless there's a separate type I've never seen they're still just a char* that's made at compile time.

"Compile time" is what I said, right? And it doesn't make a char pointer but a fixed-size char array.

You can have what you want in C++ thanks to RAII, like std::string. Whether the result is worthwhile is another question.

> If done right, the VM/compiler should be smart enough to optimize these repetitive things according to best known practices

User inputs aren't performance-sensitive at all. You have a human in front who's sending maybe a dozen bytes/s of data at peak. Any language can handle that.

For visual output you're sitting on top of a browser rendering engine that's highly optimized in C/C++/Rust etc. Billions of dollars have been put into it. It's still certainly possible to use the API (the DOM and CSS) in the wrong way to make it dog slow.

The efficiency of making modifications to the data model underneath is predicated on the selection of data structures. If those are wrong, it will be slow no matter the selection of language or VM/compiler.

The text buffer is certainly one central data structure that has to be fast. https://code.visualstudio.com/blogs/2018/03/23/text-buffer-r.... One thing to try is finding the "string" in there. See also the "Why not C++" section.


The fundamental type isn't what you're thinking of as "string", roughly C++'s std::string.

What's fundamental is the reference type, because that's the type you're going to use much more often. This is correctly a slice type: it refers to some bytes. I said above that &[u8] is the least capability that's reasonable, and that's at last what std::string_view gives C++.

This is one of the many fundamental choices Rust made that's much cleverer than it looks. The str type (some bytes which are promised to be UTF-8 text) is a language feature, a slight improvement on [u8] that's core to the language; String, however, is just a library type, albeit a very heavily optimized one. A $1 microcontroller might well have some use for &'static str, the immutable slice reference, e.g. to talk about some text baked into its firmware, but it doesn't have a heap allocator and isn't about to waste precious RAM on one, so it doesn't need String.


The talking point is a string type that can be used with some convenience. Let's say joining multiple of them together at runtime with '+'. Or whatever, maybe just a function call. I was explaining why it doesn't work just like that.

The str type you mention is a nice feature, and in the future, when I've switched to Rust or whatever, I might use it to write my programs in 5 fewer lines.

While in C I have to use C's "slice" type: char x[] = "Hello". I know it's not quite as good, since to pass this around I'd have to make a pointer + length representation. If needed, that can be automated from string literals: typedef struct { const char *buffer; size_t size; } String; #define STRING(lit) (String){lit "", sizeof lit - 1}. Or char buffer[256]; my_api(buffer, sizeof buffer);

For the few situations where string manipulation is required, it's just not a real problem.
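The literal-to-slice trick sketched above compiles as written once the struct is spelled out (a sketch; `String`/`STRING` are the commenter's names, not a standard API):

```c
#include <stddef.h>

/* Pointer + length view of a string literal, built at compile time.
   The `lit ""` juxtaposition only compiles for actual literals, and
   sizeof lit - 1 drops the trailing NUL from the count. */
typedef struct {
    const char *buffer;
    size_t size;
} String;

#define STRING(lit) (String){ lit "", sizeof lit - 1 }
```

So `String s = STRING("Hello");` yields `s.size == 5`, with no strlen call at runtime.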


I think it highly depends on the thing you are using. Some tools like Rust have a design highly and obviously centered on the things they're trying to solve. As the author points out, a lot of things "don't make sense" in Rust unless you understand the underlying problem. But the other examples he gives, like "You should write a game from scratch before using Unity", have a different design that mostly doesn't force hard-to-understand things on you all the time the way Rust does. You'd have to dig deeper into your project before hitting a wall of confusing concepts that require the underlying knowledge. Of course, it's always good to know the underlying stuff, but, meh, time is limited; sometimes you have to make tradeoffs.


You can also shoot yourself in the foot a lot less by having a solid understanding of fundamentals before jumping into coding right away.

Unfortunately learning the fundamentals requires a lot of reading and practicing and most literature and content out there isn't great.

The OP would have known why holding a reference to an item in a list is sometimes not a good idea by knowing the fundamentals of memory allocation, which can be found in many books and articles totally unrelated to Rust.

One of my favorite books for learning fundamentals is Effective Java, even when I wrote very little code in Java during my career I found some of the concepts and ideas in that book to be language agnostic.


> sometimes is not a good idea by knowing the fundamentals of memory allocation

... which can be learned by shooting yourself in the foot with C.

My opinion is, it's best to have done a little low-level programming because it will nag you subconsciously when you go accidentally quadratic or worse in a high-level language.


I have a bit of a different take on this.

"Men say that a stitch in time saves nine, and so they take a thousand stitches today to save nine tomorrow." - Henry David Thoreau, Walden.

It's good to not shoot yourself in the foot. Really. But, depending on what you're working on, it may not always be worth the effort to prevent.

For some things, of course it is. If you're working on something either safety-critical or enterprise-critical, you almost certainly want to use whatever tools you can to prevent shooting yourself in the foot.

But sometimes the price of the safety costs more than it's worth. At the extreme other end, if I'm writing an experimental one-off for my own curiosity, I'm not going to worry about satisfying the borrow checker (unless the point of the one-off is to learn Rust). If I'm writing an experimental one-off for work, that's likely to be still true - I'll write it in whatever gets me there fastest, because the point is learning what the results are, not producing production-level code.

Use the level of protection that's appropriate for what you're working on - no less, but also no more.

Note that all this is somewhat orthogonal to the question of whether one needs to shoot oneself in the foot before learning how not to. This is more like: once you've shot yourself in the foot, learn when to leave yourself exposed to the possibility of shooting yourself in the foot again.


Other shots in the foot you'd want:

- Trying to refactor a big chunk of your app and have a way to roll back if needed: use Git for version control

- Revert part of a commit containing bad code: make smaller and cleaner commits

- Merge huge branches to submit your project due in two hours: create smaller and independent feature branches

- Swap an implementation for an improved version without introducing bugs: create regression tests against the interface

- Collaborate with people: pick one tool and stick with it

- Set up a development environment on different machines: use Docker or another declarative setup

- Have projects die because of decision paralysis: start working on them as soon as the general direction is clear

- Lose important data because of a dead drive: implement a backup strategy

- Have a drive die because of a power interruption: get a UPS


Great point. I remember years back when I tried to learn Webpack: I found it unnecessarily complex for me at the time and didn't get which problem it was trying to solve. It was only when I decided to do things my own way and hit limits that I had my Eureka moment.

It is sad that newcomers are told to learn this tool and that tool while they never get to know the problem those tools are solving.

Another experience that triggered an a-ha moment for me was reading the OAuth2 RFC (6749). I had previously read multiple docs of libraries implementing the protocol and had integrated with such libraries in the past, but it was only when I read the RFC and the problems the different flows try to solve that it finally clicked.


> You should use javac from the command line before using an IDE

That is how my university professor made whole semesters of students hate programming.

An IDE makes the entry into programming much easier and fun.


> An IDE makes the entry into programming much easier and fun.

then they shouldn't have picked java as an intro language.


"University and high school don't pick Java as the intro language challenge (impossible)"


I think that should be caveated with 'you should try it _once_'.

It is more than possible, but for anything non-trivial, it just sucks.


Using IDEs for Java or Python invariably sucks, in my experience. I'd pick the command line over an IDE for anything but debugging any day, because on the command line it is very clear what you are actually doing; in an IDE everything is filtered through the invented language of the IDE.


Oh, wait. That's why I hate Java? We used Netbeans in university...


I side with this and like to remind people of Joel Spolsky's words, "all abstractions leak".

It's very gratifying when you're a Unity dev in a room of Unity devs and you're the only person who can diagnose and fix a build problem involving raw OpenGL ES and Objective-C++.

From another angle, these baser systems like C and vanilla JS are _simpler_ and you just have a lot less to learn at once. It's a good waypoint.


> Without shooting yourself in the foot, learning lacks motivation. Complexity without reason is really confusing.

Interestingly not my experience.

I usually start something uselessly complex, with more passion than reason, to learn or to brag; then halfway through I indeed shoot myself in the foot trying to be clever or lazy.

I would not recommend to look for footguns, they are already looking at you :)


Adding to the list: you should learn a dynamically typed language before a statically typed one


Pedagogy needs major overhaul.


The expert is the person who has failed in more ways than everybody else.


Show closed doors first.


I vaguely remember this as a key concept in game design, but I can't cite it since, as far as I know, it doesn't have a canonical name. Does it have one?



