
I might be the last person to realize this, but did Microsoft name it .NET because they already had COM?



.NET actually was known as COM+ for a time before public release. Environment variables for tweaking internal behaviour still retain that moniker.


COM+ was basically Distributed COM, and was available for years before .NET. The .NET Framework was built on existing Win32 and COM/COM+ calls though, which is why you might see that.


Distributed COM was known as DCOM, COM+ became known as CLR.


You're both sort of right. There was something that was released under the name COM+, which was a bunch of services on top of DCOM. But what became the .NET CLR was also internally called COM+ (or part of COM+?) under development.

see https://wiki.c2.com/?ComPlus

I think what happened might have been similar to what ended up happening with the .NET name later: there was a name associated with an umbrella strategy, and a bunch of different technical components were under development as part of that strategy, but only some of them were released before the strategy changed again, while others were repurposed/repositioned to be part of the new strategy.

This is a common pattern with Microsoft product/feature naming, and I think it's one of the reasons everyone, including Microsoft developer relations people, routinely comments that Microsoft is "not good at naming things". It's continuing now with UWP and WinRT, where those names are actually used to refer to a bunch of different things that were once part of a now-defunct Windows strategy. Some of these things are now deprecated, while others (like the WinRT core language interop model) are still the basis of most new Windows API development, but this is very confusing to developers because of their association with the abandoned overall UWP strategy.


Unfortunately, the only thing that didn't die with the strategy was the Windows team's deeply ingrained love for everything COM, and they keep going at it without realising the rest of the world is done with COM; we only endure it due to the lack of alternatives in Windows APIs.

If only they had kept the way .NET Native and C++/CX exposed COM, but that would be too easy for their ways, and those tools are now gone.


It was the year 2000, and web services were the hype of the .com bubble. So Microsoft pushed web services on the net. Hence all the .NET products: Windows Server .NET, Visual Studio .NET, .NET Framework, etc.

It was marketing.


Ironically, that hype is the 2021 reality, with SaaS and Web APIs everywhere.


This article does not undermine its own point. In fact, very, very few articles ever undermine their own point. Undermining your own point means you've failed to construct a logical chain of thought. But that is what people do all the time in their daily lives. Maybe children would undermine their own points, or someone posting their first ill-thought-out comment on Facebook. But I think most people will learn how to construct an argument by the second time they publish one.

In this case, the article is not about 'the joys of being too dumb to breathe'. It's about how 1. looking stupid is not the same as being stupid, and 2. looking stupid can be beneficial in the long run. The author does not need to actually be stupid once in order to support this idea.

And I have to worry if you think he's "bragging" about merely looking stupid, as if that weren't bad enough. Maybe if you identify as stupid I could understand the offense.

To the author, Dan Luu: I like your article and I think you're on the right track!


You can construct an argument that we never landed on the moon if you cherry pick your data carefully.

That’s the point being made here: not that his examples are wrong, but that they are cherry-picked to support his views.

It may be superficially thought provoking, but it is not compelling as a logical argument.

There are tangible downsides to ignoring expert advice; you are not a god, and it is not possible to be an expert at everything.

Therefore, yes, asking questions to understand a topic is good, but no, ignoring the advice of an expert is not good.

The examples given only show cases where the result of ignoring expert or third-party advice was positive; it can’t possibly be true that this is the case in all circumstances, except by sheer good luck.

I wholeheartedly agree that asking questions is more important than looking smart… but:

> Overall, I view the upsides of being willing to look stupid as much larger than the downsides. When it comes to things that aren't socially judged, like winning a game, understanding something, or being able to build things due to having a good understanding, it's all upside.

You don’t have to look stupid to be able to do all those things, you just have to be humble and work hard.


> ignoring the advice of an expert is not good.

If the person actually is an expert, yes. But actual experts, at least outside hard science domains where we can run controlled experiments to nail down theoretical models to the point where they actually do have high predictive accuracy, are much rarer than most people suppose.

For example, the author says he ignored his doctor's advice; but that only counts as ignoring the advice of an expert if his doctor actually was an expert. Most doctors aren't--in fact, one could argue that no doctors are, since nobody has a really good predictive model for medicine. Many doctors know more than at least a fair number of their patients do, but that's a much lower bar to clear than "actual expert". And given the current state of medicine and the availability of information online, it's pretty easy for a reasonably intelligent person to know more than any of their doctors do about their own particular condition--since they both are more interested in accurate information, and have more time to devote to finding it out.


> You can construct an argument that we never landed on the moon if you cherry pick your data carefully.

This is a really nice, concise way to make the point you are making. Doesn't it seem like this is the central problem with politics today? Everyone has their own data and everyone is logical. You can't have a functional discussion in such a scenario. People don't see any problem with their own logic because there isn't any. People can't definitively show a problem with the other's logic because there isn't any.


The article is interesting, but for me it fails to make a convincing case that looking stupid is necessary most of the time.

Particularly in interviews, what I'd like to read is a reflection not on how to avoid thinking in the way that results in saying or asking things that sound stupid, but on how to keep the same internal process without communicating the results in a way that confuses quite so much.

An analogy: a mathematician proves a non-obvious theorem. In their proof, they skip so many steps that it looks like they say intuitively wrong things.

It is NOT that they should stop thinking of these proofs in the same way, it's merely a failure of communication.


Or... the standard just has bugs which could be fixed. Bugs meaning: being out of line with the history of C and large amounts of C code in the wild.

The more people beat the standard drum, the worse things will get until the standard itself is fixed.

Other languages that don't have a standard don't have this problem (but they do have other problems).


Another characteristic of this topic is that people from different camps keep talking past each other, and the discussion goes in circles.

What you propose is basically identical to Regehr's "Proposal for a Friendly Dialect of C". What's different in 2021 that will make it succeed now when it failed in 2014? If anything, there's less interest now, as there are viable alternatives and increased investment in tools that work with standard C.

Small bugs are being fixed. One of the most surprising is that shift-left of a negative number was UB (a fact that would be shocking to anyone in the semi-portable camp, who would reasonably expect it to compile to SAL on x86). Fortunately, this (ETA: hopefully) will be fixed in C2x.

ETA: As of N2596, it's still not fixed in the C2x draft. There certainly have been proposals to fix it, and I thought it had been agreed upon, but if so, it hasn't found its way into the draft yet. In the mean time, good luck shifting those negative integers!
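
To make the trap concrete, here's a minimal sketch (my own example, not from the thread) of the undefined shift and the usual well-defined workaround:

    #include <stdio.h>

    int main(void)
    {
        int x = -2;

        /* Undefined behaviour: left-shifting a negative value. On x86 it
           usually compiles to SAL and "works", but the compiler is allowed
           to assume it never happens. */
        /* int bad = x << 1; */

        /* Well-defined equivalent; optimizers emit the same shift. */
        int ok = x * 2;

        printf("%d\n", ok);
        return 0;
    }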


"What's different in 2021 that will make it succeed now when it failed in 2014? If anything, there's less interest now, as there are viable alternatives and increased investment in tools that work with standard C."

You may be right, but many things could conceivably change.

Viable alternatives like Rust may result in C losing ground and feeling pressure to keep up.

Some huge corporation could announce a focus on security and that they will try to minimize C usage.

More people could adopt the goal of a "friendly dialect of C", and be more determined or successful.

Compiler developers could somehow step over the line -- make a compiler that makes such wild optimizations that it results in a backlash.

Or it could just be improved very gradually, with little bits of the "friendly dialect" being adopted one by one rather than all at once.

I'm not saying by any means I'm confident in anything of the sort, but things change a lot over time. Back when I was getting started, Perl was everywhere. Today pretty much nobody does big Perl projects anymore. It had a big miss with Perl 6, and that was bad enough that it got overtaken by competition. While C is much bigger and more resilient I think it's not impossible by any means for it to feel pressure to adapt.

I definitely expect a lot of resistance to change, but the world changes nonetheless.


I didn't like Regehr's proposal because I don't want a friendly dialect of C. I mostly just want C the way it worked up until, say, GCC 4.x.

I don't know specifically how to fix the standard, although I've been thinking about it. A simple idea would be like the Linux mandate "don't break userspace." The language-lawyering has to stop, and more rules are unlikely to help.


"C the way it worked until GCC 4.x" is basically a worse version of friendly C.

You can't on the one hand say "no language lawyering" and on the other hand call certain kinds of optimizations bugs. Compiler developers need to be able to tell something will be considered a bug before users complain--you can always compile on -O0 if you don't want compiler optimizations, and many people consider performance regressions bugs in their own right, so they're going to try to eke out every bit of performance they can on higher optimization levels.


Yeah, in C you need to use assertions (or simple checks) for things that might be null.

That said the compiler isn't infinitely smart (thank god) and complex null derefs will "safely" make it to runtime. (What a sad world we're in.)


If you are transferring ownership, you would do

    p2 = p1; p1 = NULL;
However, if you are intentionally doing multiple ownership, then yes, you can still have problems.
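
A tiny illustration of that hazard (my own example): nulling one pointer doesn't help the other owner.

    #include <stdlib.h>

    int main(void)
    {
        char *a = malloc(16);
        char *b = a;            /* second "owner" of the same allocation */

        free(a); a = NULL;      /* a is safely nulled out... */
        /* free(b); */          /* ...but b still dangles; freeing it here
                                   would be a double-free */
        return 0;
    }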


Wait, what? What happened to "fearless concurrency"? I thought this was supposed to be one of the borrow checker's selling points!

https://blog.rust-lang.org/2015/04/10/Fearless-Concurrency.h...


I mean, it is, yes. That post is talking about threads. And the “fearless” name meant that it solves a lot of issues at compile time, which it still does in an async context.

Like any static analysis, it’s a give and take between making sure your analysis is sound, while still allowing useful programs.


Whether it's kernel threads or green threads, the same patterns (locks, etc) are possible. Locks are supposed to be the borrow checker's bread and butter, because it can guarantee they are held before accessing shared state. But now you're saying "the borrow checker makes writing code without [async/await] difficult, inefficient, and unergonomic."

I'm not saying locks are better than async/await (although they are[1]). You're saying the borrow checker itself can't handle them in real world use?

[1] https://journal.stuffwithstuff.com/2015/02/01/what-color-is-...


I am not saying that the borrow checker cannot handle locks. Locks work great.

(The borrow checker does not understand locks as a special construct, to be extra clear.)

Did you read the post I linked? It lays out the details. I am happy to clarify if you don’t get the specifics.


I see now, I misunderstood your original post. You were saying async/await is necessary because futures work badly, not because all the alternatives (i.e. locks) work badly.

Sorry, my mistake!

Edit to add: futures work badly in every language, so there's no shame in the borrow checker not working with them.

Edit 2: But in that case we're back to "why would Rust want async/await over (potentially green) threads with its first-class support for locks?"


I believe these are the talks from Steve that he's referring to. It was enlightening for me:

1. Rust's Journey to Async/Await - https://www.youtube.com/watch?v=lJ3NC-R3gSI

2. The Talk You've been Await-ing for - https://www.youtube.com/watch?v=NNwK5ZPAJCk

The first video goes into all the bits you're concerned about and all the things that Rust has tried before arriving where it is now.


Language support for green threads requires a heavier runtime (so you'd pay the performance cost even when you didn't write async code).

Tokio basically is green threads as a library.


Regarding your edit 2, I linked two talks I have that go over this in great detail elsewhere in this thread.


Yes. There are some difficult technical issues. In practice, the borrow checker works less well on async code than on threaded code.

This is partly why I prefer threading over async in Rust. Look, we went through some enormous effort to make threading good and fun again. Why wouldn't you use threading?


> Why wouldn't you use threading?

Because the C10^nK problem where n increases periodically is still a thing?


I am not solving C10K problem.


Green threads.


Those are async tasks.

Or rather -- async tasks are as close as you can get to green threads in Rust without a runtime that would impose overhead on every program.


Because you cannot win artificial benchmarks with threading.


It's still fearless, as in you don't need to worry that you might create data races, but can be clunky to write in some cases.


If the borrow checker has no representation of a memory model (for example relaxed/acquire/release), you can't write a concurrent queue without triple-checking statement ordering and the resulting barriers, and then formally verifying it; otherwise you are very likely to introduce data races.
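
For readers who haven't met those orderings, here's a minimal C11 sketch (in C to match the other code in this thread) of the release/acquire publish pattern that such a queue depends on; the names are made up for the example:

    #include <stdatomic.h>
    #include <stdbool.h>

    static int payload;
    static atomic_bool ready;   /* static, so zero-initialized (false) */

    /* Producer: write the data, then publish it with a release store. */
    void publish(void)
    {
        payload = 42;
        atomic_store_explicit(&ready, true, memory_order_release);
    }

    /* Consumer: an acquire load that observes the flag also observes the
       payload write. With memory_order_relaxed instead, reading payload
       would be a data race. */
    int consume(void)
    {
        while (!atomic_load_explicit(&ready, memory_order_acquire))
            ;   /* spin */
        return payload;
    }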


The borrow checker doesn’t understand orderings, as it doesn’t specifically need to. You can get race conditions, but not data races. Yes, you need to be careful when writing this kind of code.


So over the last thousand years salt has been a hyper-inflationary asset, and you're trying to tell me I'm rich for owning some?


Relatively. Should have said wealthy.


This article is such a great opportunity for introspection ("what do we need in order to do better?"). It's too bad that the top comment has turned it into an opportunity for egotism ("they need us!").


This isn’t idle speculation. I’m not a software engineer anymore. Also, I publish with philosophers.

We do not need input from the typical professional philosopher to do better.


In C you can use [0] for postfix pointer dereferencing.
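
A small illustration (the struct and function are made up for the example):

    struct vec { double x, y; };

    double get_x(const struct vec *p)
    {
        double a = (*p).x;   /* prefix dereference */
        double b = p->x;     /* arrow */
        double c = p[0].x;   /* "postfix": p[0] is defined as *(p + 0) */
        (void)a; (void)b;
        return c;
    }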


Alas, that's clumsy, and not possible for declarations. I have used it in expressions at times.

Here's a variation that seems plausible: make postfix p^ be like C's p[0], and infix p^i like C's p[i]. (With a tighter binding for ^ than C has.)


Here's a trick that will actually help produce more secure and reliable programs.

    *outArg = myPtr; myPtr = NULL;
    free(aPtr); aPtr = NULL;
Set your pointers to null when you free them! Set them to null when you transfer ownership! Stop leaving dangling pointers everywhere!

Some people say they like dangling pointers because they want their program to crash if something is freed when they don't expect it to be. Good! Do this:

    assert(ptr);
There are also many more tricks you can do once you start nulling pointers. You can use const to mark pointers that you don't own and thus can't free. You can check that buffers are all zero before you free them to catch memory leaks (this requires zeroing out other fields too of course).

Please, null out your pointers and stop writing (most) use-after-free bugs!
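
For illustration, a minimal sketch of the tricks above (struct and helper names are made up; the all-zero check assumes a null pointer is represented as zero bits, which, as noted downthread, the standard does not guarantee):

    #include <assert.h>
    #include <stdlib.h>

    struct node {
        char *name;          /* owned: we free it */
        const char *label;   /* const marks "not ours to free" */
    };

    /* Debug aid: by free time every byte should be zero, i.e. all owned
       pointers were already freed and nulled. Assumes calloc() allocation
       so the padding starts out zeroed. */
    static void assert_cleared(const void *obj, size_t size)
    {
        const unsigned char *bytes = obj;
        for (size_t i = 0; i < size; i++)
            assert(bytes[i] == 0 && "freeing object with live fields");
    }

    static void node_free(struct node *n)
    {
        if (!n) return;
        free(n->name); n->name = NULL;
        n->label = NULL;               /* clear, but don't free */
        assert_cleared(n, sizeof *n);
        free(n);
    }

    int main(void)
    {
        struct node *n = calloc(1, sizeof *n);
        n->name = malloc(8);
        node_free(n); n = NULL;
        return 0;
    }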


Better, then:

    #define ZFREE(p) do { free(p); p = NULL; } while(0)


You can even do

  #define free(p) do { free(p); p = NULL; } while(0)
And when you want to call the original free:

  (free)(p);
The preprocessor will not substitute this occurrence of free.


C n00b here - why the `do {} while(0)`? Couldn't you just do something like `#define free(p) { free(p); p = NULL; }`?


Semi-experienced C user here; I believe the anonymous block is perfectly adequate. No idea why they are wrapping it in a single-iteration do loop, unless they’re unaware of block scoping or I’m unaware of some UB here.


do {} while(0) is a common idiom for macros in C, because it consumes the trailing semicolon, which a bare {} block doesn't do.

    if(x) MACRO();
    else something();
expands to

    if(x) { ... }; // Error!
    else something();


Here's the actual macro I (sometimes) use:

    #define FREE(ptrptr) do { \
        __typeof__(ptrptr) const __x = (ptrptr); \
        free(*__x); *__x = NULL; \
    } while(0)
There might be a better way of doing it though. Also, __typeof__() obviously isn't standard C.

Edit to add: I've honestly been moving away from using a macro and just putting both statements on one line like in the OP. For something so simple, using a macro seems like overkill.


What's the benefit of that over nn3's version?


It'll only evaluate the pointer once. It's possible to make this a function though; that might be preferable.


Good point. But it seems like it would require usage like this:

    int* p = malloc(sizeof(int));
    FREE(&p);
What if we instead define the macro like this:

    #define FREE(ptr) do { \
        __typeof__(ptr)* const __x = &(ptr); \
        free(*__x); *__x = NULL; \
    } while(0)
Then make usage slightly shorter, as well as more similar to free():

    int* p = malloc(sizeof(int));
    FREE(p);


Taking a pointer-to-pointer is intentional to make it clear that the pointer will be modified. That's actually the most important difference from nn3's version IMHO.


I tried making it a plain function at one point but ran into some weirdness around using void ** with certain arguments (const buffers?). You don't want to accept plain void * because it's too easy to pass a pointer instead of a pointer to a pointer. Using a macro is (ironically) more type-safe.

Maybe someone else could figure out how to do it properly, since I'd definitely prefer a function.
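
For what it's worth, here's a sketch of the function approach and the caveat that probably caused that weirdness (the names are made up):

    #include <stdlib.h>

    /* Frees *pp and nulls it out. */
    static void free_and_null(void **pp)
    {
        free(*pp);
        *pp = NULL;
    }

    int main(void)
    {
        int *p = malloc(sizeof *p);

        /* The cast is needed because int ** doesn't implicitly convert to
           void **, and strictly speaking accessing an int * object through
           a void ** lvalue isn't portable -- which is why the macro ends up
           being the more type-safe option. */
        free_and_null((void **)&p);
        return 0;
    }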


Your approach requires extra checks, though, which are easy to forget. Also, NULL is not guaranteed to be stored as zeros, plus padding is going to make your life annoying.


Well, dangling pointers are also easy to forget... Yes, it requires some discipline. Good code requires discipline, doesn't it?

The trick of checking that buffers are zeroed is purely a debugging tool, so it's okay if it doesn't work on some platforms. And if you allocate with calloc(), the padding will be zeroed for you. It's actually very rare that you will have to call memset() with this technique.


> Good code requires discipline, doesn't it?

This is like the most clichéd way of saying “my code has security vulnerabilities” that there is. I have yet to see code that has remained secure solely on the “discipline” of programmers remembering to check things.

> The trick of checking that buffers are zeroed is purely a debugging tool, so it's okay if it doesn't work on some platforms.

Fair.

> And if you allocate with calloc(), the padding will be zeroed for you.

It might get unzeroed if you work with the memory.


All code is full of vulnerabilites. If you say your code isn't, then I'm sure it is. I just do the best I can to keep the error rate as low as possible. But it's a rate, and it's never zero.

Also, it's not just about vulns in security-critical code. It's also about ordinary bugs. Why not be a little more careful? It won't hurt.

> It might get unzeroed if you work with the memory.

Maybe, but it isn't very common. I'm not sure when the C standard allows changing padding bytes, but in practice the compilers I've used don't seem to do it. And again, it's just a debugging aid, if it causes too much trouble on some platform, just turn it off.


It’s better to have automatic checks than rely on programmers being careful enough to remember to add them. For padding: this probably happens more on architectures that don’t do unaligned accesses very well.


Help me out here, because I'm really trying to understand. Are you saying that dangling pointers that blow up if you double-free them is an "automatic check"? If not, what kind of automatic check are you talking about?

If the extra code is really that bothersome, just use a macro or wrapper function.


It's a much better situation than NULLing them out, because that hides bugs and makes tools like Address Sanitizer useless. A dangling pointer, when freed, will often throw an assert in your allocator; here's an example of how this looks on my computer:

  $ clang -x c -
  #include <stdlib.h>
  
  int main(int argc, char **argv) {
      char *foo = malloc(10);
      free(foo);
      free(foo);
  }
  $ ./a.out
  a.out(14391,0x1024dfd40) malloc: *** error for object 0x11fe06a30: pointer being freed was not allocated
  a.out(14391,0x1024dfd40) malloc: *** set a breakpoint in malloc_error_break to debug
  Abort trap
As you turn up your (automatic) checking this will be caught more and more often. Setting the pointer to NULL will silently hide the error as free(NULL) is a no-op and nothing will catch it. Thus, the suggestion here was

1. advocating adding additional code, which has historically been hard to actually do in practice, and

2. providing a suggestion that is generally worse.


Good points, thank you for explaining!

I can see an argument for wrapping it in a macro so you can turn off nulling in debug builds (ASan might even have hooks so you can automate this; I know Valgrind does). But use-after-free is worse than just double-frees, and if you read a dangling pointer in production there's no real way to catch it AFAIK. Last I heard (admittedly it's been a few years since I checked), you're not supposed to deploy ASan builds because they actually increase the attack surface.

So, your program's memory is full of these dangling pointers, and at some point you will have a bug you didn't catch and use one. And you can't even write an assertion to check that it's valid. What do you propose?

And again to clarify, I'm not trying to advocate for hiding bugs. I want to catch them early (e.g. with assertions), but I also want to avoid reading garbage at runtime at all costs, because that's how programs get pwn'd.


> But use-after-free is worse than just double-frees

From an exploitability point of view they are largely equivalent.

As for the rest of your comment: my point of view is largely "you should catch these with Address Sanitizer in debug", so I don't usually write code like "I should assert if I marked this as freed by NULLing it out". If I actually need to check this for program logic, then of course I'll add something like this.

The macro you suggest would alleviate my concerns, I suppose, and it wouldn't really be fair for me to shoot that solution down solely because I personally don't like these kinds of assertions in production. So it's not a bad option by any means, other than my top-level comment of this requiring extra code. I know some libraries like to take a pointer-to-a-pointer so they can NULL it out for you, so that is an option for your wrapper. And a double-free that doesn't crash can sometimes open up exploitable bugs too since it messes with program invariants that you didn't expect. But these are much rarer than the typical "attacker controlled uninitialized memory ended up where it shouldn't" so it's not a big deal.


Very reasonable! Thank you for the discussion :)


> I have yet to see code that has remained secure solely on the “discipline” of programmers remembering to check things.

That's not what the parent comment said.


I’m not sure what else it could have meant, after they said that programmers should have discipline in response to my pointing out that their approach requires extra checks to work.


Parent said: "Good code requires discipline, doesn't it?"

You retort: "I have yet to see code that has remained secure solely on the “discipline” of programmers remembering to check things."

I think that is a dishonest misrepresentation of what the parent comment said, isn't it?


The “discipline” in this case (see the whole thread) is “have programmers remember to insert checks”, which has historically been a good way to have security holes crop up. So I’m not sure what was dishonest about it?


They argued that discipline is necessary, not sufficient, to produce good code. You represented the argument as: "discipline is sufficient for secure (good) code"

You took the original argument, changed it to be fallacious, and used it as a strawman. That's what was dishonest about it.


I appreciate you defending me, but I don't think he was trying to be dishonest.


I don't think that's fair in this case because nulling out pointers isn't the first line of defense. If you forget to do it once, it's not going to cause a bug in and of itself. You can easily grep the code periodically to find any cases you missed.


I think that's the misunderstanding, then, because to me it seemed to be a defensive coding practice (I think it was certainly presented as such in the top comment). My "you need extra checks" claim was mostly aimed at the additional things you add on to your code assuming that you are now zeroing out freed pointers, which I think can lead to dangerous situations where you may come to rely on this being done consistently when it's a manual process that is easy to forget.

Left unsaid due to the fact I was out doing groceries this morning when I posted that was that I don't think this is even a very good practice in general, as I explained in more detail in other comments here.


Indeed, it shouldn't be a first line of defense (nulling + an assert seems reasonable, fwiw), and accessing a nulled out pointer is just as UB as any other UB. It's probably more likely to crash immediately in practice, but it's also easier for an optimizer to "see through", so you may get surprising optimizations if you get it wrong.

Honestly, unless you really cannot afford it time-budget wise, I would just ship everything with ASAN, UBSAN, etc. and deal with the crash reports.
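
For reference, turning those on with GCC or Clang looks something like this (the exact flag set depends on your toolchain and budget):

    cc -O2 -g -fsanitize=address,undefined -fno-omit-frame-pointer main.c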


Shipping code with Address Sanitizer enabled is generally not advisable; it has fairly high overhead. You should absolutely use it during testing, though!


That's why I added the "afford it" bit.


> NULL is not guaranteed to be stored as zeros

Is that a real issue, though?

> padding is going to make your life annoying

Just memset?


>> NULL is not guaranteed to be stored as zeros

> Is that a real issue, though?

Of course it's not, but that's one of those factoids that everyone learns at some point and then feels the need to rub in everyone else's face, assuming those poor schmucks are as oblivious to it as they once were. A circle of life and all that.


Forgive me for encouraging the adoption of portable, compliant code to those who may not otherwise be aware of it. If you want to assume all the world’s an x86, that’s great, but you should at least know what part of your code is going to be wrong elsewhere.


AMD GPUs use a non-zero NULL pointer[0].

[0] https://reviews.llvm.org/D26196


Interesting. And it isn't even always non-zero; sometimes it's 32-bit -1, sometimes it's 32-bit 0 and sometimes it's 64-bit 0:

https://llvm.org/docs/AMDGPUUsage.html#address-spaces


NULL is required to be stored as all zeros on POSIX systems.


Please don’t get me wrong, but these precautions sound like you are sweeping problems under the carpet, and they will come back out one day. It sounds like you have ownership issues in the design and are trying to hide ‘possible future bugs’.

Do you use sanitizers for use-after-free bugs? I see many people still don’t use them, even though sanitizers have become very good in the last 5-6 years.


It's defensive coding. Do you think defensive driving is 'sweeping problems under the carpet'? (It is, but it's still useful...)

I use every tool at my disposal. Sanitizers, static analyzers... and also not leaving dangling pointers in the first place. Why would I do anything less? It doesn't cost anything except a little effort.

Take a look at this recent HN link: https://www.radsix.com/dashboard1/ . Look at all those use-after-free bugs. Even if it only happens 1% or 0.01% of the time... It's a huge class of bugs in C code. Why not take such a simple step?


If it works for you, then it is okay. It is not ‘a little effort’ for me to worry that someone else might use this pointer mistakenly, so I would need to think about that all the time. It shifts my focus from problem solving to preventing future undefined-behavior bugs. As for the bugs in the link: I don’t know C++; it is a big language that does a lot of things automatically, so it is already scary for me :) Maybe that is it: I mostly write C server-side code (a database) with very well-defined ownership rules. Things are a bit more straightforward compared to any C++ project, I believe. I just checked again, and we don’t have any use-after-free bugs in the bug history, probably because of a 100% branch coverage test suite + fuzzing + sanitizers. So I’d rather add another test to the suite than do defensive programming. It is a personal choice, I guess.


Generally, it is considered preferable to find problems as early as possible. If a program fails to compile or quickly crashes (because of a failed assertion), then I consider that better than having to unit test and fuzz test your code to find that particular problem.

As an added benefit the code also becomes more robust in the production environment, if there are use cases you failed to consider -- 100% branch coverage does not guarantee that there are none!


> Generally, it is considered preferable to find problems as early as possible.

Whole heartedly agree.

> If a program fails to compile or quickly crashes (because of a failed assertion), then I consider that better than having to unit test and fuzz test your code to find that particular problem.

This confuses me. My typical order would be:

fails to compile > unit test > quick crash at runtime > slow crash at runtime (fuzzing)

I am curious to understand why we differ there.


Every problem can be solved in many different ways. If you think you've already got use-after-free bugs under control, then more power to you! You absolutely have to concentrate your effort on whatever your biggest problems are.

But I'll also say that if you don't have any use-after-free bugs in the history of a large C codebase... you might not even be on the lookout for them? I still have them sometimes, mainly when it comes to multiple ownership. And those are just the ones I found eventually.

So yes, different strokes for different folks, but if you make the effort to incorporate tricks like this into your "unconscious" coding style, the ongoing effort is pretty minimal. Even if you decide this trick isn't worth it, there are countless others that you might find worthwhile. I'm always on the lookout for better ways of doing things.


I meant no use-after-free bugs in production; we find plenty in development with daily tests etc., but it looks like we catch them pretty effectively. It works well for us, but that doesn’t mean it’ll work for all other projects, so yeah, I can imagine myself applying such tricks to a project some time. Especially when you jump to another project with messy code, you become paranoid and start to ‘fix’ possible crash scenarios proactively :)


A big reason for defensive coding like nulling pointers is to make the code fail hard when someone messes up while making a change. One can imagine the sort of hell unleashed if the code is later changed to make use of a dangling pointer. That's often the type of bug that slips through testing and ends up causing rare, unexplained crashes/corruption in shipped code. Worse, it can take multiple iterations of changes to finally expose the bug.


This makes UAF easier to detect but double-free impossible to detect. I would consider that to be worse than not doing anything at all, especially since it isn't amenable to modern tooling that is much better at catching these issues than hand-rolled defensive coding.

