NSA guidance on how to protect against software memory safety issues [pdf] (defense.gov)
130 points by cpeterso on Nov 10, 2022 | 103 comments


The term "memory safe language" (also used in the "Securing Open Source Software Act") is a misnomer. The report kind of acknowledges this "Even with a memory safe language, memory management is not entirely memory safe" but then presses on with the term anyway.

I think they're trying to combine the concepts of languages that have GCs and prevent buffer overflows into a catch-all term, but "memory safe language" isn't a good choice. Non-programmers might take the term at face value.

It's kind of like using the term "safe flame thrower", except no one would be lulled into a false sense of security with that one.

Don't get me wrong, I'd rather use a "safer flame thrower" than a "flame thrower", but I also wouldn't trust anyone selling me a "safe flame thrower".

The above is my best Dijkstra impersonation.


The excuse that "safety" could mean a lot of things, and these languages don't solve all of those things, so therefore they shouldn't count and we should stop saying C and C++ are unsafe is really tired at this point.

The current set of papers for WG21 (the "C++ Standards Committee") includes more than one taking this stance, but the standout is P2687 from Bjarne Stroustrup and Gabriel Dos Reis. Bjarne and Gaby describe the situation (in which the US Federal Government has noticed that there's a problem) as an "Emergency".

https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2022/p26...

P2687R0 is an initial draft (even by R0 standards it looks very hastily thrown together), but it wrings its hands about the propensity of programmers to write bad code, and then it offers a list of eight kinds of (un)safety, including "delivering a result in 1.2ms to a device supposedly responding to an external event in 1ms". Alas, although C++ doesn't really help you with any of this list, other languages don't solve all of them, so why bother, right?

It also takes pains to group together intractable problems (e.g. dynamic allocation but with assurance of no leaks) with solved problems (e.g. preventing use-after-free). The R0 draft doesn't really spend much time explaining how, if at all, they would solve the problem (beyond vaguely gesturing at "static analysis"), so it can't be critiqued on that basis.

I'm sure the committee (of which both Bjarne and Gaby are members) will look favourably on this sort of work, but nobody in government should take anything like this seriously until the actual problem gets solved, i.e. the programs stop being full of memory bugs.


I think that, given the cultural issues in some circles that render most of these attempts as valiant knights trying to defeat dragons, many will only cave in when governments decide that using these kinds of languages is akin to using other unsafe products, and require similar levels of compliance.

Software can still be delivered in those languages; after all, governments themselves have millions invested in such codebases. But it comes with an extra price tag versus other, safer alternatives.


What about safety net or safety belt or... a safe? None of those are infallible either.

"Memory safe language" is a perfectly good term for the moment since most languages fall square into "completely unsafe" or "completely safe with the ability to be unsafe in special circumstances".

I don't know of any that are "fairly safe". Maybe something like Carbon will be like that.


"safe" doesn't mean "cannot go wrong". See terms such as safety belt or safety shoes, no one expects those to be infallible. It's simply an extra layer of defense.


Non-programmers reading this will start throwing it around at meetings, prompting some kind of explanation or response from the more technical teams. This pressure on the technical teams will probably result in a net reduction of memory errors over time.


As in the managers will forget about this and move on to the next fad shortly thereafter?


Quoting myself from a similar thread today:

Good advice. As a Lisp developer, I have used Swift and it seems like a nicely designed language, very good tooling on macOS/iPadOS and OK on Linux. Go, Rust, and others are also good.

Off topic, but I thought it sad how, once 9/11 happened and the war on terrorism side-tracked everything, the NSA and FBI apparently (from my private-citizen perspective) stopped doing much of their previous great work on public support for computer security, going after international computer-crime cartels, etc. As a US taxpayer, I would like to see them prioritize that type of work.


Maybe that was deliberate.

It doesn’t make sense to teach terrorists how to secure their systems if you need to exploit these systems to achieve your mission.

Now that Chinese, Russian and North Korean actors are putting in work and causing real economic damage in the West, it looks like defence is starting to hold some value again.


Underwhelming report. It boils down to "rewrite it in <memory safe language>; but be careful, even those allow unsafe regions". Also surprising: no mention of Ada/SPARK, the language people use to guide rockets and pacemakers? Seems pretty damn safe to me, and it handles more than memory safety, such as integer arithmetic.


To your point, you should probably not do anything the NSA tells you to do. A spy agency’s endorsement of Rust is not to its credit.


If I tell you to jump off a bridge so you do that, you're unwise because you obeyed my instructions without thinking.

If I tell you not to jump off a bridge so you do it, you're still unwise because now you're just defying my instructions without thinking.

Maybe the C++ bridge isn't on fire, but if you think you can smell smoke you wouldn't be alone.


Borders on: Hitler was vegan, don't be vegan.


TLDR: Transition from C to Go.

Go was designed by (among others) the father of Unix, Ken Thompson, with an understanding of the mistakes of C and C++. Despite getting hilariously little respect here on Hacker News, the language is (for many purposes) an excellent replacement for C.

(Yes, yes, I know you disagree. Tell me more about how C++ is necessary and all the time you spend fighting it is actually a huge win! Please tell me how slow garbage collection is, all evidence to the contrary!)

Go reads from, and writes to, variable-sized blocks of memory through (pointer, length, capacity) triples, known as "slices". A Go slice "[]int" is essentially a C struct like this, passed around in 3 registers:

  #include <sys/types.h>  /* for ssize_t */

  struct IntSlice {
      int*    pointer;   /* backing array */
      ssize_t length;    /* elements currently in use */
      ssize_t capacity;  /* elements allocated */
  };
Bounds checking in Go is easily disabled (pass the -B flag to the compiler), but almost nobody does so because the performance benefit is minuscule. With bounds checking and garbage collection, Go avoids almost all memory-related bugs.
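
For illustration, a minimal runnable sketch of how those checks surface (the out-of-range index is deliberate, and the panic is recovered only so the program can print it):

  package main

  import "fmt"

  func main() {
      s := make([]int, 3, 8) // one slice header: pointer + length 3 + capacity 8
      fmt.Println(len(s), cap(s))

      // Recover so the deliberate failure below can be printed.
      defer func() {
          if r := recover(); r != nil {
              fmt.Println("caught:", r) // index out of range [5] with length 3
          }
      }()
      _ = s[5] // compiled with a bounds check: panics instead of corrupting memory
  }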

You can understand Go as C redesigned for a world where parallel computing (e.g., dozens of CPU cores) is commonplace, and the benefits of safe code outweigh the tiny costs.

We're not using 80286 chips anymore. We can afford safety.


We could already afford safe coding on the 80286, using Turbo Pascal, Turbo Basic, Quick Pascal, Quick Basic, Modula-2, or Clipper.

Performance-critical stuff was written with inline Assembly, or straight Assembly with nice macro assemblers (TASM, MASM); using C or C++ instead of the above selection only helped bring UNIX stuff to MS-DOS.

EDIT: See books like "PC Intern: System Programming : The Encyclopedia of DOS Programming Know How".

EDIT2: It is available at archive.org, https://archive.org/details/pcinternsystempr0000tisc/mode/2u...


Pascal was (is?) not memory safe, in that it still allows use after free. It was safer than C, though, thanks to making weird casts and pointer arithmetic more difficult. These were normal in C code; in Pascal they are rather something you go out of your way to do.

Also I think the arrays had bounds checking?


That is the usual excuse to keep writing C code: because the alternatives aren't 100% bulletproof, we just keep using the unsafe one.

Yes, it suffers from use-after-free; let's ignore everything else that is safer than C.

Bounds checking is enabled by default, and if one wants to shoot themselves in the foot there is always {$R-}.


Oh yes, the language that STILL has null pointer dereferences is a good replacement for C in this day and age. Please. Go feels like someone stuck in the 80s designed their idea of a modern programming language. Oh, and the idiom of the language is to treat the zero value the same as nil. Brilliant, now I have to litter the code with checks that time.Time{} is not a valid time because???
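
For what it's worth, a minimal sketch of the complaint (Event is a hypothetical type; IsZero is the standard-library check):

  package main

  import (
      "fmt"
      "time"
  )

  type Event struct {
      Name string
      At   time.Time // nothing marks this "required"; unset just means the zero value
  }

  func main() {
      e := Event{Name: "deploy"} // caller forgot At, and this compiles fine
      if e.At.IsZero() {         // the check you end up littering everywhere
          fmt.Println("At was never set")
      }
  }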


Sure: if you don't like C, you won't like Go.

Sounds like C is not your thing, and that's ok.


It doesn't matter if I like it or not; my point is that it is not a C replacement. Unless you are talking about personal preference, then sure. But I don't think we as an industry should look to replace C with Go.


I am not a big Go fan, but it surely is miles ahead of C in terms of language security, and those null pointers are checked. I'd rather have a crash than silent memory corruption (segfaults or core dumps don't happen on all platforms with C's null pointers).

So if the choice is between C and Go, for userspace code, Go is the answer.

As proven by the UNIX fathers' design of Inferno and Limbo, where C is only allowed in the kernel and the Dis VM implementation.


If you don't appreciate, or don't understand, C-style pointers (they're just addresses, where nil is a very useful value) then you won't like C and you won't like Go.

Go is an improved C. It's for people who like C but need C's well-understood problems fixed.

C-style pointers are not a problem. They don't need to be fixed. They're great, and that's why Go retains them.

Here is an example of C-style code making essential use of C-style pointers (especially nil pointers) but it's Go:

https://bugfix-66.com/50788214f539d50382528e86242eb3c846b03f...

You need to be able to represent addresses, and you need to be able to say "address of nothing" (i.e., nil pointer).
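
For illustration, a hypothetical minimal sketch of the pattern (not the code behind the link above): a singly linked list, where a nil pointer is the natural "address of nothing" marking the end.

  package main

  import "fmt"

  type node struct {
      val  int
      next *node // nil is the "address of nothing": it marks the end of the list
  }

  func main() {
      // Build 1 -> 2 -> 3, terminated by nil.
      list := &node{1, &node{2, &node{3, nil}}}

      // Walk until the nil sentinel is reached.
      for n := list; n != nil; n = n.next {
          fmt.Println(n.val)
      }
  }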


Otherwise you have to assign it a value that represents nothing, right? So it’s basically how you want to represent nil/null?


This can feel insurmountable if you didn't know Sum Types are a thing.

Once you've seen Sum Types it's obvious that you just wanted Option<T> - a sum of None and Some of your pointer / reference / whatever type T.

Thus type T must actually point at / be something, whereas Option<T> can be None, and your APIs can be designed accordingly: the function that always adds a Dog to this Kennel should take a Dog, not an Option<Dog> (that would be silly), but the function that takes the most recent addition to the Kennel back out would return Option<Dog>, not Dog, because if the kennel is empty there is no latest addition to return.
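
Go lacks sum types, but a rough sketch of the shape being described is possible with generics (Option, Some, and None here are hypothetical helpers, not any standard library):

  package main

  import "fmt"

  // Option is a hypothetical sum-type stand-in: either a value or nothing.
  type Option[T any] struct {
      value T
      ok    bool
  }

  func Some[T any](v T) Option[T] { return Option[T]{value: v, ok: true} }
  func None[T any]() Option[T]    { return Option[T]{} }

  // Get returns the value and whether one is present.
  func (o Option[T]) Get() (T, bool) { return o.value, o.ok }

  type Dog struct{ Name string }

  type Kennel struct{ dogs []Dog }

  // Add takes a Dog, not an Option[Dog]: callers must have one to add one.
  func (k *Kennel) Add(d Dog) { k.dogs = append(k.dogs, d) }

  // TakeLatest may find the kennel empty, so it returns Option[Dog].
  func (k *Kennel) TakeLatest() Option[Dog] {
      if len(k.dogs) == 0 {
          return None[Dog]()
      }
      d := k.dogs[len(k.dogs)-1]
      k.dogs = k.dogs[:len(k.dogs)-1]
      return Some(d)
  }

  func main() {
      var k Kennel
      k.Add(Dog{Name: "Rex"})
      if d, ok := k.TakeLatest().Get(); ok {
          fmt.Println("took", d.Name)
      }
  }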


It feels like the discrepancy is where the reference is of "one" - the type acts as the reference for one (and then arguably memory efficiency), vs the reference / pointer.

Said in another way... if it makes any sense (this is at the nexus of philosophy and arithmetic):

Since anything resembling emptiness or a length of 0 requires a reference, 0 comes after the one (1, monad).

1 is primordial over 0.

Existence wraps emptiness.

For existence to wrap not emptiness nor 0, it must wrap nil. However, no matter how you reference nil, you must do so through at least one layer of indirection.


The efficiency problem you mention is just a Quality of Implementation issue.

In Rust, for example, Option<&T> is literally the same size as &T, because &T is implemented as a non-null pointer, so the None value fits in the niche where NULL would go if this were a raw pointer. (This is called the guaranteed niche optimization; many other, more elaborate niche optimizations are routinely performed by the compiler but are not promised. However, Rust promises that the optimization we need here will always happen.)

The machine code ends up identical to what you'd get from unsafe languages with raw pointers if you remembered all the necessary checks, however the safe source code written by humans has stronger typing preventing them from writing code that would e.g. put a NULL Dog in a Kennel, an easy mistake in the unsafe languages.

Some languages solve just this one narrow problem by having specifically "nullable" and "non-nullable" types, plus some way to get the non-nullable from a nullable (with a branch if it's null): essentially Option, but built into the language. But wait, as well as this Option feature you also want Result, which is another sum type. Should you add a special built-in for that too? Some languages choose to do so. OK, and how about ControlFlow? Poll? There will be others. I believe that languages should just suck it up and offer sum types.


> they're just addresses, where nil is a very useful value

It is actually the other way around: having a way to represent "no pointer" is fine. What's missing is the ability to check at compile time that a pointer is not nil.


Dynamic memory allocation is a runtime operation and can always fail, so this cannot in general be a compile-time operation if you also want to allow dynamic memory allocation.


There are two common ways to deal with fallible operations: either throw/panic/abort instead of returning a value if creating the value fails, or return a container which must be checked for success before the value can be accessed.

The key is that once a value has definitely been obtained, the compiler needs to be able to track that, otherwise it is unclear whether the value needs to be checked again whenever it is used. Often those checks are not where they should be, leading to many bugs.
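
In Go terms, the second pattern is the familiar (value, error) pair; a minimal sketch (Config and loadConfig are hypothetical), including the tracking gap described above:

  package main

  import (
      "errors"
      "fmt"
  )

  type Config struct{ Addr string }

  // loadConfig is a hypothetical fallible operation: it returns either a
  // value or an error, and the caller must check before using the value.
  func loadConfig(path string) (*Config, error) {
      if path == "" {
          return nil, errors.New("no path given")
      }
      return &Config{Addr: ":8080"}, nil
  }

  func main() {
      cfg, err := loadConfig("app.conf")
      if err != nil {
          panic(err) // pattern one: abort instead of propagating a value
      }
      // Past this check cfg is non-nil, but nothing in Go's type system
      // records that; a later refactor can silently reintroduce a nil deref.
      fmt.Println(cfg.Addr)
  }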


> the language is (for many purposes) an excellent replacement for C.

No, it's not. The runtime wouldn't even fit on many platforms. GC pauses are similarly not acceptable on soft/real-time systems, etc.

Ada/Rust get closer to a replacement.


Rust really felt like a language that took decades of "lessons learned" and the all-star features of many, many languages and neatly packaged them as an ambitious replacement for C/C++, starting from scratch.


I've heard someone once say "if Rust is a criticism of C++, it comes from a position of respect" and that was definitely true in the early days of the programming language.

Interestingly though, I take the stance that Rust is much closer to C than C++.


I like to put it this way: C++ was supposed to be a "better C," and Rust is a better "better C" than C++.


That is a matter of tooling.

There are bare metal Go runtimes being shipped in products today, e.g. TamaGo unikernel for firmware.

As there are real time Java implementations, there could exist Go ones if the market cared about having them as well.


embedded Java, yes. real time?

Oh:

> The RTSJ addressed the critical issues by mandating a minimum specification for the threading model (and allowing other models to be plugged into the VM) and by providing for areas of memory that are not subject to garbage collection, along with threads that are not preemptable by the garbage collector. These areas are instead managed using region-based memory management. The latest specification, 2.0, supports direct device access and deterministic garbage collection as well.

https://en.wikipedia.org/wiki/Real_time_Java

Albeit 2.0 is still WIP at this time.

TIL


Instead of reading wikipedia articles about standards, see products that have been delivering value in production for the last 20 years.

https://www.ptc.com/en/products/developer-tools/perc

https://www.aicas.com/wp/products-services/jamaicavm/


thanks for the pointers

why not just without the part up to the comma? as in

"here are two..."

https://news.ycombinator.com/newsguidelines.html


Probably given the somewhat snarky remark about the state of the real-time Java specification, as if that were what matters, maybe.


oh, it was a real TIL. I first thought: "GCed and real time? not that I know of..." and _then_ I found that spec and deterministic GC, and that's the big TIL I shared. sorry if I came across as snarky myself.


Honestly, you should never have used C on safety-critical systems to begin with. Formal validation is critical, so I'd argue you should have used Ada SPARK instead of the burning dumpster fire that is MISRA-C or similar.


MISRA-C is far from a dumpster fire if the extremely widespread use in the automotive industry is to be believed!


MISRA-C is barely better than a coding style for which people wrote partial enforcement tools.

Studies suggest that some of MISRA's rules, if followed, reduce significant bugs in software; some others increase bugs; and many are neutral, because they forbid things nobody who isn't entering an Obfuscated C Contest would think to do, even in C.

e.g. IIRC MISRA says don't put variable declarations inside parts of a switch and then use the variables in other parts of the switch. Nobody does that, it's very silly.

Or MISRA says you must have a default switch case. So your three-way switch for the headlights enum, OFF / DIPPED / FULL, fails because it needs a default. What's the default? Well, I guess you can assert it's never hit? But your C compiler has enum checking already: without a default it would have flagged if you forgot SPECIAL_HACK, which is in the enum, but thanks to the default the compiler thinks you remembered it, and at runtime in somebody's car SPECIAL_HACK is enabled and the car's CPU crashes.

The automotive industry wanted to write C. Or at least, the programmers it hired did, and nobody said "No. That is a terrible idea, stop it" instead they came up with MISRA C to continue excusing the inexcusable.


a = a; //MISRA


Just don't generate garbage. You shouldn't be using malloc anyway you silly embedded programmer.

Half joking, as a C/C++ programmer here.


malloc() isn't the issue, free() is. Allocate at init, never free, and you'll still have deterministic behavior and no heap fragmentation!*

* Unless you run out of memory, of course. Then you have issues.


malloc() on an embedded platform can still present pitfalls. This article explains why:

https://mcuoneclipse.com/2022/11/06/how-to-make-sure-no-dyna...


Yes, you can still run out of memory. But you can't get a fragmented heap if you never free(). And if you allocate only at init, not during the normal operation stage, you can't run out of memory if you manage to start up successfully.
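
The same discipline carries over to garbage-collected languages: allocate everything up front and recycle it, so steady-state operation creates no new garbage. A minimal hypothetical sketch in Go:

  package main

  import "fmt"

  const (
      numBuffers = 4
      bufSize    = 1024
  )

  // Everything is allocated once, before normal operation begins;
  // buffers are recycled through a channel and never "freed".
  var pool [numBuffers][bufSize]byte
  var free = make(chan *[bufSize]byte, numBuffers)

  func init() {
      for i := range pool {
          free <- &pool[i]
      }
  }

  func main() {
      buf := <-free // acquire a preallocated buffer
      buf[0] = 42   // ... use it ...
      fmt.Println(buf[0])
      free <- buf // return it to the pool; no fragmentation, no garbage
  }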


I've spent my last few weeks writing a platform in Go. Buffer management/strong typing is definitely an improvement on C. Shame about all the nil-pointer panics though :')

In all seriousness, though, it does seem to mitigate a lot of exploitable memory-corruption vulns; perhaps not memory-corruption-caused denial-of-service vulnerabilities as well as Rust et al. do, maybe.


Go gets little respect on HN? We're not reading the same threads.


It's nowhere near as hyped as Rust is here. Which is absurd, considering Go is a more appropriate alternative to C.

Even K from K&R uses it!

https://youtu.be/VVpRj3Po6K4


Go should not be compared to Rust. Go is garbage collected. It should be compared to Java or C#.


Sure, but unlike Java and C#, Go is AOT-compiled to native binaries. It's halfway between Rust and Java/C#.


Java has had AOT compilers since 2000, although they were commercial only, and nowadays there are some free beer ones via GraalVM and OpenJ9.

C# has always supported AOT compilation via NGEN, with the caveat that it needs to be done on-device and only dynamic linking is supported.

Besides NGEN, in the 20 years of .NET history, we had CosmOS AOT compiler, Singularity (whose tech landed on Windows 8 MDIL), Midori (whose tech landed on UWP .NET Native), Mono AOT (still going at it), Unity's IL2CPP, and now Native AOT.


You're right, it should be compared to Mesa/Cedar,

https://www.youtube.com/watch?v=z_dt7NG38V4


Why do people select C? I assume it is because it is the only compiler they have for some target hardware, or because they are working in a codebase like the Linux kernel.

For the former, Go won't help. TinyGo may help some day, but for any weird new chip the first thing they make is a C compiler.

The Linux kernel accepts Rust, or will next year. It does not accept Go. So Torvalds, at least, disagrees that it fits there.

Go has plenty of uses, but writing microcontroller firmware and device drivers are not among them. And why else would you pick C?


I default to C. It's fast, runs everywhere, works with everything, and has few surprises. It's standardized and not captured by any individual or organization.

I can't think of a language that comes close to any of that.


C++


> has few surprises

C++ has few surprises? This is from Scott Meyers, author of several books on the intricacies of C++:

> I no longer plan to update my books to fix technical errors. It's not that I'm too lazy to do it. It's that in order to fix errors, I have to be able to identify them. That's something I no longer trust myself to do.

He then goes on to explain how the language is so complex that in two years he has already forgotten enough of the gotchas that he no longer trusts himself. This falls far short of a language with few surprises.

[0] https://scottmeyers.blogspot.com/2018/09/the-errata-evaluati...


You would be surprised at the surprises hiding in C code.

When the choice comes down to C vs C++, the only possible answer is C++ when safety matters.

If more languages are allowed into the decision pool, then both of them should be avoided.


I write in C only when I need to; in my case I have a limited runtime environment available, so I can't use some other language (which I might otherwise prefer) with a larger runtime requirement, and I also already know C, but not C++. Also, none of the libraries I have to use have C++ APIs. Had I already known C++, and if there were C++ APIs available for the libraries I need, then sure, I might have used C++, but as it is, the results (a C++ program but full of C-style programming because of the APIs) would not be worth the effort of learning C++. C++ also imposes a cost; that of reducing possible contributors to those people who already know C++ – knowledge of C is simply more common than C++.

A C++ program which calls nothing but C libraries will not have the “shape” of a C++ program, but essentially the shape and structure of a regular C program. And the small pieces that remain C++-shaped might not be large enough or provide enough architectural benefit for the drawbacks of C++ to be worth it.

(Paraphrased from this thread: https://news.ycombinator.com/item?id=20849570)


C++ provides lots of safety knobs over raw C, without having to go crazy with language features.


If C++ had no operator overloading, and if it had no exceptions, and if the C++ standard specified name-mangling so FFI could work right, then maybe it would come close.


You are not obliged to use all of that in C++ code.

And if the lack of standard-specified name mangling disqualifies C++, then it also disqualifies C, because there is no such thing as a C ABI, only OS ABIs that happen to be written in C; the platform's C compilers follow the OS ABI, which you can get in C++ with extern "C".


>which you can get in C++ with extern "C".

True, but then you add translation layers everywhere because you used C++ in your function interface.


Are you aware that the GCC, Clang and MSVC C standard libraries are actually written in C++?


C++ compilers let you disable exceptions. I always do this when possible.


People keep forgetting that before 1990, C didn't have that market; others did.

Macro Assemblers and other systems languages lost to C.

Heck, even using Basic, Pascal, or C++ instead of C would already be an improvement, and many don't because of religion, not lack of tooling; there are enough vendors.

F-Secure decided they wanted to use Go for writing firmware, and so they did; the TamaGo unikernel was born and is shipping in USB security keys all over the world.


That isn't exactly standard Go, or even TinyGo; it is a custom Go from one vendor that supports one SoC family.

As you say, it isn't hard to design a better systems language than C, but I don't think it is hard to design one better than Go either.

The key is that it isn't enough to be better. The amount of inertia behind C is so much greater than behind any of its predecessors, just by virtue of time and the growth of the industry.

Ada was better, but had bad timing and bad marketing, and was encumbered by price. It still did pretty well just on its merits. But a lot of Ada domains got reverted to C or C++ for developer supply and interest. Right or wrong, Ada wasn't "cool".

I see Rust much farther ahead on the adoption path. And while it isn't perfect, or even revolutionary, it is an improvement, and the cool kids like it, so management lets me use it.


Just like you won't fit standard ISO C into many embedded scenarios, and have to deal with a pile of vendor-specific extensions that still get called "C". The usual double standard when arguing for C.


No, that is fair; the "C" for TI fixed-point DSPs is pretty odd. And TinyGo is better than I first thought, after I dug in more. It uses an LLVM backend to target lots of architectures. I'm not clear whether you get to keep GC or not, or what other features you lose.

I'm not defending C, and if my only choices for a new project were a weird flavor of C or a weird flavor of Go, I can't imagine I'd pick C. Or even C99 versus a weird flavor of Go.

But I don't foresee that ever being the choice. What does TinyGo offer over other LLVM-backed languages targeting the same architectures? GC seems like it might be the answer, if you don't lose it? Channels and green threads? Library support, if it compiles? I really don't find Rust too complicated after using it for real work. Both Nim and Zig seem like better choices than Go as well. Even Julia, if you do some nonstandard stuff. And Ada would be the other obvious choice.

Edit: sorry, D should probably get a mention here too, with new targets having been added.


Have a look at TinyGo for embedded and systems programming in Go.

Keep in mind Go has an unsafe package for manual memory management and pointer arithmetic. You can also import C-style malloc/calloc should you need them.

https://dgraph.io/blog/post/manual-memory-management-golang-...
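
A minimal sketch of the kind of pointer arithmetic the unsafe package permits (unsafe.Add has been in the standard library since Go 1.17):

  package main

  import (
      "fmt"
      "unsafe"
  )

  func main() {
      xs := [4]int32{10, 20, 30, 40}

      // Take the address of the first element, then step through the
      // array manually, C-style. None of this is bounds-checked.
      p := unsafe.Pointer(&xs[0])
      third := (*int32)(unsafe.Add(p, 2*unsafe.Sizeof(xs[0])))
      fmt.Println(*third) // prints 30
  }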


I mentioned TinyGo in my post, saying it wasn't quite there yet. Before I retorted, I went back to dig in and find out why it didn't seem mature yet. I am changing my mind: it supports lots of architectures through LLVM, and the features it is missing are mostly the ones that should be.

That said, there are at least a couple of languages that let you do manual memory management in safe code (Ada and Rust). While Go has nice syntax, if I'm giving up GC and the other bits missing from TinyGo, what does it offer over D or Zig?


> Go is a more appropriate alternative to C.

Go has GC and fat binaries. Frankly, for a lot of stuff, C# is a way better choice than Go or C++.


C# is a good choice if you like the language more or value generics.

I can see how Go's static compilation can be attractive for small CLI utilities.


Use .NET 7 with Native AOT, Mono AOT or IL2CPP.


C# is great for small CLI utilities.


There is TinyGo, and it's becoming more of a thing in Go circles.


From what I've seen, any language using GC gets less respect (overall) on HN. Kind of like, "Real men don't use GC", even when the GC is optional. But the kind of funny thing about that is that the NSA is recommending various GC languages. Go figure.


Obviously we aren't!


touché


I think Go would have gotten a lot more love from C and C++ users if its designers weren't so resistant to having an optimizing compiler. I'm not referring to removing bounds checks; I'm referring to things like arguments being passed on the stack, and the lack of the normal optimization passes you get from -O2 in GCC. Sometimes you want a slow compiler that does a lot of heavy lifting.

Now, Rust is the new hotness and it doesn't make a lot of sense to use other things if you care about both safety and speed.


Well, to begin with, Go has passed function arguments in registers since 1.17:

> Go 1.17 implements a new way of passing function arguments and results using registers instead of the stack. Benchmarks for a representative set of Go packages and programs show performance improvements of about 5%, and a typical reduction in binary size of about 2%. (https://go.dev/doc/go1.17)

I think the basic problem here is that you don't know the facts.


I think you misunderstand my point: The problem is that it took until around 2020 for this work to start. Around 2015, a lot of would-be adopters from C and C++ didn't jump on the Go bandwagon because of the perf gap.

I use Go a lot today, as a former user of both C and C++, but Rust is probably better as a C/C++ replacement.


Note that Go is not memory safe for concurrent programs. Memory safety is ensured for sequential code only.
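
The canonical illustration is slice tearing: a slice header is a multi-word (pointer, length, capacity) value, and an unsynchronized write can be observed half-done, pairing one slice's pointer with another's length. A deliberately broken sketch (it loops forever; run with go run -race to see the race flagged):

  package main

  var s []byte

  func main() {
      go func() {
          for {
              s = make([]byte, 1) // writer A: short slice
          }
      }()
      go func() {
          for {
              s = make([]byte, 100) // writer B: long slice
          }
      }()
      for {
          x := s // racy read: may pair A's 1-byte pointer with B's length 100
          for i := range x {
              x[i] = 0 // can write past the real allocation: memory corruption
          }
      }
  }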


Neither is Rust, if the memory segment is accessed via OS IPC instead of by threads in the same program; there is nothing Sync and Send can do to help there.

This is a quite common memory access pattern in HPC.


Do you have some examples of this "common memory access pattern in HPC" ?


Sure, here goes one example,

"A Case Study and Characterization of a Many-socket, Multi-tier NUMA HPC Platform"

https://ieeexplore.ieee.org/document/9306956

Or for something with FPGAs and other cool stuff in-between,

"The ATLAS Data Acquisition and High Level Trigger system"

https://iopscience.iop.org/article/10.1088/1748-0221/11/06/P...


I say we jump straight to Rust. Rust goes farther in terms of stopping bugs even beyond memory safety issues.


I think Ken Thompson gets little respect precisely because he designed Go. It's a large stain on his legacy that has single-handedly set back the field of software development.

I think that if Go didn't exist he would be considered "one of the greats".


Beg to differ.

Go is an extremely well designed and balanced language; it is extremely productive while still being easy to maintain, and it has pretty decent performance.


Go is opinionated, for sure, and many people think it's the wrong direction. Not everyone thinks this; the poster I replied to likes Go, and you do as well. But I think the reason he isn't as respected as other names is that there is a substantial group that feels Go is a step backwards.


This is the NSA here.

Chances are you probably don't want to use memory safe languages now.


Ruby™ is a registered trademark of O’Reilly Media Inc. in the United States and/or other countries


Amazing how little things have changed. To this day, ASLR and some basic CFI is all we have?


To steal a comment by Matthew Garrett: "No way to prevent this, says users of only programming language where this happens regularly".


Yeah exactly.


Stack cookies, CET (shadow stack) on Windows, XFG in some cases, non-deterministic heaps, etc…


Lots of important stuff. But also old.



[flagged]


Don't let the [victim memory-spaces] win!


[flagged]


Also Linux and Ghidra... have fun with your Windows stack.


I use OpenBSD though...


Without any additional software right?



