Why would you tune it? Because if you set it high enough, then for all your short-lived actors, memory allocation becomes a no-op (bump allocation), and the actor never experiences enough memory pressure to trigger a garbage-collection pass before it exits, at which point the entire process heap can be deallocated as a block. The actor will just “leak” memory onto its heap and then exit, never having had to spend time accounting for it.
This is also done in many video games, where there is a per-frame temporaries heap that has its free pointer reset at the start of each frame. Rather than individually garbage-collecting these values, they can all just be invalidated at once at the end of the frame.
The usual name for such “heaps you pre-allocate to a capacity you’ve tuned to ensure you will never run out of, and then deallocate as a whole later on” is a memory arena. See https://en.wikipedia.org/wiki/Region-based_memory_management for more examples of memory arenas.
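A minimal sketch of that kind of arena in C (hypothetical names, no thread safety, crude alignment): allocation is just a pointer bump, and "deallocation" is resetting the offset, whether that happens at the end of a frame or when the actor exits.

    #include <stddef.h>
    #include <stdint.h>
    #include <stdlib.h>

    typedef struct {
        uint8_t *base;   /* start of the pre-allocated block */
        size_t   cap;    /* capacity tuned so we "never" run out */
        size_t   used;   /* bump offset */
    } Arena;

    static int arena_init(Arena *a, size_t cap) {
        a->base = malloc(cap);
        a->cap  = cap;
        a->used = 0;
        return a->base != NULL;
    }

    /* Allocation: round up for alignment, bump, done. */
    static void *arena_alloc(Arena *a, size_t n) {
        size_t aligned = (a->used + 15u) & ~(size_t)15u;
        if (aligned + n > a->cap) return NULL;   /* tuned capacity exceeded */
        a->used = aligned + n;
        return a->base + aligned;
    }

    /* "Deallocation": invalidate everything at once. */
    static void arena_reset(Arena *a) { a->used = 0; }

Per-frame use is then just one arena_reset() per frame; in the actor case the whole block is simply freed when the actor dies.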
I think pools and arenas mean pretty much the same thing. https://en.wikipedia.org/wiki/Memory_pool I’ve mostly heard this discussed in terms of pools, but I wonder if there’s a subtle difference, or if there’s a historical reason arena is popular in some circles and pool in others...?
I haven't personally seen a per-frame heap while working in console games, even though games I've worked on probably had one, or something like it. Techniques I did see, and that are super common, are fixed maximum-size allocations: just pre-allocate all the memory you'll ever need for some feature and never let it go; stack allocations, sometimes with alloca(); and helper functions/classes that put something on the stack for the lifetime of a particular scope.
A pool or slab allocator separates allocations into one of a range of fixed-size chunks to avoid fragmentation. Such allocators do support object-by-object deallocation.
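For contrast, a rough sketch of a fixed-size pool in C (hypothetical names, single chunk size, no thread safety): freed objects go back onto a free list threaded through the storage itself, so object-by-object deallocation is O(1) and never fragments.

    #include <stddef.h>
    #include <stdlib.h>

    typedef struct PoolNode { struct PoolNode *next; } PoolNode;

    typedef struct {
        void     *storage;   /* one big block carved into equal chunks */
        PoolNode *free_list; /* linked list threaded through the free chunks */
    } Pool;

    static int pool_init(Pool *p, size_t chunk_size, size_t count) {
        if (chunk_size < sizeof(PoolNode)) chunk_size = sizeof(PoolNode);
        p->storage = malloc(chunk_size * count);
        if (!p->storage) return 0;
        p->free_list = NULL;
        for (size_t i = 0; i < count; i++) {
            PoolNode *n = (PoolNode *)((char *)p->storage + i * chunk_size);
            n->next = p->free_list;
            p->free_list = n;
        }
        return 1;
    }

    static void *pool_alloc(Pool *p) {            /* O(1), no searching */
        PoolNode *n = p->free_list;
        if (n) p->free_list = n->next;
        return n;
    }

    static void pool_free(Pool *p, void *obj) {   /* per-object deallocation */
        PoolNode *n = obj;
        n->next = p->free_list;
        p->free_list = n;
    }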
Totally agreed that they aren't required for shipping great console games (and they're really hard to use effectively in C++ since you're pretty much guaranteed to have hanging references if you don't have ascetic levels of discipline). This is mainly just meant as a "here's an example of how they can be used and are by at least one shop".
Like make any pointer to the per frame allocation be a TempPointer or something and then assert they're all gone with a static count variable of them? Then you just have to be cautious whenever you pass a reference to one or convert to a raw pointer.
I don't think this would be too awful for performance in debug builds.
The point though is that it's a step back still from shared_ptr/unique_ptr by becoming a runtime check instead of compile time.
I'd say that arenas are kind of a superset of both what you and I are talking about.
From that standpoint, you could also categorize arenas on a priority basis. This one is for recovery operations, this one for normal operation, and whatever is left for low priority tasks.
That is clever and beautiful. I'll have to look for chances to do something similar and see if I can establish a new habit myself.
One can also do that in stages (rough sketch after the list):
- allocate a large block at startup
- when running out of memory, reallocate it at a smaller size and warn the user
- when running out of memory again, free the block and attempt an orderly shutdown.
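Here is roughly what that staged fallback could look like in C - a hedged sketch with made-up names, meant to be wired into whatever out-of-memory path the real allocator has:

    #include <stdio.h>
    #include <stdlib.h>

    static void  *reserve;
    static size_t reserve_size = 64u << 20;   /* 64 MiB safety block */

    /* Stage 1: grab a large block at startup. */
    void reserve_init(void) {
        reserve = malloc(reserve_size);
    }

    /* Called when a normal allocation fails.
       Returns 1 if the caller should retry, 0 if we are shutting down. */
    int on_out_of_memory(void) {
        if (reserve && reserve_size > (1u << 20)) {
            /* Stage 2: shrink the reserve, warn, and let the caller retry. */
            reserve_size /= 2;
            void *smaller = realloc(reserve, reserve_size);
            if (smaller) reserve = smaller;
            fprintf(stderr, "low memory: reserve shrunk to %zu bytes\n", reserve_size);
            return 1;
        }
        /* Stage 3: give back the whole reserve and attempt an orderly shutdown. */
        free(reserve);
        reserve = NULL;
        fprintf(stderr, "out of memory: attempting orderly shutdown\n");
        return 0;
    }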
> A region, also called a zone, arena, area, or memory context, is a collection of allocated objects that can be efficiently deallocated all at once.
Like, as an embedded developer, these concepts are used pretty much every day. And in a good chunk of those systems, deallocation isn't allowed at all, so you can't say the definition hinges on deallocating everything at once.
You can also see how glibc's malloc internally creates arenas, but that's not to deallocate at once, but instead to manage different locking semantics. https://sourceware.org/glibc/wiki/MallocInternals
There are implementations of arenas that, basically, act as separate memory managers. You can allocate and free memory inside them at will, but can also deallocate the whole thing in one sweep. The latter can be a lot faster, but of course, it requires you to know you can throw away all that memory in one go (handling web requests is the exemplar use case)
No, that's a cache miss.
Just think about how expensive it would be to allocate a 3d vector consisting of 3 floats with malloc() 20000 times and then later deallocate it. Nobody is worrying about the cost of writing 3 floats to RAM. Everyone is worrying about the cost of malloc traversing a free list and it causing memory fragmentation in the process. Meanwhile the arena allocator would be at least as efficient as using an array of 3d vectors.
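A toy illustration of the difference (not a benchmark, just the shape of the two approaches): the first loop pays malloc's bookkeeping 20,000 times and free's another 20,000 times, while the second gets the same storage as one contiguous block that is released in a single call.

    #include <stdlib.h>

    typedef struct { float x, y, z; } Vec3;

    int main(void) {
        /* Per-object: 20,000 trips through the allocator, then 20,000 frees. */
        static Vec3 *vs[20000];
        for (int i = 0; i < 20000; i++) vs[i] = malloc(sizeof(Vec3));
        /* ... use vs[i] ... */
        for (int i = 0; i < 20000; i++) free(vs[i]);

        /* Arena-style: one contiguous block, handed out by bumping an offset,
           thrown away in one go. */
        Vec3 *block = malloc(20000 * sizeof *block);
        /* ... use block[i] ... */
        free(block);
        return 0;
    }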
It's a cost that has to be paid when using bump pointer allocation.
> We are talking about the cost of obtaining a chunk of memory. The cost of actually creating an object is allowed to be much higher because constructors are allowed to execute any arbitrary code.
Accessing main memory is about two orders of magnitude slower than accessing L1. For that time you can run a lot of arbitrary code that accesses data in L1 and registers.
> Just think about how expensive it would be to allocate a 3d vector consisting of 3 floats with malloc() 20000 times and then later deallocate it. Nobody is worrying about the cost of writing 3 floats to RAM. Everyone is worrying about the cost of malloc traversing a free list and it causing memory fragmentation in the process. Meanwhile the arena allocator would be at least as efficient as using an array of 3d vectors.
malloc doesn't mandate free lists, other implementations exist. It's not about relative costs. OP claimed bump pointer allocation to be a "no-op" when it's clearly not.
Where can someone (i.e., in my case a software engineer who's working with Kotlin but has used C++ in his past) read more about modern approaches to writing embedded software for such systems?
I'm asking for one because I'm curious by nature and additionally because I simply take the garbage collector for granted nowadays.
Thanks in advance for any pointers (no pun intended)!
I currently work on spacecraft flight software and the only real advance on this project over something like the space shuttle that I can point to is that we're trying out some continuous integration on this project. We would like to use a lot of modern C++ features, but the compiler for our flight hardware platform is GCC 4.1 (upgrading to GCC 4.3 soon if we're lucky).
Or, to put it another way, if there's a wasp in the room (and there always is), I'd want to know where it is.
Secondly, vendors make modifications during their release process, which introduces new (and fun!) bugs. You're not really avoiding hidden wasps, just labeling some of them. If you simply moved to a newer compiler, you wouldn't have to avoid them, they'd mostly be gone (or at worst, labeled).
Also, think about when flight software started being written. Was Rust an option? And once it came out, do you expect that programmers who are responsible for millions of people’s lives to drop their decades of tested code and development practices to make what is a bet on what is still a new language?
What I find interesting is this mindset. My conservativeness on a project is directly proportional to its importance / criticality, and I can’t think of anything more important or critical than software that runs on a commercial airplane. C is a small, very well understood language. Of course it gives you nothing in terms of automatic memory safety, but that is one tradeoff in the list of hundreds of other dimensions.
When building “important” things it’s important to think about tradeoffs, identify your biases, and make a choice that’s best for the project and the people that the choice will affect. If you told me that the moment anyone dies as a result of my software I would have to be killed, I would make sure to use the most tried-and-true tools available to me.
It wasn't, but Ada probably was (some flight software may have been written before 1980?), and would likely also be a much better choice.
This kind of physics-based safety is obviously not possible for airplanes.
If all electronics fry simultaneously, then the reactor core cools in-place.
I think that’s the problem here - it’s important to analyze orders of magnitude accurately. C isn’t a little more conservative than Rust or Ada. It is orders of magnitude more conservative.
Rust interops with C seamlessly, doesn't it? You don't have to throw out good code to use a better language or framework.
C may be statically analyzable to some degree, but if Rust's multithreading is truly provable, then new code can be Rust and of course still use the tried and true C libraries.
Disclaimer: I still haven't actually learned any Rust, so my logic is CIO-level of potential ignorance.
It’s tempting in a lot of cases to read the data sheet and determine that the product is good enough. But there are a lot of engineering and organizational challenges that aren’t written in the marketing documents.
Those challenges have to be searched for and social and technological tools must be developed to solve those challenges.
As an exercise in use of technology it looks easy but there’s an entire human and organizational side to it that gets lost in discussions on HN.
From someone who works in a mixed C + Rust codebase daily (Something like 2-3M lines of C and 100k lines of Rust), yes and no. They're pretty much ABI compatible, so it's trivial to make calls across the FFI boundary. But each language has its own set of different guarantees it provides and assumes, so it's easy to violate one of those guarantees when crossing a FFI boundary and triggering UB which can stay hidden for months.
One of them is mutability: in C we have some objects which are internally synchronized. If you call an operation on them, either it operates atomically, or it takes a lock, does the operation, and then releases the lock. In Rust, this is termed "interior mutability", and as such these operations would take non-mutable references. But when you actually try that, and make a non-mutable variable in Rust which holds onto this C type, and start calling C methods on it, you run into UB even though it seems like you're using the "right" mutability concepts in each language. On the Rust side, you need to wrap the C struct in an UnsafeCell before calling any methods on it, which becomes not really possible if that synchronized C struct is a member of another C struct.
Another one, although it depends on how exactly you've chosen to implement slices in C since they aren't native: in our C code we pass around buffer slices as (pointer, len) pairs. That looks just like a &[T] slice to Rust. So we convert those types when we cross the FFI boundary. Only, they offer different guarantees: on the C side, the guarantee is generally that it's safe to dereference anything within bounds of the slice. On the rust side, it's that, plus the pointer must point to a valid region of memory (non-null) even if the slice is empty. It's just similar enough that it's easy to overlook and trigger UB by creating an invalid Rust slice from a (NULL, 0) slice in C (which might be more common than you think because so many things are default-initialized. a vector type which isn't populated with data might naturally have cap=0, size=0, buf=NULL).
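The canonical fix is on the Rust side (branch on NULL and substitute NonNull::dangling() before building the slice), but you can also normalize at the C side of the boundary. A hedged sketch with made-up names, for byte slices (wider element types would need a properly aligned sentinel):

    #include <stddef.h>
    #include <stdint.h>

    typedef struct {
        const uint8_t *ptr;
        size_t         len;
    } ByteSlice;                          /* the C-side (pointer, len) pair */

    /* Rust requires a non-null base pointer even for empty slices,
       so never let (NULL, 0) cross the FFI boundary as-is. */
    static const uint8_t empty_sentinel;  /* any valid non-null address will do */

    static ByteSlice slice_for_ffi(const uint8_t *ptr, size_t len) {
        ByteSlice s;
        /* Defensive: a NULL base with len > 0 is already broken, so clamp it. */
        s.len = (ptr == NULL) ? 0 : len;
        s.ptr = (s.len == 0 || ptr == NULL) ? &empty_sentinel : ptr;
        return s;
    }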
So yeah, in theory C + Rust get along well and in practice you're good 99+% of the time. But there are enough subtleties that if you're working on something mission critical you gotta be real careful when mixing the languages.
Do you have a citation for that, because it seems obviously wrong (since the slice points to zero bytes of memory) and I'm having trouble coming up with any situation that would justify it (except possibly using a NULL pointer to indicate the Nothing case of a Maybe<Slice> datum)?
0: by which I mean that Rust is wrong to require that, not that you're wrong about what Rust requires.
`data` must be non-null and aligned even for zero-length slices. One reason for this is that enum layout optimizations may rely on references (including slices of any length) being aligned and non-null to distinguish them from other data. You can obtain a pointer that is usable as data for zero-length slices using NonNull::dangling().
So yes, this requirement allows optimizations like having Option<&[T]> be the same size as &[T] (I just tested and this is the case today: both are the same size).
I'm not convinced that it's "wrong", though. If you want to be able to support slices of zero elements (without using an option/maybe type) you have to put something in the pointer field. C generally chooses NULL, Rust happens to choose a different value. But they're both somewhat arbitrary values. It's not immediately obvious to me that one is a better choice than the other.
> having Option<&[T]> be the same size as &[T]
That is literally what I mentioned as a possible reason ("except possibly ..."), but what I overlooked was that you could take a mutable reference to the &[T] inside an Option<&[T]>, then store a valid &[T] into it - if NULL is allowed, you have effectively mutated the discriminant of an enum while holding active references to its fields, violating some aspect of type/memory safety, though I'm not sure exactly which.
> C generally chooses NULL, Rust happens to choose a different value.
It's not about what pointer value the language chooses when it's asked to create a zero-length slice, it's about whether the language accepts a NULL pointer in a zero-length slice it finds lying around somewhere.
And yet you seem to write with such confidence. /Are/ you a CIO? It’s the only thing that makes sense.
- There’s a high risk of bugs in the compiler/standard library in languages with lots of features
- Usually, the manufacturer of an embedded platform provides a C compiler. Porting a new compiler can be a LOT of work, and the resulting port can often be very buggy
- Even if you can get a compiler to work, many newer languages rely on a complicated runtime/standard library, which is a deal-breaker when your complete program has to fit in a few kilobytes of ROM
Often, the only high-level language available for an embedded platform is a standard C compiler. If you're lucky.
The JPL coding guidelines for C  are an amusing, first-hand read about this stuff. Not sure if you would qualify them as "modern approaches".
I am guessing the idea is to catch runtime errors in the test phase, and assertions are disabled for the production build.
Didn't know that there was a term (i.e., real-time computing) for this kind of systems / constraints.
Fully static allocation is the norm though for most "small" embedded work.
In other words, that sounds like a system where dynamic memory management is significantly riskier and harder to test than usual!
Why not static allocation, but sharing memory between the greedy chunks of code that can't run parallel to each other? (I assume these chunks exist, because otherwise your worst-case analysis for dynamic memory would be exactly the same as for static, and it wouldn't save you anything.)
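Concretely, that overlay can be nothing more exotic than a union of the phases' working buffers, still fully static - a hypothetical sketch, assuming the two phases are guaranteed never to run concurrently:

    #include <stdint.h>

    /* Phase A and phase B never run at the same time, so their scratch
       memory can legitimately occupy the same statically allocated bytes. */
    static union {
        struct { uint8_t telemetry_buf[8 * 1024]; } phase_a;
        struct { int32_t fft_workspace[2 * 1024]; } phase_b;
    } shared_scratch;

    void phase_a_step(void) {
        uint8_t *buf = shared_scratch.phase_a.telemetry_buf;
        /* ... fill and drain buf; worst case is still statically known ... */
        (void)buf;
    }

    void phase_b_step(void) {
        int32_t *w = shared_scratch.phase_b.fft_workspace;
        /* ... use w; same bytes, different phase ... */
        (void)w;
    }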
That's what I wanted to say with my comment actually.
The electronics for trajectory tracking and guidance on a particular missile weren't running fast enough - specifically, the older CPU the software was targeting. The solution was to overclock the CPU to double its rated speed, and to redirect a tiny amount of the liquid oxygen that also happened to be used in the propellant system to cool down the electronics.
This apparently worked fine - by the time the missile ran out of LOX and the electronics burned themselves out, it was going so fast on a ballistic trajectory that it couldn't be reasonably steered anyway.
The telemetry for the self destruct was on a different system that wasn't overclocked, in case of problems with the missile.
Edit: of course HN reacts pedantically when I claim good programmers always consider memory leaks wrong. Do I really need to specify the obvious every time?
In this case you normally want to allocate an arena yourself.
And now the compiler can no longer be embedded into another application, e.g. an IDE.
It's a reasonably pragmatic way of thinking, but beware the consequences. One benefit of working with custom allocators is that you can have the best of both worlds. Unfortunately, custom allocators are clumsy to work with.
In the case of compiler, one solution would be to replace all calls to `malloc` with something like `ccalloc` that simply returns pieces of a `realloc`'d buffer which is freed after the in-IDE compiler has finished compiling.
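A sketch of what such a ccalloc could look like (hypothetical names). One caveat: literally handing out pieces of a single realloc'd buffer would invalidate every pointer already returned whenever realloc moves the block, so this version chains fixed-size chunks instead and frees them all once the in-IDE compile finishes.

    #include <stddef.h>
    #include <stdlib.h>

    #define CC_CHUNK (1u << 20)            /* 1 MiB per chunk */

    typedef struct CcChunk {
        struct CcChunk *next;
        size_t          used;
        unsigned char   data[CC_CHUNK];
    } CcChunk;

    static CcChunk *cc_head;

    void *ccalloc(size_t n) {
        n = (n + 15u) & ~(size_t)15u;      /* keep results aligned */
        if (n > CC_CHUNK) return NULL;     /* oversized requests not handled in this sketch */
        if (!cc_head || cc_head->used + n > CC_CHUNK) {
            CcChunk *c = malloc(sizeof *c);
            if (!c) return NULL;
            c->next = cc_head;
            c->used = 0;
            cc_head = c;
        }
        void *p = cc_head->data + cc_head->used;
        cc_head->used += n;
        return p;
    }

    /* Called once the embedded compiler run is over. */
    void ccfree_all(void) {
        while (cc_head) {
            CcChunk *next = cc_head->next;
            free(cc_head);
            cc_head = next;
        }
    }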
In theory, you could have a cluster of identical nodes each handling client requests (i.e. behind a load balancer). Each node would monitor its own application memory utilization and automatically cycle itself after some threshold is hit (after draining its buffers). From the perspective of the programmer, you now get to operate in a magical domain where you can allocate whatever you want and never think about how it has to be cleaned up. Obviously, you wouldn't want to abuse malloc, but as long as the cycle time of each run is longer than a few minutes I feel the overhead is accounted for.
Also, the above concept could apply to a single node with multiple independent processes performing the same feat, but there may be some increased concerns with memory fragmentation at the OS-level. Worst case with the distributed cluster of nodes, you can simply power cycle the entire node to wipe memory at the physical level and then bring it back up as a clean slate.
In this case I assume that a massive amount of testing mitigates these issues however.
In this particular case, correctness was not primarily assured by a massive amount of testing (though that may have been done), but by a rigorous static analysis.
In postgres memory contexts are used extensively to manage allocations. And in quite few places we intentionally don't do individual frees, but reset the context as a whole (freeing the memory). Obviously only where the total amount of memory is limited...
It may be unwise to override static analysis (a leak is found) with heuristics (the program won't run long enough to matter).
In the case of memory management, it is not enough to just free it after use; you need to ensure that you have sufficient contiguous memory for each allocation. If you decide to go with a memory-compaction scheme, you have to be sure it never introduces excessive latency. It seems quite possible that to guarantee a reallocation scheme always works, you have to do more analysis than you would for a no-reallocation scheme with more memory available.
Ideally we'd be able to tie such assertions into a unified static analysis tool, rather than having humans evaluate conflicting analyses. And god forbid the hardware parameters ever change, because now you need to re-evaluate every such decision, even the ones nobody documented. Case in point: Ariane 5 (not exactly my original scenario, but exactly this one -- a 64-bit -> 16-bit overflow caused a variety of downstream effects ending in mission failure).
The Ariane 5 issue is not, of course, a memory leak or other resource-release-and-reuse issue. It is a cautionary tale about assumptions (such as the article's author's assumption that memory leaks are always bad).
I agree that in general leaking resources is bad, but sometimes it is good enough by a large margin. Just a guess.
My favorite trick to optimizing some systems is to see if I can mlock() all of the data in RAM. As long as it's below 1TiB it's a no brainer - 1TiB is very cheap, much cheaper than engineer salaries that would otherwise be wasted on optimizing some database indices.
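On Linux the blunt version of that is to lock the whole address space so the working set can never be paged out - a small sketch (needs a sufficient RLIMIT_MEMLOCK or the right privileges):

    #include <stdio.h>
    #include <sys/mman.h>

    int lock_everything(void) {
        /* Lock all current and future mappings into RAM. */
        if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
            perror("mlockall");
            return -1;
        }
        return 0;
    }

For a specific data file, mmap() followed by mlock() on that mapping achieves the same effect more selectively.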
It is in the sense that it gets the speedup job done.
I might have to start saying "a binary order of magnitude" instead of "double" when circumstances call for gobbledygook.
Until you have ten thousand machines in your cluster…
I mean just think about how many VMs you can buy for $200k-$500k/yr (total cost to the company of a senior engineer).
 and a network and network admins and server admins and and and
And a lot of languages, and certainly newer versions of the JVM, do exactly that: they don't free memory and don't run the garbage collector until the available memory gets too low. And that is fine for most applications.
For those, it's often the case that they allocate-only, and have a cleanup block for resources like file handles which must be cleaned up; any error longjmps there, and it runs at the end under normal circumstances.
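Roughly the shape that takes in C, as a hypothetical sketch: memory is malloc'd and never freed, and the one cleanup block that closes file handles runs at the end of main in the normal case and is reached via longjmp on any error.

    #include <setjmp.h>
    #include <stdio.h>
    #include <stdlib.h>

    static jmp_buf cleanup_jump;
    static FILE *in, *out;          /* the resources that really must be released */

    static void fail(const char *msg) {
        fprintf(stderr, "%s\n", msg);
        longjmp(cleanup_jump, 1);   /* jump straight to the cleanup block */
    }

    int main(void) {
        int err = 0;
        if (setjmp(cleanup_jump) == 0) {
            in = fopen("input.txt", "r");
            if (!in) fail("cannot open input");
            out = fopen("output.txt", "w");
            if (!out) fail("cannot open output");
            /* ... do the work; malloc freely, never free ... */
        } else {
            err = 1;
        }

        /* Cleanup block: runs at the end normally, and via longjmp on error. */
        if (out) fclose(out);
        if (in) fclose(in);
        return err ? EXIT_FAILURE : EXIT_SUCCESS;
    }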
This is basically using the operating system as the garbage collector, and it works fine.
By design, there was no memory management. The memory was only ever allocated at the start and never de-allocated. All algorithms were implemented around the concept of everything being a static buffer of infinite lifetime.
It was not possible to spring a memory leak.
But there are whole classes of applications that are also mission critical -- an example might be software driving your car or operating dangerous chemical processes.
For automotive industry there are MISRA standards which we used to guide our development process amongst other ideas from NASA and Boeing (yeah, I know... it was some time ago)
Imagine a simple example of a webapp and number of user sessions.
Instead of the app throwing random errors or slowing down drastically, you could have a hard limit on the number of active sessions.
Whenever the app tries to allocate (find a slot) for a user session but it can't (all objects are already used), it will just throw an error.
This ensures that the application will always work correctly once you log in -- you will not experience a slowdown because too many users logged in.
Now, you also need to figure out what to do with users that received an error when trying to log in. They might receive an error and be told to log in later, they might be put on hold by UI and logged in automatically later or they might be redirected by loadbalancer to another server (maybe even started on demand).
When you start doing this for every aspect of the application, you get into a situation where your application never really gets out of its design parameters, and that is one of the important aspects of achieving ultra-stable operation.
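A toy sketch of that fixed-slot idea in C (hypothetical names): the session table is sized once at design time, login either claims a free slot or fails fast, and nothing per-session is ever allocated dynamically.

    #include <stdbool.h>
    #include <string.h>

    #define MAX_SESSIONS 1024                 /* the hard, designed-in limit */

    typedef struct {
        bool in_use;
        char user[64];
        /* ... whatever per-session state the app needs ... */
    } Session;

    static Session sessions[MAX_SESSIONS];    /* allocated once, up front */

    /* Returns a slot, or NULL: "server full, try again later". */
    Session *session_acquire(const char *user) {
        for (int i = 0; i < MAX_SESSIONS; i++) {
            if (!sessions[i].in_use) {
                sessions[i].in_use = true;
                strncpy(sessions[i].user, user, sizeof sessions[i].user - 1);
                sessions[i].user[sizeof sessions[i].user - 1] = '\0';
                return &sessions[i];
            }
        }
        return NULL;                          /* all slots taken: reject the login */
    }

    void session_release(Session *s) {
        memset(s, 0, sizeof *s);              /* slot goes back to the pool */
    }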
It has limited application, but there is a more common variant: let process exit clean up the heap. You can use an efficient bump allocator for `malloc` and make `free` a no-op.
(I am unable to find a link that talks about that, however).
In general, throwing away the whole set at once, together with the structures that maintain it, is always faster than throwing away every item one by one while maintaining the consistency of those structures, despite knowing that none of it is needed at the end.
An example of arenas in C: "Fast Allocation and Deallocation of Memory Based on Object Lifetimes", Hanson, 1988:
Windows has always been my daily driver, and I really do like it. But I wish deleting lots of files would be much, much faster. You've got time to make a cup of coffee if you need to delete a node_modules folder...
The example I gave was for the old times when people had much less RAM and the disks had to move physical heads to access different areas. Now, with SSDs, you shouldn't experience it nearly as badly (at least when using lower-level approaches). How do you start that action? Do you use the GUI? Are the files “deleted” to the Recycle Bin? The fastest way to do it is “low level”, i.e. without moving the files to the Recycle Bin, and without some GUI that is in any way suboptimal (I have almost never used Windows Explorer so I don't know if it has some additional inefficiencies).
I just tried deleting a node_modules folder with 18,500 files in it, hosted on an NVMe drive. Deleting from Windows Explorer, it took 20s.
But then I tried `rmdir /s /q` from your SU link - 4s! I remember trying tricks like this back with an HDD, but don't remember it having such a dramatic impact.
> Deleting from Windows Explorer, it took 20s.
> `rmdir /s /q` from your SU link - 4s
OK, so you saw that your scenarios could run much better, especially if Windows Explorer is avoided. But in Explorer, is that time you measured with deleting to the Recycle Bin or with the Shift Delete (which deletes irreversibly but can be faster)?
Additionally, I'd guess you don't have to wait at all (i.e. you can reduce it to 0 seconds) if you first rename the folder and then start deleting the renamed one, letting that run in the background while you continue with your work -- e.g. if you want to create new content in the original location, it's immediately free after the rename, and the rename is practically instantaneous.
I didn't think about renaming then deleting - that's quite a nice workaround!
There are a lot of strategies for applying garbage collection, and they are often used in low-level systems too - for example per-frame temporary arenas in games, or short-lived programs that just allocate and never free.
I'm sure the technical challenge would be immensely interesting, and I could tell myself that I cared more about accuracy and correctness than other potential hires... but from a moral standpoint, I don't think I could bring myself to do it.
I realise of course that the military uses all sorts of software, including line of business apps, and indeed several military organisations use the B2B security software that my microISV sells, but I think it's very different to directly working on software for killing machines.
One day, I believe during the Iraq occupation, maybe ~12 or 13 years ago, I asked him very directly how he felt about working on these killing machines and whether it bothered him. He smiled and asked if I’d rather have the war here in the U.S.. He also told me he feels like he’s saving lives by being able to so directly target the political enemies, without as much collateral damage as in the past. New technology, he truly believed was preventing innocent civilians from being killed.
It certainly made me think about it, and maybe appreciate somewhat the perspective of people who end up working on war technology, even if I wouldn’t do it. This point of view assumes we’re going to have a war anyway, and no doubt the ideal is just not to have wars, so maybe there’s some rationalization, but OTOH maybe he’s right that he is helping to make the best of a bad situation and saving lives compared to what might happen otherwise.
The US hasn’t been attacked militarily on its own soil in the modern era.
The US military monopoly hasn’t prevented horrific attacks such as 9/11 executed by groups claiming to be motivated by our foreign military campaigns.
I think there is a valid question about the moral culpability of working in this area.
As for the ethics of working on weapons, I think there is a lot of grey when it comes to software. It tends to centralize wealth, since once you get it right it works for everyone. It tends to be dual use, because a hardened OS can be used for both banks and tanks. Even developments in AI are worrying because they're so clearly applicable to the military.
Would I work on a nuclear bomb? No. Would I work on software that does a better job of, say, facial recognition to lessen the likelihood of a predator drone killing an innocent civilian? Maybe. It's not an all or nothing thing.
> Would I work on software that does a better job of, say, facial recognition to lessen the likelihood of a predator drone killing an innocent civilian?
The logical extreme of this is Death Note: the person who has the power simply chooses who should die, and that person dies, immediately and with no opportunity for resistance and no evidence of who killed them. Is that your ideal world? Who do you want to have that power — to define who plays the role of an “innocent civilian” in your sketch — and what do you do if they lose control of it? What do you do if the person or bureaucracy to which you have given such omnipotence turns out not to be incorruptible and perfectly loving?
I suggest watching Slaughterbots: https://m.youtube.com/watch?v=9CO6M2HsoIA
Clearly not. Would you please not post an extreme straw-man and turn this into polarizing ideological judgement? The post you’re responding to very clearly agreed that war is morally questionable, and very clearly argued for middle ground or better, not going to some extreme.
You don’t have to agree with war or endorse any kind of killing in any way to see that some of the activities involved by some of the people are trying to prevent damage rather than cause it.
Intentionally choosing not to acknowledge the nuance in someone’s point of view is ironic in this discussion, because that’s one of the ways that wars start.
It is none of those. It is a non-nuanced extreme that is going to cause damage and kill those of us in the middle ground. Reducing it to a comic book is a way to cut through the confusion and demonstrate that. If you have a reason (that reasonable people will accept) to think that the comic-book scenario is undesirable, you will find that that reason also applies to the facial-recognition-missiles case — perhaps more weakly, perhaps more strongly, but certainly well enough to make it clear that amplifying the humans' power of violence in that way is not going to prevent damage.
Moreover, it is absurd that someone is proposing to build Slaughterbots and you are accusing me of "turn[ing] this into polarizing ideological judgement" because I presented the commonsense, obvious arguments against that course of action.
1. Power will only be exercised by the anonymous and the reckless; government transparency will become a thing of the past. If killing the judge who ruled against you, or the school-board member who voted against teaching Creationism, or the wife you're convinced is cheating on you, is as easy and anonymous as buying porn on Amazon, then no president, no general, no preacher, no judge, and no police officer will dare to show their face. The only people who exercise power non-anonymously would be those whose impulsiveness overcomes their better judgment.
2. To defend against anonymity, defense efforts will necessarily expand to kill not only those who are certain to be the ones launching the attacks, but those who have a reasonable chance of being the ones launching the attacks. Just as the Khmer Rouge killed everyone who wore glasses or knew how to read, we can expect that anyone with the requisite skills whose loyalty to the victors is in question will be killed. Expect North-Korea-style graded loyalty systems in which having a cousin believed to have doubted the regime will sentence you to death.
3. Dead-Hand-type systems cannot be defended against by killing their owners, only by misleading their owners as to your identity. So they become the dominant game strategy. This means that it isn't sufficient to kill people once they are launching attacks; you must kill them before they have a chance to deploy their forces.
4. Battlefields will no longer have borders; war anywhere will mean war everywhere. Combined with Dead Hand systems, the necessity for preemptive strikes, and the enormous capital efficiency of precision munitions, this will result in a holocaust far more rapid and complete than nuclear weapons could ever have threatened.
While this sounds like an awesome plot for a science-fiction novel, I'd rather live in a very different future.
So, I hope that we can develop better defense mechanisms than just drone-striking drone pilots, drone sysadmins, and drone programmers. For example, pervasive surveillance (which also eliminates what we know as "human rights", but doesn't end up with everyone inevitably dead within a few days); undetectable subterranean fortresses; living off-planet in small, high-trust tribes; and immune-system-style area defense with nets, walls, tiny anti-aircraft guns, and so on. With defense mechanisms such as these, the Drone Age should be more survivable than the Nuclear Age.
But, if we can't develop better defense mechanisms than killing attackers, we should delay the advent of the drone holocaust as long as we can, enabling us to enjoy what remains of our lives before it ends them.
> is as easy and anonymous as buying porn on Amazon
I'm not sure ease of use is such a game changer. You can buy a drone today, completely anonymously, strap some explosives to it, remotely fly it into someone and detonate it, a few hundred yards away from you. Easily available cheap drones like that existed for at least a decade, yet I don't remember many cases where someone used them for this purpose. Does Slaughterbot-like product existence make it easier? If some terrorist wants to kill a bunch of people, how is it easier than just detonating a truck full of C4? To a terrorist this technology does not provide that much benefit over what's already available. How about governments? I don't see it - if a government wants someone dead, they will be dead (either officially, e.g. Bin Laden, or unofficially, Epstein-style). If a government wants a bunch of people dead, the difficulty lies not in technology, but in PR. I doubt there is a lack of trigger happy black ops types (or "patriots") ready to do whatever you can program a drone to do. Here I'm talking about democratic first world governments. It's even less clear if tyrannical governments would benefit a lot from this technology - sending a bunch of agents to arrest and execute people is just as effective. I don't think tactical difficulties of finding and physically shooting people is a big concern for decision makers. As you yourself pointed out, Khmer Rouge or North Korea had no problems doing that without any advanced technology.
> you must kill them before they have a chance to deploy their forces.
Yes. And that's how it has been at least since 9/11 - CIA drone strikes all over the world. Honestly, I'd much rather have them only have done drone strikes if at all possible (instead of invading Iraq with boots on the ground).
> more rapid and complete than nuclear weapons could ever have threatened
Sorry, I'm not seeing it - how would this change major conflicts and battlefields? If you have a battlefield, and you know who your enemy is, you don't really need Slaughterbots - you need big guns and missiles that can do real damage. It's much easier to defend soldiers against tiny drones than against heavy fire. If you don't know who your enemy is, say terrorists mixed in the crowd of civilians, how would face detection help you? As for precise military strikes - we're already doing it with drones, so nothing new here.
> end up with everyone inevitably dead within a few days
> enjoy what remains of our lives before it ends them
You are being overly dramatic. Yes, terrorists and evil governments will keep murdering people just like they always have. No, this technology does not make it fundamentally easier. Is the world today a scary place to live in? Yes, but for very different reasons - think about what will start happening in a few decades when the global temperature rises a couple degrees, triggering potentially cataclysmic events affecting livelihood of millions, or global pollution contaminating air, water and food to the point where it's making people sick. I really hope we will develop advanced technology by that time to deal with those issues.
But of course it's way more fun to discuss advanced defense methods against killer drones. So let's do that :) I was thinking that some kind of a small EMP device could have been used whenever slaughterbots are detected, but after reading a little about EMPs it seems it would not be able to hurt them much because these drones are so small. I don't think nets of any kind would be effective - I just don't see how would you cover a city with nets. Underground fortresses and off-planet camps can only protect a small number of people. In some scenarios some kind of laser based defense system could be effective (deployed in high value/risk environments), and of course we can keep tons of similar drones ready to attack other drones at multiple locations throughout the city. Neither of these seem to be particularly effective against a large scale attack, and both require very good mass surveillance. I think that a combination of very pervasive surveillance with an ability to deliver defense drones quickly to the area of the attack (perhaps carried in a missile, fired automatically as soon as a threat level calculated by the surveillance system crosses some threshold) is the best option. The defense drones could be much more expensive than the attack drones, so be able to quickly eliminate them. Fascinating engineering challenge!
The US is known to have carried out drone strikes in Afghanistan, Yemen (including against US citizens), Pakistan, Libya, and Somalia; authority over the assassination program was officially transferred from the CIA to the military by Obama. That leaves another 200-plus countries whose citizens do not yet know the feeling of helpless terror when the car in front of you on the highway explodes into a fireball unexpectedly, presaged only by the far-off ripping sound of a Reaper on the horizon, just like most days. The smaller drones that make this tactic affordable to a wider range of groups will give no such warning.
> It’s much easier to defend soldiers against tiny drones than against heavy fire.
Daesh used tiny drones against soldiers with some effectiveness, but there are several major differences between autonomous drones and heavy fire. First, heavy fire is expensive, requiring either heavy weapons or a large number of small arms. Second, autonomous drones (which Daesh evidently did not have) can travel a lot farther than heavy fire; the attacker can target the soldiers’ families in another city rather than the soldiers themselves, and even if they are targeting the soldiers directly, they do not need to expose themselves to counterattack from the soldiers. Third, almost all bullets miss, but autonomous drones hardly ever need to miss; like a sniper, they can plan for one shot, one kill.
You may be thinking of the 5 m/s quadcopters shown in the Slaughterbots video, but there’s no reason for drones to move that slowly. Slingshot stones, arrows from bows, and bottle-rockets all move on the order of 100 m/s, and you can stick guidance canards on any of them, VAPP-style.
> If you don’t know who your enemy is, say terrorists mixed in the crowd of civilians, how would face detection help you?
Yes, it’s true that if your enemy is protected by anonymity, face-recognition drones are less than useful — that’s why the first step in my scenario is the end of any government transparency, because the only people who can govern in that scenario (in the Westphalian sense of applying deadly force with impunity) are anonymous terrorists. But if the terrorists know who their victims are, the victims cannot protect themselves by mixing into a crowd of civilians.
> Yes, terrorists and evil governments will keep murdering people just like they always have. No, this technology does not make it fundamentally easier.
Well, on the battlefield it definitely will drive down the cost per kill, even though it hasn’t yet. It’s plausible to think that it will drive down the cost per kill in scenarios of mass political killing, as I described above, but you might be right that it won’t.
The two really big changes, though, are not about making killing easier, but about making killing more persuasive, for two reasons. ① It allows the killing to be precisely focused on the desired target, for example enabling armies to kill only the officers of the opposing forces, only the men in a city, or only the workers at a munitions plant, rather than everybody within three kilometers; ② it allows the killing to be truly borderless, so that it’s very nearly as easy to kill the officers’ families as to kill the officers — but only the officers who refuse to surrender.
You say “evil governments”, but killing people to break their will to continue to struggle is not limited to some subset of governments; it is the fundamental way that governments retain power in the face of the threat of invasion.
Covering a city with nets is surprisingly practical, given modern materials like Dyneema and Zylon, but not effective against all kinds of drones. I agree that underground fortresses and off-planet camps cannot save very many people, but perhaps they can preserve some seed of human civilization.
You can even drop a grenade from it; Daesh did that a few hundred times in 2016 and 2017: https://www.bellingcat.com/news/mena/2017/05/24/types-islami... https://www.lemonde.fr/proche-orient/article/2016/10/11/irak...
But that’s not face-recognition-driven, anonymous, long-range, or precision-guided; it might not even be cheap, considering that the alternative may be to lob the grenade by hand or shoot with a sniper rifle. If the radio signal is jammed, the drone falls out of the sky, or at least stays put, and the operator can no longer see out of its camera. As far as I know, the signal on these commercial drones is unencrypted, so there’s no way for the drone to distinguish commands from its buyer from commands from a jammer. Because the signal is emitted constantly, it can guide defenders directly to the place of concealment of the operator. And a quadcopter drone moves slowly compared to a thrown grenade or even a bottlerocket, so it’s relatively easy for the defenders to target.
> Does Slaughterbot-like product existence make it easier?
> If some terrorist wants to kill a bunch of people, how is it easier than just detonating a truck full of C4?
Jeffrey Dahmer wanted to kill a bunch of people. Terrorists want to persuade a bunch of people; the killing is just a means to that end. Here are seven advantages to a terrorist of slaughterbots over a truck full of C4:
1. The driver dies when they set off the truck full of C4.
2. The 200 people killed by the truck full of C4 are kind of random. Some of them might be counterproductive to your cause — for example, most of the deaths in the 1994 bombing of the AMIA here in Buenos Aires were kindergarten-aged kids, which helps to undermine sympathy for the bombers. By contrast, with the slaughterbots, you can kill 200 specific people; for example, journalists who have published articles critical of you, policemen who refused to accept your bribes (or their family members), extortion targets who refused to pay your ransom, neo-Nazis you’ve identified through cluster analysis, drone pilots (or their family members), army officers (or their family members), or just people wearing clothes you don’t like, such as headscarfs (if you’re Narendra Modi) or police uniforms (if you’re an insurgent).
3. A truck full of C4 is like two tonnes of C4. The Slaughterbots video suggests using 3 grams of shaped explosive per target, at which level 600 grams would be needed to kill 200 people. This is on the order of 2000 times lower cost for the explosive, assuming there’s a free market in C4. However...
4. A truck full of C4 requires C4, which is hard to get and arouses suspicion in most places; by contrast, precision-guided munitions can reach these levels of lethality without such powerful explosives, or without any explosives at all, although I will refrain from speculating on details. Certainly both fiction and the industrial safety and health literature is full of examples of machines killing people without any explosives.
5. A truck full of C4 is large and physically straightforward to stop, although this may require heavy materials; after the AMIA truck bombing, all the Jewish community buildings here put up large concrete barricades to prevent a third bombing. So far this has been successful. (However, Nisman, the prosecutor assigned to the AMIA case, surprisingly committed suicide the day before he was due to present his case to the court.) A flock of autonomous drones is potentially very difficult to stop. They don’t have to fly; they can skitter like cockroaches, fall like Dragons’ Teeth, float like a balloon, or stick to the bottoms of the cars of authorized personnel.
6. You can prevent a truck bombing by killing the driver of the truck full of C4 before he arrives at his destination, for example if he tries to barrel through a military checkpoint. In all likelihood this will completely prevent the bombing; if he’s already activated a deadman switch, it will detonate the bomb at a place of your choosing rather than his, and probably kill nobody but him, or maybe a couple of unlucky bystanders. By contrast, an autonomously targeted weapon, or even a fire-and-forget weapon, can be designed to continue to its target once it is deployed, whether or not you kill the operator.
7. Trucks drive 100 km/hour, can only travel on roads, and they carry license plates, making them traceable. Laima, an early Aerosonde, flew the 3270 km from Newfoundland to the UK in 26 hours, powered by 5.7 ℓ of gasoline, in 1998 — while this is only 125 km/hour, it is of course possible to fly much faster at the expense of greater fuel consumption. Modern autonomous aircraft can be much smaller. This means that border checkpoints and walls may be an effective way to prevent trucks full of C4 from getting near their destination city, but they will not help against autonomous aircraft.
> How about governments? I don’t see it - if a government wants someone dead, they will be dead (either officially, e.g. Bin Laden, or unofficially, Epstein-style). If a government wants a bunch of people dead, the difficulty lies not in technology, but in PR.
This is far from true. The US government has a list of people they want dead who are not yet dead — several lists, actually, the notorious Disposition Matrix being only one — and even Ed Snowden and Julian Assange are not on them officially. Killing bin Laden alone cost them almost 10 years, two failed invasions, and the destruction of the world polio eradication effort; Ayman al-Zawahiri has so far survived 20 years on the list. Both of the Venezuelan governments want the other one dead. Hamas, the government of the Gaza Strip, wants the entire Israeli army dead, as does the government of Iran. The Israeli government wanted the Iranian nuclear scientists dead — and in that case it did kill them. The Yemeni government, as well as the Saudi government, wants all the Houthi rebels dead, or at least their commanders, and that has been the case for five years. The Turkish government wants Fethullah Gulen dead. Every government in the region wanted everyone in Daesh dead. In most of these cases no special PR effort would be needed.
Long-range autonomous anonymous drones will change all that.
> sending a bunch of agents to arrest and execute people is just as effective. ... As for precise military strikes - we’re already doing it with drones, so nothing new here.
Sending a bunch of agents is not anonymous or deniable, and it can be stopped by borders; I know people who probably only survived the last dictatorship by fleeing the country. It’s also very expensive; four police officers occupied for half the day is going to cost you the PPP equivalent of about US$1000. That’s two orders of magnitude cheaper than a Hellfire missile (US$117k) but three orders of magnitude more expensive than the rifle cartridge those agents will use to execute the person. The cost of a single-use long-range drone would probably be in the neighborhood of US$1000, but if the attacker can reuse the same drone against multiple targets, they might be able to get the cost down below US$100 per target, three orders of magnitude less than a Hellfire drone strike.
It’s very predictable that as the cost of an attack goes down, its frequency will go up, and it will become accessible to more groups.
(Continued in a sibling comment, presently displayed above.)
The Slaughterbots video is absolutely awful. First of all, quadcopters have an incredibly small payload capacity and limited flight time. A quadcopter lifting a shaped charge would be as big as your head and have 5 minutes of flight. Simply locking your door and hiding under your bed would be enough to stop them. The AI aspect doesn't make them more dangerous than a "smart rifle" that shoots once the barrel points at a target.
Do you know what I am scared of? I am more scared of riot police using 40mm grenade launchers with "non-lethal" projectiles who are knowingly aiming them at my face even though their training clearly taught that these weapons should never be used to aim at someone's head. The end result is lost eyeballs and sometimes even deaths and the people who were targeted aren't just limited to those who are protesting violently in a large crowd. Peaceful bystanders and journalists who were not involved also became victims of this type of police violence. 
As for the first line, you assert that real weapons are expensive, unreliable, and kill unintended people. Except in a de minimis sense, none of these are true of knives. Moreover, you seem to be reasoning on the basis of the premise that future technology is not meaningfully different from current technology.
In conclusion, your comment consists entirely of wishful and uninformed thinking.
> The logical extreme of this is Death Note
I don't really deal with logical extremes. It leads to weird philosophies like Objectivism or Stalinism. In international relations terms, I'm a liberal with a dash of realism and constructivism. I don't live in my ideal world. My ideal world doesn't have torture or murder or war of any kind. It doesn't have extreme wealth inequality or poverty. Unless this is all merely a simulation, I live in the real world. Who has the power to kill people? Lots of people. Everyone driving a car or carrying a gun. Billions of people. It's a matter of degree and targeting and justification and blow-back and economics and ethics and so many other things that it's not really sensible to talk about it.
I'm familiar with the arguments against AI being used on the battlefield, but even though I abhor war, I'm not convinced that there should be a ban.
There’s a vast chasm in between right and wrong though. There can be understanding of others’ perspectives, regardless of my personal judgement. And there is also a valid, tightly related question here about the morals of mitigating damage during a military conflict, especially if the mitigation prevents innocent deaths. If there’s a hard moral line between doctors and cooks and drivers and snipers and drone programmers, I don’t know exactly where it lies. Doctors are generally considered morally good, even war doctors, but if we are at war, it’s certainly better to prevent an injury than to treat one.
The best goal in my opinion is no war.
I will leave the WTC attack on the table, as I’m not interested in a nitpicking tangent about what constitutes an attack in asymmetric warfare vs. “terrorism.”
“The modern era” is usefully vague enough to be unfalsifiable.
Due to the Monroe Doctrine, this is a rational stance for Costa Rica to take. If the US were to adopt this policy, Costa Rica might have to take a hard look at repealing it.
Drones and missiles are definitely a step forward compared to previous technology in many regards, but I can't help but be reminded of people who argued that the development and use of napalm would reduce human suffering by putting an end to the war in Vietnam faster.
For an interesting and rather nuanced (but not 100% realistic) view on drone strikes, I'd recommend giving the 2015 movie Eye in the Sky a watch.
Another issue with drone strikes and missiles is "the bravery of being out of range": it's easier to make the decision to kill someone who you're just watching on a screen than it is to look a person in the eyes and decide to have them killed.
First, I logically agreed that the missiles were supporting our armed services and I believed that our government was generally on the right side of history and needed the best technology to continue defending our freedoms. However, a job, when executed with passion, becomes a very defining core of your identity. I didn’t want death and destruction as my core. I support and admire my college friends who did accept such jobs, but it just wasn’t for me.
Second, I had interned at a government contractor (not the missile manufacturer), and what I saw deeply disturbed me. I came on to a project which was 5 years into a 3-year schedule, and not expected to ship for another 2 years. Shocked, I asked my team lead, “Why didn’t the government just cancel the contract and assign the work to another company?” Her reply: “If they did that, the product likely wouldn’t be delivered in under two years, so they stick with us.” I understood that this mentality was pervasive, and would ultimately become part of me if I continued to work for that company. That mentality was completely unacceptable in the competitive commercial world, and I feared the complacency which would infect me and not prepare me for the eventual time when I’d need to look for a job outside that company.

As a graduating senior, I attended our college job fair, and when speaking with another (non-missile) government contractor, I told the recruiter that I was hesitant about working for his company because I thought it wouldn’t keep me as competitive throughout my career. I repeated the story from my internship, and asked if I’d find the same mentality at his company. His face dropped the cheerful recruiter facade as he pulled me aside and sternly instructed, “You should never repeat that story.” I took that as an overwhelming “yes”.

So, my concern was that working for this missile manufacturer, this government contractor mentality would work its way into their company (if it hadn’t already), and it would be bad for my long-term career. I wanted to remain competitive on a global commercial scale, without relying upon government support.
Software for any system is complex. And it’s quite common for almost every software project to run behind schedule. The Triple Constraint (“schedule, quality, cost: pick any two”) doesn’t even fit software engineering in any kind of serious endeavor, because it’s mostly a “pick one” scenario.
If you’ve worked on projects where all these three were met with the initial projections, then whoever is estimating those has really made sure that they’ve added massive buffers on cost and time or the project is too trivial for a one person team to do in a month or two.
The entire reason Agile came up as a methodology was in recognizing that requirements change all the time, and that the “change is the only constant” refrain should be taken in stride to adapt how teams work.
The average project achieves 1.5 of the triples.
Here are the true constraints though:
- Meets Requirements
Yes, usefulness and meets requirements aren't the same thing, and anyone who has done the madness of large scale enterprise software will be nodding their heads.
What really bogs down most software projects is that "quality" means different things to different actors in projects. Project Managers want to follow process and meet political goals. Users want usefulness, polish, and efficiency. Directors/management want requirements fulfilled they dictate (often reporting and other junk that don't add to ROI).
And that's why I like to say "pick two".
We really need more exposure for the things that people like that want to silence.
He was warning the kid that if he went around repeating that aloud he’d burn himself on the interview trail as someone too naive to toe the corporate line and likely to reveal embarrassing workplace details to outsiders.
He was doing the naive youngster a favor, before he could hurt his own career.
The use of the phrase “people like that” is pretty much always pejorative, in a story where a guy who owes the student absolutely nothing took a moment to warn him “don’t touch the stove, you’ll burn yourself”.
So it’s become a story about government contractors instead of a story about “how I fucked up my job search as a new grad.”
Thank you, random kind recruiter guy.
But, no black suits with billy clubs.
And, I’m not suggesting that anything was out of norm for any of these government contractors. They’re delivering a very specialized service with immense regulations. There are very few companies which can produce the same product, so the competition is low and the feedback loop in procurement cycles is much longer.
I hope you are not writing about the US government. I don't think the US military can be described as protecting our freedoms after interfering and starting wars all over the world in the past. We are sadly mostly the aggressors and not the defenders.
Big projects are hard and they are frequently late. The fact that it is for the government is largely beside the point.
Well, we are the victors, so far. But the war against ourselves is going quite well.
With the exception of nuclear weapons (that's another topic), missiles are designed to destroy one particular target of strategic importance and nothing more. They are too expensive as mass killing weapons, but they are particularly appropriate for defense.
Without missiles, you may need to launch an assault, destroying everything on your way to the target, risk soldier lives, etc... Less accurate weapons mean higher yield to compensate, so more needless destruction.
War is terrible, but I'd rather have it fought with missiles than with mass produced bombs, machine guns, and worst of all, mines.
Case in point: currently the country with the best army in the world is also the one that goes to war the most.
I've been in a similar situation, and I think there is something important to think about: Assuming you'd be working for the defense of a country with a track record of decency (at least a good fraction of the time anyway), you have to decide what people you want taking those jobs.
Is it better that all of the people with qualms refuse to take the positions? ie Do you want that work being done by people with no qualms? Because that sounds pretty terrible too.
Yes, this is the kicker for me. My country does not have such a record. If it did, the hypothetical quandary would still exist, but it would be much diminished.
At one stage in my career I had an opportunity to go work for Betfair. I knew several people there and could bypass most or all of the interview process. At the time it was a rapidly growing online gambling company, not quite the major company it is now. They were paying about half as much again over my existing salary, and technology-wise it would have been a good opportunity.
I ended up having quite a long conversation with a few co-workers around the morality of it. I was against it, for what I thought were pretty much obvious reasons. The house always wins, gambling is an amazing con built up on destroying lives. I don't want to be a part of that, much like I wouldn't work for a tobacco company, oil company etc.
Co-workers were taking what they saw as more pragmatic perspective: Gamblers gonna gamble, doesn't matter if the site is there or not.
If you’re on a bad losing streak, they’ll send a host over to offer you tickets to a show or a buffet. The goal is to get you AWAY from gambling. They know you’re an addict, but want to keep the addiction going.
That’s where they cross the ethical line.
Unless you are an extreme pacifist (which is a perfectly reasonable thing to be), you'll acknowledge the legitimate need for the existence of an army in your country. In that case, the army had better be equipped with missiles than with youths carrying bayonets. And then there's nothing wrong with providing those missiles with technologically advanced guidance systems.
On the other hand, if I worked in "algorithmic trading" or fancy "financial instruments" I would not be able to sleep at night without a fair dose of drugs.
If they were for defense only, I might be able to do it. But instead they are sold to any government with the means to pay, regardless of their human rights record or how they will be used (e.g. Saudi). Aside from selling them on, they are used in conflicts that are hard to justify, beyond filling the coffers of the rich and powerful. Take the latest Iraq war for example: started based on falsified evidence, hundreds of thousands dead, atrocities carried out by the west, schools bombed, incidents covered up...
Given these realities, I just couldn't do it.
My original musing was more thinking along the lines of an ideal world, where I trusted my government; I'm still not sure I could do it.
The technical work was super interesting. Everyone I spoke to was plainly super sharp, and not morally bankrupt. I grappled with moral concerns similar to yours, but truthfully, I don't really have much of a personal ethical problem with it. I was a little more concerned about having to explain it to all of my friends, many of whom lean substantially more liberal in their political views than I do.
Perception, and the pay cut I'd have to take from my current work, ended up being the major things that stopped me from taking it.
For one thing: it missed (or ignored, but I'll default to "missed") my point about the large pay cut involved.
For another: it stack-ranked peer perception as being more important in my decision-making than the pay cut. My original comment certainly appears to value perception over pay. That's a huge miscommunication of my priorities, and my fault. I wasn't about to take a 30% pay cut. The fact that I'd also have to withstand the negative scrutiny of my friends and family just made it that much easier to decline.
For a third: we all hope to do work that we can be proud of. Part of that pride is to be able to hold up the fruits of our labor to others and take pride in having participated in it. I don't think I could have done that without thinking twice had I taken that job - and I'm not just talking about the security clearance angle here. Dealing with the negative reactions from my friends and family would have been a problem for me. Not the biggest one, but a problem. Acceptance is a big precursor to feeling safe, and self actualized. I feel the acceptance of a group of peers now. I do not want to trade that for more intellectually challenging work with an ethical component that my friends and family find questionable. I'd apply this reasoning in equal part to a role where I was paid more, but just as questionable. (I don't want to get rich building technology that enables the next Bernie Madoff, for example.)
Perhaps I'm of a weaker intellectual constitution than you are to be so easily influenced by the opinion of other people. However, I view that mental flexibility as a strength. I also trust the opinion (and, by extension, the underlying moral character) of my peers, who have been a positive moral force in my life. Hence, it was important to me not to compromise their trust and acceptance of me, and my career choices.
While it would undoubtedly be interesting from a technical standpoint, there is a serious moral conundrum - even if it was an ideal world where you trusted your government not to start wars based on flimsy or falsified evidence, start wars for profit, or sell weapons to less scrupulous governments.
Remember you don't get to take away everyone's missiles.
I take your point though, and I'd have much less of a dilemma if the missiles in question were not to be sold to other governments, and only to be used for domestic defence or a clear world-threat type scenario. Which for many Western countries is of course not going to happen.
Why? The more precise missiles are, the better. If no-one agreed to build missile guidance systems, we'd still have carpet bombing and artillery with 100m accuracy.
Right now, for example, Saudi Arabia is bombing Yemen with American-made bombs, and Turkey is using German tanks and Italian drones to grind Kurdish towns and villages into rubble in Syria.
That said, I think any software development that involves the government isn't fun at all, given all the bureaucracy and inefficiency.
If all software is built only to solve the problem at hand, it will take less time to develop, be less likely to have bugs, and perform better.
It isn't clear that coding for reuse is going to get you a net win, especially since computing platforms (the actual hardware) are always evolving, such that reusing code some years later can become sub-optimal for that reason alone.
This is actually highly flexible, since cat(1) recognizes the “-” argument to mean stdin, and so you can `cat a - b` in the middle of a pipeline to “wrap” the output of the previous stage in the contents of files a and b (which could contain e.g. a header and footer to assemble a valid SQL COPY statement from a CSV stream).
This is the conceptual difference between

  pipeline | cat        # does nothing
  pipeline | xargs cat  # leverages cat's ability to open files

The same “does nothing” case in concrete form:

  dd if=./file | myprogram
  dd if=./file | cat | myprogram
Also, if you have multiple data streams, you could handle them with explicit file descriptor redirection in your shell, a la

  (baz | quux) >&4
But it’s pretty simple to instead target the streams into explicit fifo files, and then concatenate those with cat(1).
I've been thinking about this more from the perspective of reusing code from cat than of using the cat binary in multiple contexts. Looking over the thread, it seems like I'm the odd one out here.
IISi’s lawyers claimed on September 7, 2010 that “Netezza secretly reverse engineered IISi’s Geospatial product by, inter alia, modifying the internal installation programs of the product and using dummy programs to access its binary code [ … ] to create what Netezza’s own personnel referred to internally as a “hack” version of Geospatial that would run, albeit very imperfectly, on Netezza’s new TwinFin machine [ … ] Netezza then delivered this “hack” version of Geospatial to a U.S. Government customer (the Central Intelligence Agency) [ … ] According to Netezza’s records, the CIA accepted this “hack” of Geospatial on October 23, 2009, and put it into operation at that time.”
Reality is always more absurd, government agencies remain inept and corrupt even when shrouded in secrecy to cover up their missteps, and by the way, Kubernetes now flies on the F16.
I think one of big problems in software development is that nobody measures the half-life of our assumptions. That is the amount of time it takes for half of the original assumptions to no longer hold.
In my limited experience, the half-life of assumptions in software can easily be as low as about one year. Meaning that in 5 years only 1/32 of the original architecture's assumptions would still hold, if we do not evolve it.
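To make the arithmetic behind that explicit (a back-of-the-envelope sketch, assuming a constant half-life, which is my own simplification):

  fraction of assumptions still holding after t years = (1/2)^(t / half-life)
  with a one-year half-life: (1/2)^5 = 1/32, i.e. roughly 3% after five years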
It's not really kosher, but why not just keep around a fresh process that they can continually fork new handlers from?
I imagine the various long-running PHP node-ish async frameworks curse this history. Though PHP 7 cleaned up a lot of the leaks and inefficient memory structures.
My best guess is a security guarantee.
Long-running worker threads came a long time later, and were indeed intensely criticized from a security perspective at the time, given that they’d be one use-after-free away from exposing a previous user’s password to a new user. (FCGI/WSGI was criticized for the same reason, as compared to the “clean” fork+exec subprocess model of CGI.)
Note that in the context of longer-running connection-oriented protocols, servers are still built in the “accept(2) then fork(2)” model. Postgres forks a process for each connection, for example.
One lesser-thought-about benefit of the forking model is that it allows the OS to “see” requests, and so to apply CPU/memory/IO quotas to them that don't leak over into undue impacts on successive requests against the same worker. Also, the OOM killer will just kill a request, not the whole server.
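To make that “accept(2) then fork(2)” shape concrete, here's a minimal sketch; handle_request(), the port number, and the (absent) error handling are all hypothetical, not any particular server's code:

  /* Minimal accept-then-fork sketch: one child process per connection.
     handle_request() is a hypothetical stand-in for the real work. */
  #include <arpa/inet.h>
  #include <netinet/in.h>
  #include <signal.h>
  #include <string.h>
  #include <sys/socket.h>
  #include <unistd.h>

  static void handle_request(int conn) { (void)conn; /* read request, write response */ }

  int main(void) {
      int listener = socket(AF_INET, SOCK_STREAM, 0);
      struct sockaddr_in addr;
      memset(&addr, 0, sizeof addr);
      addr.sin_family      = AF_INET;
      addr.sin_addr.s_addr = htonl(INADDR_ANY);
      addr.sin_port        = htons(8080);
      if (bind(listener, (struct sockaddr *)&addr, sizeof addr) < 0 || listen(listener, 16) < 0)
          return 1;
      signal(SIGCHLD, SIG_IGN);              /* let the kernel reap exited children */
      for (;;) {
          int conn = accept(listener, NULL, NULL);
          if (conn < 0) continue;
          if (fork() == 0) {                 /* child: serve exactly one request */
              close(listener);
              handle_request(conn);
              close(conn);
              _exit(0);
          }
          close(conn);                       /* parent keeps only the listening socket */
      }
  }

Because each request is its own process, the OS-level quotas and the OOM killer mentioned above apply to exactly one request at a time.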
In the old cgi-bin days, every web request would fork and exec a new script, whether PHP, Perl, a C program, etc. That was replaced with Apache modules (or nsapi, etc.), and then later with long-running process-pool frameworks like fcgi, php-fpm, etc. Perl and PHP then typically didn't fork for every request, but they did create a fresh interpreter context per request, to stay backward compatible, avoid memory leaks, etc. So there's still overhead, but not as heavy as fork/exec.
It's not great for maximizing performance, but it's not hundreds of milliseconds either; forking doesn't take long. What is slow is scripting languages loading their runtimes, but you can fork after the runtime is loaded. If hardware is cheaper than the opportunity cost of debugging leaks rather than adding new features, it makes sense.
So forking alone doesn't cap performance too much; one or two cores could handle >1000 requests per second (billions per month).
I just can't stop laughing over this "ultimate in garbage collection". What a guy.
Btw we dealt a lot with Rational in the 90's. I might have even met him.
To be sure, it is a unique environment, in which you know for a fact that your software does not need to run beyond a certain point in time. And in a situation like that, I think it is OK to say that we have enough of some resource to reach that point in time. (It's sort of like admitting that climate change is real, and will end life on earth, but then counting on The Rapture to excuse not caring.) But that's not what's going on here. It sounds like they weren't really sure that there would definitely be enough memory.
Static or never-reclaimed allocations are common enough in embedded code.
> they had calculated the amount of memory the application would leak in the total possible flight time for the missile and then doubled that number.
The always-leak approach to memory management can also be used in short-lived application code. The D compiler once used this approach  (I'm not sure whether it still does).
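For what it's worth, here's a minimal sketch of what that never-free, budget-for-the-worst-case style can look like; the sizes and names are invented, not the contractor's or the D compiler's actual code:

  /* Bump allocator with no free(): the whole budget is provisioned up
     front, sized to the worst-case leak and then doubled, and reclaimed
     only when the process ends. */
  #include <stddef.h>
  #include <stdint.h>

  #define EXPECTED_LEAK_BYTES (4u * 1024u * 1024u)        /* measured or calculated worst case (hypothetical) */
  #define HEAP_BYTES          (2u * EXPECTED_LEAK_BYTES)  /* "...and then doubled that number" */

  static uint8_t heap[HEAP_BYTES];
  static size_t  heap_used;

  void *alloc(size_t n) {
      n = (n + 7u) & ~(size_t)7u;          /* keep 8-byte alignment */
      if (heap_used + n > HEAP_BYTES)
          return NULL;                     /* budget exhausted: a hard, predictable failure */
      void *p = &heap[heap_used];
      heap_used += n;
      return p;
  }
  /* There is deliberately no corresponding free(). */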
(Not saying that the manufacturer was necessarily wrong in this case; doubling the memory might have added only a tiny manufacturing cost to something that was much more expensive.)
Nobody is claiming that this was done for reasons of good software design. It's perfectly reasonable to suspect it was done for reasons of cost or plain negligence.
There's a reason tech workers protest involvement of their firms with the military. It's because all too often arms are not used as a deterrent or as a means of absolute last resort, but because they are used due to faulty intelligence, public or political pressure, as a means of aggression, without regard to collateral damage or otherwise in a careless way.
The whole point here is the blase way the technician responded, "of course it leaks". The justification given is not that it was necessary for the design, but that it doesn't matter because it's going to explode at the end of its journey!
Garbage collection makes the performance of the code much less deterministic.
A lot of loops on embedded in-order CPUs running without an operating system use cycle counts as a timing mechanism, etc.
There exist options between no reclaim and using a garbage collector which could be considered, depending on the exact technical specifications of the hardware it was running on and the era in which it happened.
But retrofitting technical reasoning about why this may have been done is superfluous. The contractor already said why they did it, and the subtext of the original post is that it was flippant and hilarious.
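For anyone curious what an option between never reclaiming and a full garbage collector can look like, here's a rough sketch of a fixed-size object pool with a free list (the names and sizes are made up): allocation stays deterministic and bounded, but objects can still be returned and reused.

  /* Fixed-size object pool: no fragmentation, no GC pauses, but unlike
     the never-free approach, slots can be recycled.
     pool_init() must be called once at startup. */
  #include <stddef.h>

  #define POOL_SLOTS 256

  typedef struct track { double lat, lon, alt; } track_t;   /* hypothetical payload */

  typedef union slot {
      track_t     obj;
      union slot *next;      /* reused as the free-list link while the slot is free */
  } slot_t;

  static slot_t  pool[POOL_SLOTS];
  static slot_t *free_list;

  void pool_init(void) {
      for (int i = 0; i < POOL_SLOTS - 1; i++)
          pool[i].next = &pool[i + 1];
      pool[POOL_SLOTS - 1].next = NULL;
      free_list = &pool[0];
  }

  track_t *track_alloc(void) {
      slot_t *s = free_list;
      if (!s) return NULL;               /* pool exhausted: a bounded, predictable failure */
      free_list = s->next;
      return &s->obj;
  }

  void track_free(track_t *t) {
      slot_t *s = (slot_t *)t;           /* valid: obj is the union's first view of the slot */
      s->next = free_list;
      free_list = s;
  }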
"Cared only enough" is just your projection. The contractor knew the requirements, and satified the requirements with no waste of engineering time, and no risk of memory reclamation interfering with correct operation. The person complaining about leaks wasted both his time and the contractor's.
When your job is performing an analysis of the code, five minutes asking for a dangerous feature to be justified is ridiculously far from a "waste of time".
This whole procedure appears to be a bit unbelievable. And we're not even talking about code/system maintainability.
Not sure whether the pun was intended or not, but you gave me a good laugh.
Why? I could calculate the average amount of leaking of a program much easier than I could find all the leaks. Calculating just involves performing a typical run under valgrind and seeing how much was never freed. Do that N times and average. Finding the leaks is much more involved.
On the other hand, you can trivially calculate how many measurements you make per unit time, and multiply that by the size of the measurements to upper-bound your storage needs. Hypothetical example: you sample GPS coordinates 20 times per second, which works out to ~160 bytes/sec, 10000 bytes/min, or around 600KB for a full hour of flight. Easy to calculate - hard to fix.
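Carrying that hypothetical through in code (the 8 bytes per sample is just what the ~160 bytes/sec figure above implies; all numbers are illustrative):

  /* Back-of-the-envelope leak budget for the GPS example above. */
  #include <stdio.h>

  int main(void) {
      const double samples_per_sec  = 20.0;
      const double bytes_per_sample = 8.0;     /* implied by ~160 bytes/sec */
      const double flight_seconds   = 3600.0;  /* one hour of flight */
      const double safety_factor    = 2.0;     /* "doubled that number" */

      double leak   = samples_per_sec * bytes_per_sample * flight_seconds;
      double budget = leak * safety_factor;

      printf("worst-case leak: %.0f bytes (~%.0f KB)\n", leak, leak / 1024);
      printf("provisioned:     %.0f bytes (~%.0f KB)\n", budget, budget / 1024);
      return 0;
  }

That works out to roughly 576 KB leaked in the worst case, i.e. the "around 600KB" above, and about 1.1 MB provisioned after doubling.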
Memory usage is discrete, not continuous. It's not as simple as calculating the safety factor on a rope.