I agree with this, especially when you consider that it is we, the taxpayers, who are largely subsidizing Musk to build out this toll booth.
It is not crazy to believe that in the medium term SpaceX will be one of the only viable ways for America to access space. In that position SpaceX will have a lot of leverage and the ability to extract concessions from anyone wanting to go to space.
Why my tax dollars should go to help the world's richest man build a toll booth between Earth and space is beyond me.
It may seem a bit trite, but the answer right now is 'because SpaceX is actually doing it'. Space was in a massive rut. SpaceX is breaking that rut and I don't begrudge a single penny that has gone towards that. But a core issue that successful people have is that they assume that because they had success in X they are right about everything else. There is good and bad with Musk, as there is with everyone. I personally would like to kick-start the conversation about how cheap access to space can be about society and not about money and personality. We are on the cusp of truly being a space-based species and we are just letting it happen instead of guiding it. Now that the technology is rapidly progressing again, it is time to have the conversation about society and space and not just about how cool the tech is.
Sideshows are the dumbest thing. I live in an area where people do sideshows. Midnight, a dozen or so cars and then a hundred or more random people will show up and make a racket squealing around. More than once they’ve crashed and caused injuries and property damage.
For what? Even if they succeed in not destroying their vehicles or other innocent people's property, or maiming someone, it's a loud, pointless nothing. It's not a competition, it's just being loud, and it smells awful. I don't even get how it's a show, it's literally just cars doing donuts… how many times do you have to see that before it's boring, once? Who goes to these things?
For anyone mystified about what a NIF is who doesn't want to go read the docs:
The BEAM VM (which is the thing that runs erlang / elixir / gleam / etc) has 3 flavors of functions.
- BIFs - Built-in functions, these are written in C and ship with the VM
- NIFs - Natively implemented functions, these are written in any language that can speak the NIF ABI that BEAM exposes. They let you provide a function that looks like a built-in function but that you build yourself.
- User - User functions are written in the language that's running on BEAM, so if you write a function in erlang or elixir, that's a user function.
NIFs allow you to drop down into a lower-level language and extend the VM. Originally most NIFs were written in C, but now a lot more languages have built out nice facilities for writing NIFs. Rust has Rustler and Zig now has Zigler, although people have been writing Zig NIFs for a while without Zigler, and I'm sure people wrote Rust NIFs without Rustler.
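For a rough sketch of what the Elixir side of a hand-rolled NIF looks like (the module name and priv path here are hypothetical):

```elixir
defmodule MyApp.Native do
  # Ask the VM to load the compiled NIF library when this module is
  # loaded; the VM appends the platform extension (.so/.dll) itself.
  @on_load :load_nif

  def load_nif do
    :erlang.load_nif(~c"./priv/my_nif", 0)
  end

  # Pure-Elixir fallback: it only runs if the native implementation
  # failed to load, so callers see an ordinary function either way.
  def add(_a, _b), do: :erlang.nif_error(:nif_not_loaded)
end
```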
It’s important to note that while Erlang has protections so that user code crashing an Erlang process can be recovered from, a faulty NIF can take down the entire virtual machine.
There's a series of things that a NIF must do to be a good citizen. Not crashing is a big one, but also not starving the VM by never yielding (in case the NIF is long-running) is important, plus a few secondary things like using the BEAM allocator so that tooling that monitors memory consumption can see resources consumed by the NIF.
The creator of Zigler has a talk from ElixirConf 2021 on how he made Zig NIFs behave nicely:
I don't think this is right. The process will crash, and the Supervision strategy you are using will determine what happens from there. This is what the BEAM is all about. The thing with NIFs is that they can crash the entire VM if they error.
Erlang's (and Elixir's) error management approach is actually "Let it crash"
This is based on the acknowledgment that if you have a large number of long-running processes, at some point something will crash anyway, so you may just as well be good at managing crashes ;-)
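In practice that means structuring things under supervisors; a minimal sketch (the worker module name is hypothetical):

```elixir
# If MyApp.Worker crashes, the supervisor restarts just that child
# and everything else keeps running.
children = [
  {MyApp.Worker, []}
]

{:ok, _sup} = Supervisor.start_link(children, strategy: :one_for_one)
```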
Yes, but that's not Rust's error management strategy. Most Rust code isn't written with recovery from panics in mind, so it can have unintended consequences if you catch panics and then retry.
How so? The whole point of unwinding is to gracefully clean up on panics; how did it break for you?
It's also not like there is much of a choice here. Unwinding across FFI boundaries (e.g. out of the NIF call) is undefined behaviour, so the only other option is aborting on panics.
It’s pretty common in the Elixir ecosystem for these types of libraries to not change very much. Elixir itself doesn’t change too much so these libraries stay solid without needing frequent updates. It doesn’t mean people aren’t using them. Some libraries even put disclaimers that they are actively maintained even if they haven’t seen an update in a long time. It’s something that takes some getting used to for some people (including myself at one point).
I will second this. I've been using multiple libraries in our production Elixir app that haven't been updated in the last five years. Elixir itself was declared "stable" feature-wise years ago. It may be argued that the type system being introduced is not in keeping with that, but I'm not sure. José is a very cautious and diligent "benevolent dictator" and you get a lot of backward compatibility guarantees. Erlang is the same. Compared to the churn some people might be used to with Node/React etc., it is apples and oranges.
The semantics can certainly be argued, but a type system is sort of on its own tier as far as language features go. Most importantly, there is only going to be one backward-incompatible change, which is the spec syntax; otherwise it is just leveraging how we already write Elixir.
Yes, I'm not worried about it. I've not been following it as closely as I'd like, but from what I've read the core team seems to be taking a very measured incremental approach with the type system.
> It’s pretty common in the Elixir ecosystem for these types of libraries to not change very much.
This is kind of fascinating and seems worthy of more detailed study. I'm sure almost anything looks stable compared to the JavaScript/Python ecosystems, but it would be interesting to see how other ecosystems with venerable old web frameworks or solid old compression libraries compare. But on further reflection... language metrics like "popularity" are also in danger of just quantifying the churn that it takes to keep working stuff working. You can't even measure strictly new projects and hope that helps, because new projects may be a reaction to a perceived need to replace other stuff that's annoyingly unstable over periods of 5-10 years, etc.
Some churn is introduced by trying to keep up with a changing language, standard lib, or other dependencies, but some is just adding features forever or endlessly refactoring aesthetics under different management. Makes me wish for a project badge to indicate a commitment like finished-except-for-bugfixes.
Erlang (and friends) are built with a goal of stability. Operational stability is part of that, but it also comes into play with code and architectural stability.
Maybe it's the functionalness, maybe it's the problem domains, but a lot of the modules have clear boundaries, and the libraries end up having a clear scope and a small code base that moves toward being obviously correct and good for most, and then doesn't change much after that. It might not work for everyone, but most modules don't end up with lots of options to support all the possible use cases.
The underlying bits of OTP don't tend to churn too much either, so old code usually continues to work, unless you managed to have a dependency on something that had a big change. I recall dealing with some changes in timekeeping and random sources, but otherwise I don't remember having to change my Erlang code for OTP updates.
It helps that the OTP team is supporting several major versions (annual releases) simultaneously, so if there's a lot of unnecessary change, that makes their job harder as well as everyone else's.
That is not what I meant. I looked at sorted_set_nif, which doesn't seem to compile on OTP 26 (we're at 27 now), and fastglobal, which has a very old PR with 3 approvals that has not been merged. Elixir libraries may not change _much_, but core libraries like telemetry, Ecto, ExDoc, and Jason still get either minor or patch releases all the time.
If libraries get regular updates, even if they are minor, it indicates they are in use. If they have inactive repositories and low hex.pm download numbers, they may have been abandoned, which can mean you'll have to maintain them yourself in the future, or that the people behind the library found it's not such a good idea after all. This doesn't have to be the case, which is why I asked.
Ah ya, I do see how the optics of this could give off that impression. I don't use this library myself, but the issue is with Elixir 1.15.7 & OTP 26.1.26, which is VERY different from "It doesn't work on OTP 26." Certain patch versions of Elixir and OTP have caused problems before (sorry, I don't have a citation), and this particular issue looks like it's related to dependencies not syncing up on the config change?
I do think more libraries should give that little "We're still maintained" notice, as people not totally ingrained in this might not realize. To some, the fact that there have been no issues reported now that we're on OTP 27 and Elixir 1.17 would be an indicator that all is well.
Rustler wasn't properly forward compatible (only with regard to the build process, a compiled library will work just fine on any newer OTP) until 0.29. They are using 0.22, upgrading Rustler will be enough to get rid of this issue for all future OTP versions.
Thank you for the full story here, as I just gave the issue a cursory glance. As someone quite ingrained in Elixir, I see an issue referencing specific patch versions of Elixir and OTP and immediately understand it's targeting that specific Elixir/OTP combo. But depr brings up a good point that not everyone is immediately going to understand this, especially newcomers to the language, and it’s generally hard not to just read the headline.
Yeah I was trying to explain this to another developer that packages end up being “finished” eventually and seem to continue to work exceptionally well without updates for a really long time.
Something about immutability and the structure of Elixir leads to surprisingly few bugs.
Is any of this code open source? As an outsider, I'm kind of at a loss for why anyone wants this or what you kids are doing over there and how offended I should be by it.
TL;DR: Erlang/Elixir/etc are high level languages and the virtual machine they run on, the BEAM, is optimized for speedy IO but is not so great when it comes to intensive CPU tasks. You'll want to write the latter in a good systems language which is what libraries like this provide (you get C bindings out of the box, I believe).
It's also important to point out ports, because as you mention, NIFs are a way to integrate external code. But as someone else points out, NIFs can crash the entire BEAM VM. Ports are a safer way to integrate external code because they are just another BEAM process that talks to an external program. If that program crashes, then the port process crashes just like any other BEAM process but it won't crash the entire BEAM VM.
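A minimal port sketch, using cat as a stand-in for the external program:

```elixir
# The external program runs in its own OS process; if it dies the
# port closes, but the VM itself is unaffected.
port = Port.open({:spawn_executable, System.find_executable("cat")}, [:binary])
Port.command(port, "hello from the BEAM\n")

receive do
  {^port, {:data, data}} -> IO.puts("echoed back: #{data}")
end
```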
And then there are port drivers, which are the worst of both worlds! They can crash the BEAM and need much more ceremony than NIFs to set up, but they’re pretty nice to do in Zig[1] as well
There's another option and that's setting up an Erlang node in the other language. The Erlang term format is relatively straightforward. But I'm honestly not sure of the benefit of a node versus just using a port.
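For a sense of the term format from the Elixir side, a minimal round-trip sketch:

```elixir
# External Term Format round-trip: any term serializes to a binary
# that another language implementing the format can decode.
bin = :erlang.term_to_binary({:ok, [1, 2, 3]})
<<131, _rest::binary>> = bin   # ETF payloads start with version byte 131
{:ok, [1, 2, 3]} = :erlang.binary_to_term(bin)
```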
The Erlang term format is straightforward, but if you want to set up another node in another language you need to correctly implement/emulate process linking, binaries, and some other stuff too, it's not just a matter of writing a socket to accept and emit Erlang terms.
It's not impossibly large, but it's not something one does on a lark either; if there isn't support in your language already, it's hard to justify this over any of the many, many message buses supported by both Erlang and other languages that don't have so many requirements.
NIFs are great for things that really feel like a relatively quick function call.
If you've got some mathematical/crypto function, chances are you don't want that to go through a command queue to an external port, because that's too much overhead. If it's a many-round crypto function like bcrypt or something, you do need to be a bit careful doing it as a NIF because of runtime. But you wouldn't want to put a SHA-256 through an external program and have to pass all that data to it, etc.
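For example, OTP's own :crypto hashing is NIF-backed, a quick call-like operation with no external round-trip:

```elixir
# :crypto.hash/2 is backed by a NIF (OpenSSL under the hood): a fast,
# call-like operation that fits the NIF model well.
digest = :crypto.hash(:sha256, "some payload")
Base.encode16(digest, case: :lower)
```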
Something that you might actually want queueing for, and that's likely to have potential for memory unsafety, like say transcoding with ffmpeg, would be a good fit for an external port rather than a NIF or a linked-in port driver.
Ports are generally great, but you are running multiple apps and communicating between them using STDIN/STDOUT etc. There are certain corner cases where they might not be suitable. I had been using an OPCUA library where the logging had to be turned off because otherwise it was sending the logs back to our Elixir app and we were expecting Elixir terms. Also the shutdown of the remote end of a port can stop the data getting back to Elixir. There are ways around all of this but it's slightly annoying. In general though, ports work 80% of the time and are really convenient.
Yeap, this is a big one. In Nx we have some facilities for doing zero-copy stuff that only really work if you have, say, Evision and EXLA running on the same OS process.
We do have IPC handles that could enable this over, say, ports, but then there's a whole other discussion on pointers vs ipc handles
Do NIFs have the equal-process-time stuff that regular Elixir processes have? Where the BEAM will move the scheduler onto another process if one is taking too long?
Forgive me if I'm mixing up my terminology it's been a bit since I have poked at Elixir.
BEAM can't preempt native code; that's why NIFs should either be fast/low-latency so as not to excessively block the scheduler, or be put on what's called a dirty scheduler, which just means running them on a separate thread.
Nope, at least not by default or like one would expect from pure Erlang (when it comes to preempting). Been a while since I dug into this admittedly, but I write Elixir daily for work (and have for about ten years now). NIFs don’t do the record keeping necessary for the BEAM to interrupt them. You need to make sure the NIF runs on a “dirty scheduler” or you can end up blocking other processes on the same scheduler.
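You can see the separate dirty scheduler pools the VM starts alongside the normal ones:

```elixir
# Dirty schedulers are separate OS threads for native code the BEAM
# can't preempt; long-running NIF work is pushed onto these pools.
:erlang.system_info(:schedulers_online)
:erlang.system_info(:dirty_cpu_schedulers)  # defaults to the core count
:erlang.system_info(:dirty_io_schedulers)   # defaults to 10
```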
The fact that they are having to sell these shares to try to right the ship means the market decided. Otherwise they could just keep extracting as much profit as possible.
Oddly enough people that fly on planes want to be confident that the planes will work.
Exactly this. As someone that’s served as an oncall engineer for years now, the skills you need to operate a cluster are completely different from the things leetcode tests for.
There are thousands, and I’m not exaggerating, literally thousands of leetcode problems.
A lot of them require you to make what is, to me at least, some non-obvious clever observation about the problem. Sure, the problem is talking about a guy robbing houses, but if you stand on your head and squint just right you’ll realize it’s actually a graph cycle detection problem and you should use Floyd’s algorithm to solve it.
Because there are thousands of these problems, the amount of time it would take someone to become familiar with them all is prohibitive. So you are at the mercy of the interviewer: have they picked a super clever one? Are they going to be ok with removing duplicates from the answer by tossing stuff in a set, or do they want you to pull some dynamic programming out of thin air?
It’s the part where you have to divine the trick under pressure that measures nothing of value. I’ve been a professional software engineer for 2 decades. I’ve had times when I’ve been trying to solve some very tricky problem and done research and thought thoughts and come up with pretty clever solutions, or at least I think they are clever. Not once have I had to do this under pressure in a 45 minute time box with someone looking over my shoulder.
That’s my objection to leetcode. Sure it’s great if a candidate can recognize that your riddle about topological map rain capture is actually just a restatement of Kolger’s postulate at first glance (a problem and postulate I’ve just made up because I’m not going to wade back into leetcode right now) but that’s an insane thing to optimize for.
The vast majority of the problems programmers solve are actually just mapping business domains into code. The most common problem that needs solving is taking squishy, incomplete, and contradictory requirements from multiple stakeholders and figuring out what needs to get done. People in the real world are rarely rolling their own data structures, because the red-black tree you slap together is going to be infinitely worse than the battle-tested, highly optimized one you can pull out of the standard library or off your package manager.
In my long career I’ve had a handful of occasions to actually build a data structure or solve a problem with some very clever algorithm. And in those cases you don’t really want people shooting from the hip anyways, you would want them to do research and see what prior art exists so they can discover something like Floyd’s algorithm for finding cycles in a unidirectional graph (ok this one is real).
It is not clear to me what exactly leetcode tests. My best guess would be your ability to take a disguised question, convert it into one of a handful of problem shapes, and solve that. But if you grade leetcode like the website does during an interview, expect to lose a lot of perfectly fine candidates along the way.
I saw a video from people in the car. The screen allowed them to pick between 2 different destinations. It’s not like they could just punch in any old address.
Seems like the food producer would have a clear incentive to mark their products with sell by dates.
If I’m making a food product for sale at retail and I mark it “sell by” some date, consumers get confused and think it’s no good after that date, and the store that buys my food product won’t want to keep it on the shelf because consumers won’t buy it.
The retailer has to discard the perfectly fine item and reorder from… me, the food product producer, and then I make more money.
I imagine the food retailers are happy about this and the food manufacturers are probably unhappy.
I think you can come up with a lot of scenarios in which any one person is happy or not happy. But in complicated systems it's impossible to know exactly, and you have to rely on market forces to work these things out. The problem is people coming in without any humility, trying to tinker with the system, and getting shocked when there are unexpected consequences.
I read this many years ago. For something written in 2003, it's interesting to read again now and see which parts seem more or less plausible 20 years on.
I think the nice thing about science fiction is that even bad science fiction can contain interesting ideas. Sorta like pizza: even bad pizza is pretty good.
I think the person is concerned with client-side compute, not just server-side compute. The article does not mention whether zstd has additional decompression overhead compared to zlib.
Client-side compute may sound like a contrived issue, but Discord runs on a wide variety of devices. Many of these devices are not necessarily the latest flagship smartphones, or a computer with a recent CPU.
I am going to guess that zstd decompression is roughly as expensive as zlib's, since (de)compression time was a motivating factor in the development of zstd. That's also the reason to prefer zstd over xz, despite the latter providing better compression efficiency.
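One way to sanity-check the client-side cost is to measure it; a rough timing sketch with OTP's built-in :zlib (zstd isn't in the stdlib, so its side of the comparison would need a NIF-based package):

```elixir
data = :binary.copy("some payload to squeeze ", 100_000)

# :timer.tc/1 returns {microseconds, result}.
{c_us, compressed} = :timer.tc(fn -> :zlib.compress(data) end)
{d_us, ^data} = :timer.tc(fn -> :zlib.uncompress(compressed) end)

IO.puts("compress: #{c_us}us, decompress: #{d_us}us, " <>
        "ratio: #{Float.round(byte_size(compressed) / byte_size(data), 3)}")
```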
Though I always thought lz4 was the sweet spot for anything requiring speed: a somewhat lower compression ratio in exchange for very fast compression and decompression.