
"Inflatable" doesn't have to mean "tightly stretched rubber like a balloon" or "lowest bidder plastic heat-bonded together with a picture of a flamingo in it". You can build strong materials that will still inflate, especially with .5-1 atmospheres of pressure, no problem.

"The separation is odd anyway, what is the reasoning for wanting to sort by last/family name?"

Cultural assumptions that date from before the world being as small as it is, more than anything else.

HN is in English, and there's a bias towards stories of people from non-English areas integrating into English-speaking locales (which does not have to be "the US"; it's much broader than that) because that's what gets noticed and told around here. But it flows in all directions and all cultures. I've read plenty of tales of woe from non-Japanese trying to live in Japan and integrate with its legal system, running into all sorts of trouble because their names don't fit into it. And just like the West slowly gets better about accommodating Eastern names, it slowly gets better the other way around too.

It's a generalized culture-mismatch problem. I'll cop to being somewhat of the opinion that the "Xty-Billion Things Programmers Don't Know About {Names/Time/Addresses}" articles can end up being overly harsh as they are generally written, but at their core they do have a point, and while we may read them in English, with examples about our more Western sensibilities impacting a broader world, every culture has that problem and could have its own equivalent documents written in its own language with its own examples (e.g. "not everyone in the world lists their family name first").


I do not know where I first saw it, but I saw someone observe that the shocking thing about Einstein's theory of relativity is not that everything is relative; that was understood reasonably well for a long time before him. What is shocking is that the speed of light in a vacuum, or the speed of causality, is a constant. You can derive huge swathes of relativity from that fact alone.

As a result, of all the speeds, when discussing the speed of light (which conventionally means "in a vacuum" unless otherwise mentioned), you can in fact ignore the question of reference frame, with the exception that you don't want to use a reference frame that is itself moving at the speed of light. But other than that, an article exclusively confining itself to the discussion of the speed of light doesn't need to worry about the relativity of reference frames. For that speed alone, you can't level the complaint against it that it ignores the issue of different frames, because it uniquely doesn't matter.

(Relatedly: the reason frequently given for why you can't reach or exceed the speed of light is some stuff about mass increasing. While mathematically true in its own way, I think there's a cleaner reason to explain why you can't reach or exceed it, which is that you can't even get closer to it. No matter what you do, the speed of light is c. You accelerate to a thousand miles a second in some direction, and how much closer are you to c? The answer is: none. Light continues fleeing from you, in all directions, at c. There is no "get really close to c somehow and then just push yourself over really hard" because there is no "get close to c" in the first place. No matter how hard you accelerate, in what direction, in what order, in what manner, you not only can't get "close" to c, you can't even get closer. For similar reasons, in this paragraph, I don't need to qualify in which reference frame you go a thousand miles per second different than before, because it doesn't matter for c. You can't even get slightly closer to it, let alone "exceed" it somehow. It is an absolute.)
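
A quick numerical way to see the "can't even get closer" point, using the standard relativistic velocity-composition formula (this is just an illustrative sketch, not anything from the comment above):

    c = 299_792_458                      # speed of light in vacuum, m/s

    def compose(u, v):
        # Relativistic composition of collinear velocities u and v.
        return (u + v) / (1 + u * v / c**2)

    boost = 1_609_344                    # roughly a thousand miles per second, in m/s

    # Classically you'd expect light ahead of you to recede at c - boost.
    # Relativistically it still recedes at exactly c (up to float rounding):
    print(compose(c, -boost))            # ~299792458.0, i.e. still c
    print(compose(0.5 * c, 0.5 * c))     # 0.8 c, not c: ordinary speeds don't simply add either

However you iterate that composition with sublight boosts, the result stays below c; compose anything with c itself and you get c back.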


Please bear with me, but how is this true?

If we're in an airplane going 250 m/s - it also takes me 2 hours to fly to NYC. The air around the plane is windy, the air inside is still.

Now we're in a spaceship headed from the earth to the moon & it's going to take us 3 seconds. We've agreed ahead of time it takes 1.5 seconds for light to go from the earth to the moon. C is the speed at which light moves, and we're taking 2x the amount of time light would. The air inside is still. But we're still going fast.

How could it be that we are not approaching the speed of light?

What do you make of time dilation at higher speeds if we aren't approaching the speed of light at all, ever?


It's the stumbling block that everyone hits when grasping this stuff, and it's rooted in the old, faulty kind of speed calculation. Speeds don't add like you think.

There are many ways to grasp this, and one has to try several of them to find an intuitive explanation that works for oneself; but one way is to consider that you, in your spaceship (as long as you are coasting along with no rockets firing, and aren't performing orbit/deorbit burns), are, in your frame of reference, at rest. Your speed is zero. You haven't approached anything at all, and light is still whizzing away from you at c. Indeed, it's Terra and Luna that are experiencing time dilation as far as you are concerned, because they are the ones with the high speeds.

* https://www.youtube.com/watch?v=Zkv8sW6y3sY (FloatHeadPhysics addressing this in another way: there are other approaches still)

Accelerate at the beginning and end of your trip over to Luna, and of course general relativity comes into play and things get more complicated. What many gedankenexperiments get wrong is that usually there's only a short period of burning the rockets, in the real world. So for most of your trip in the rocket you aren't burning propellant and are in an inertial frame of reference. No inertial frame of reference approaches the speed of light/causality, by postulate 2 of special relativity, and one is always at speed zero in one's own inertial frame of reference.
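
To put rough numbers on the Earth-to-Moon example above (using the thread's stipulated 1.5 light-seconds and a 3-second trip, i.e. 0.5c in Terra's frame), here's a back-of-the-envelope sketch of the time dilation involved:

    import math

    c = 299_792_458                  # m/s
    v = 0.5 * c                      # the thread's stipulation: the ship takes 2x light's time

    gamma = 1 / math.sqrt(1 - (v / c) ** 2)
    print(gamma)                     # ~1.155

    # As measured from Terra, the trip takes 3 s of Terra time but only
    # 3 / gamma ~= 2.6 s of ship time -- and symmetrically, from the ship's
    # frame it's Terra's and Luna's clocks that run slow.
    print(3 / gamma)                 # ~2.6 s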


That motion is relative was understood. That even notions like simultaneity also depend on reference frame I think was shocking. And I'm not really sure what it would mean for light to not be at rest in any reference frame but for it to not have the same speed in all reference frames. I think that would be even more shocking than light having a constant speed.

iirc Maxwell had already shown that the speed of light was constant. Einstein's quest was to explain htf that could be the case. The word Maxwell appears in the first sentence of the 1905 paper, and the first couple of pages are about what today we would call "causality".

We have the data to analyze the probability of a solar system like ours now. The observational limitations can be accounted for, both because we understand what our solar system would look like at a distance, and because there's enough data from systems that don't match ours to be statistically significant.

The answer is that our solar system is exceedingly improbable. It is a top spinning stably for billions of years at a time, despite the general shape of it wanting to "fall down". It has several special characteristics in the configuration of the outer planets that allow it to support what is in fact a very improbable situation: the planets we call "terrestrial" staying in the liquid-water zone for billions of years at a time. This turns out to be very abnormal.

It is "improbable"; it is not "inexplicable". There are a number of good and interesting computer simulations explaining how our solar system can come about. However, it is definitely unusual.

It turns out that there are indeed a lot of planets in the galaxy; however, the vast, vast majority of planetary systems end up completely unsuitable for life, and that is visible even at the range we are observing from now. There are a few buckets they fall into, none of which look like our solar system.

The Copernican principle actually died with exoplanet research. It had a good run, but it turns out that, yes, our Solar System is in fact quite improbable and unusual, and the Milky Way is also improbable and unusual, with its very unusually quiet central black hole not blasting the entire galaxy with life-ending radiation the way the central black holes of almost all known galaxies do. In general most of the galaxies in the universe have vastly, vastly more radiation than the Milky Way. We may not be in the center of our solar system, but it turns out the solar system itself is special, and the galaxy we are in is special.


"Not long ago the consultancy GameDiscoverCo released a study that most people were playing OLD games."

I've long tried not to buy every console, because it gets expensive for no good reason. So as our Switch is aging, I metaphorically poked my head up and put my finger to the wind... and decided our "next console" is the Steam Deck I already owned. And a big part of that decision is new games are frankly not any better than old games. They look better, and that's it, and that often comes at the cost of the real interactivity of the game anyhow.

I wouldn't put a specific date on it, but game tech basically plateaued 10-15 years ago, even if the numbers keep going up. The graphics were good enough, especially if strong art direction knew how to use them. The tech for creating great games was basically all in place, and we got to where having 10 times the polygons just wasn't important anymore. Games are a lot more like movies to me now... I don't sit there looking at "was this movie 2021 or 2023?" as if that's going to indicate an important difference in quality, and games are getting to be that way for me.


Minecraft has the distinction of being the only game I've played with my kids where I still felt like a parent even in the game. It wasn't even just policing "bad behavior"; sometimes I had to run around covering over the ravines so they wouldn't fall in, because otherwise I'd have to go rescue them. Now that they're older, I still sometimes have to go rescue them after they make an ill-considered journey without provisioning well enough in advance.

I'm not complaining per se. It's just interesting that the game is able to have that dynamic in it.

They're older now and it's much less of an issue, but I still mandate that PvP stays off at all times. Though it is a useful lesson in the mechanics of escalation, I suppose.

(Specifically, you always value harm to yourself more than you value harm to others, even your family, so when someone does "1" damage to you, you perceive it as "2", then you try to retaliate with "2" damage but they perceive it as "4" damage for the same reason, and now you're stuck in an escalating loop. There are ways out of this loop, of course, but it takes time. This rather simple model explains a rather distressing amount of international politics and history....)
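
A toy run of that model (numbers purely illustrative) shows how fast the perceived grievances escalate:

    # Each side perceives incoming harm as twice what was actually dealt,
    # and retaliates with the amount it perceived -- the loop described above.
    dealt = 1
    for round_number in range(5):
        perceived = 2 * dealt
        print(f"round {round_number}: dealt {dealt}, perceived {perceived}")
        dealt = perceived            # the retaliation matches the perceived harm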


Knowing a couple of commands, and having cheats switched on, can be helpful for the ravine issue. The /tp command can be used to teleport a player (your kid) to where your player is.
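
For example (Java Edition syntax, with cheats enabled; the player names here are made up):

    /tp Kid1 Dad

teleports the player Kid1 to wherever Dad is currently standing.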

My nephews of course would use this against me. I’d get away into a calm area and they’d teleport me right into the middle of hell.


As AIs improve, they won't even need CSAM or fetish content in their training set. Explaining what they are in a handful of words of normal English is not that difficult. Users would trade prompts freely. As context windows grow, you'll be able to stick more info into them.

And as I like to remind people, LLMs are not "AI", in the sense that they are not the last word in AI. Better is coming. I don't know when; could be next month, could be 15 years, but we're going to get AIs that "know" things in some more direct and less "technically just a very high probability guess" way.


What everyone needs to know about LLMs is that they do not perform objectivity.

An LLM does not work with categories: it stumbles blindly around a graph of tokens that usually happens to align with real semantic structures. It's like a coloring book: we perceive the lines, and the space between them, to be true representation, but that is a feature of human perception: it does not exist on the page itself.


I am reminded of how Postel's Law has fallen from Iron Law of the Internet to somewhat distasteful in the past 5-10 years. Yes, it's no fun to break your clients and customers today. But if you don't do it today, and you don't do it next month, next year, or the next four years, you'll find yourself in a position where you can't do anything anymore because you've basically let Hyrum's Law "build up" in your system until there's no room to move.

Obviously, one should not willy-nilly break customers for no reason. The point is not to create a management metric around Number Of Times We've Broken The Customer and reward the team for getting their numbers up, up, up. The point is to retain flexibility and the ability to move forward, and that is done through intelligent design up front to the extent possible, and by carefully selecting when to break customers over the course of years. It's always expensive and should definitely be treated as an expense, not a benefit on its own terms. But as Hyrum's Law "builds up", its overall expense to the company will, before very long, overwhelm the cost of the occasional breakage needed to stay up to date with the world.
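
As a tiny illustration of how that "build up" happens (hypothetical code, names made up): the function below only promises to return the matching IDs, but once a client starts leaning on the accidental ordering, even a contract-legal implementation change becomes a breaking one in practice.

    def active_user_ids(users):
        """Contract: return the IDs of active users. Ordering is not promised."""
        return sorted(u["id"] for u in users if u["active"])   # sorted only by accident

    # Somewhere far away, a client quietly comes to rely on that accident:
    users = [{"id": 3, "active": True}, {"id": 1, "active": True}]
    first = active_user_ids(users)[0]   # silently assumes "smallest ID comes first"
    # The day the implementation switches to, say, a hash-backed store that
    # returns IDs unsorted, this client breaks -- Hyrum's Law in one file.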


I think it's a lot of little things. There's a lot of people very motivated to keep presenting not just AI in general, but the AI we have in hand right now as the next big thing. We've got literally trillions of dollars of wealth tied up in that being maintained right now. It's a great news article to get eyeballs in an attention economy. The prospect of the monetary savings has the asset-owning class salivating.

But I think a more subtle, harder-to-see aspect, that may well be bigger than all those forces, is a general underestimation of how often the problem is knowing what to do rather than how. "How" factors in, certainly, in various complicated ways. But "what" is the complicated thing.

And I suspect that's what will actually gas out this current AI binge. It isn't just that they don't know "what"... it's that they can in many cases make it harder to learn "what" because the user is so busy with "how". That classic movie quote "Your scientists were so preoccupied with whether they could, they didn't stop to think if they should" may take on a new dimension of meaning in an AI era. You were so concerned with how to do the task and letting the computer do all the thinking you didn't consider whether that's what you should be doing at all.

Also, I'm sure a lot of people will read this as me claiming AI can't learn what to do. Actually, no, I don't claim that. I'm talking about the humans here. Even if AI can get better at "what", if humans get too used to not thinking about it and don't even use the AI tool properly, AI is a long way from being able to fill in that deficit.


The historically-ironic thing to me is that Erlang/BEAM brushed up against the idea and just didn't quite get it. What's important to the properties that Erlang maintains is that actors can't reach out and directly modify other actors' values. You have to send messages. It is sufficient for maintaining this property that you can't send references in messages, and it is sufficient for maintaining that property to simply not have references, which Erlang and BEAM do not. Full immutability is sufficient but not necessary.

Erlang was a hair's breadth away from having mutation contained within the actor's variable space, with no external mutation, which for the time would have been quite revolutionary. Certainly Rust's mutation control is much richer, but Rust came a lot later, and at least based on its current compile performance, wasn't even on the table in the late 1990s.

But the sort of understanding of mutability explained in the original post was not generally understood. Immutability was not a brand new concept chronologically, but if you define the newness of a computer science concept as the integration of its usage over time, it was still pretty new by that metric; it had been bouncing around the literature for a long time but there weren't very many programming languages that used it at the time. (And especially if you prorate "languages" by "how easy it is to write practical programs".)

Elixir does a reasonable job of recovering it from the programmer's perspective, but I think an Erlang/BEAM that just embraced mutability within an actor probably would have done incrementally better in the programming language market.


I think you're right that "interior immutability" of actors isn't really necessary to the programming model that you get from requiring message passing between actors.

However, interior immutability is not without its benefits. It enables a very simple GC. GC is easily done per-actor because each actor has independent, exclusive access to its own memory. But the per-actor GC is very simple because all references necessarily point backwards in time, since there's no way to update a reference. With this, it's very simple to make a copying GC that copies any active references in order; there's no need for loop checking, because loops are structurally impossible.
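
A toy sketch of that idea (not BEAM's actual collector, and it doesn't bother preserving sharing): with immutable, acyclic data, a copying pass can just recurse over each root, and the recursion is guaranteed to terminate without any visited-set or cycle check, because no cell can reference a cell created after it.

    def collect(roots):
        """Copy everything reachable from `roots` into a fresh heap (a plain list)."""
        new_heap = []

        def copy(term):
            if isinstance(term, tuple):                # compound term: copy children first
                new = tuple(copy(child) for child in term)
            else:                                      # immediate value (number, atom, ...)
                new = term
            new_heap.append(new)
            return new

        return [copy(r) for r in roots], new_heap

    live = (1, (2, 3))
    dead = (4, 5)                                      # never copied, so effectively collected
    roots, heap = collect([live])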

I don't know that this was the intent of requiring immutability, but it's a nice result that pops out. Today, maybe you could pull in an advanced GC from somewhere else that already successfully manages mutable data, but these were not always available.

Of course, it should be noted that BEAM isn't entirely immutable. Sometimes it mutates things when it knows it can get away with it; I believe tuples can be updated in some circumstances when it's clear the old tuple will not be used after the new one is created. The process dictionary is directly mutable data. And BIFs, NIFs, and drivers aren't held to strict immutability rules either; ets has interior mutability, for example.


"Of course, it should be noted that BEAM isn't entirely immutable."

Mutability is relative to the layer you're looking at. BEAM is, of course, completely mutable from top to bottom because it is constantly mutating RAM, except, of course, that's not really a helpful way of looking at it, because at the layer of abstraction you program at, values are immutable. Mutable programs can be written in terms of immutable abstractions with a well-known at-most O(n log n) penalty, and immutable programs can be written on a mutable substrate by being very careful never to visibly violate the abstraction of immutability, which is good since there is (effectively for the purposes of this conversation) no such thing as "immutable RAM". (That is, yes, I'm aware of WORM as a category of storage, but it's not what this conversation is about.)
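
As a concrete, deliberately simplified sketch of that "mutable program on an immutable substrate" idea: path copying gives you an "updatable" map where each update builds a new version touching only O(log n) nodes (assuming the tree stays balanced; balancing is omitted here for brevity), which is where the at-most O(n log n) overall penalty comes from.

    class Node:
        __slots__ = ("key", "value", "left", "right")
        def __init__(self, key, value, left=None, right=None):
            self.key, self.value, self.left, self.right = key, value, left, right

    def insert(node, key, value):
        """Return a new tree with `key` set; the old tree is left untouched.
        Only the nodes on the search path are copied."""
        if node is None:
            return Node(key, value)
        if key < node.key:
            return Node(node.key, node.value, insert(node.left, key, value), node.right)
        if key > node.key:
            return Node(node.key, node.value, node.left, insert(node.right, key, value))
        return Node(key, value, node.left, node.right)

    v1 = insert(None, "a", 1)
    v2 = insert(v1, "b", 2)      # v1 is still intact; the two versions share structure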


It took me a long time when implementing Norvig's Sudoku solver to realize those one-way pointers were going to force me to do something very different to implement this code with immutability.

Norvig's secret sauce is having 3 different projections of the same data, and that involves making updates in one part of a graph that are visible from three entry points.

I'm sure there are other solutions but mine didn't settle down until I started treating the three views as 3 sets of cell coordinates instead of lists of cells.


IIRC, Rust's idea of controlled mutability originally came directly from the Erlang idea of immutable messages between tasks. Certainly, in the classic "Project Servo" presentation [0], we can see that "no shared mutable state" refers specifically to sharing between different tasks. I think it was pretty early on in the project that the idea evolved into the fine-grained aliasing rules. Meanwhile, the lightweight tasks stuck around until soon before 1.0 [1], when they were abandoned in the standard library, to be later reintroduced by the async runtimes.

[0] http://venge.net/graydon/talks/intro-talk-2.pdf

[1] https://rust-lang.github.io/rfcs/0230-remove-runtime.html


I have mixed feelings about rust’s async story, but it is really nice having good historical documentation like this.

Thanks for the links!


> What's important to the properties that Erlang maintains is that actors can't reach out and directly modify other actor's values. You have to send messages.

I just cannot make this mental leap for whatever reason.

How does 'directly modify' relate to immutability? (I was sold the lie about using setters in OO a while back, which is also a way to prevent direct modification.)


So, this is something I think we've learned since the 1990s as a community, and, well, it's still not widely understood but: The core reason mutability is bad is not the mutation, it is "unexpected" mutation. I scare quote that, because that word is doing a lot of heavy lifting, and I will not exactly 100% nail down what that means in this post, but bear with me and give me some grace.

From the perspective of "mutability", how dangerous is this Python code?

    x = 1
    x = 2
    print(x)
Normally little snippets like this should be understood as distilled examples of a general trend, but in this case I mean literally three lines. And the answer is, obviously, not at all. At least from the perspective of understanding what is going on. A later programmer reading this probably has questions about why the code is written that way, but the what is well in hand.

As the distance between the two assignments scales up, it becomes progressively more difficult to understand the what. Probably everyone who has been in the field for a few years has at some point encountered the Big Ball Of Mud function, that just goes on and on, assigning to this and assigning to that and rewriting variables with wild abandon. Mutability makes the "what" of such functions harder.

Progressing up, consider:

    x = [1]
    someFunction(x)
    print(x)
In Python, the list is mutable; if someFunction appends to it, it will be mutated. Now to understand the "what" of this code you have to follow in to someFunction. In an immutable language you don't. You still need to know what is coming out of it, of course, but you can look at that code and know it prints "[1]".
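
For instance, whether that print shows [1] or something else depends entirely on what someFunction does with the reference it was handed (the bodies below are made up purely for illustration):

    def someFunction(lst):
        lst.append(2)            # mutates the caller's list in place

    x = [1]
    someFunction(x)
    print(x)                     # [1, 2] -- the caller's data changed out from under it

    def someFunctionPure(lst):
        return lst + [2]         # builds a new list; the argument is untouched

    y = [1]
    z = someFunctionPure(y)
    print(y)                     # [1]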

However, this is still at least all in one process. As code scales up, mutation does make things harder to understand, and it can become hard enough to render the entire code base pathologically difficult to understand, but at least it's not as bad as this next thing.

Concurrency is when mutation just blows up and becomes impossible for humans to deal with. Consider:

    x = [1]
    print(x)
In a concurrent environment where another thread may be mutating x, the answer to the question "what does the print actually print?" is "Well, anything, really." If another thread can reach in and "directly" mutate x, at nondeterministic points in your code's execution, well, my personal assertion is nobody can work that way in practice. How do you work with a programming language where the previous code example could do anything, and it will do it nondeterministically? You can't. You need to do something to contain the mutability.

The Erlang solution is, there is literally no way to express one actor reaching in to another actor's space and changing something. In Python, the x was a mutable reference that could be passed around to multiple threads, and they all could take a crack at mutating it, and they'd all see each other's mutations. In languages with pointers, you can do that by sharing pointers; every thread with a pointer has the ability to write through the pointer and the result is visible to all users. There's no way to do that in Erlang. You can't express "here's the address of this integer" or "here's a reference to this integer" or anything like that. You can only send concrete terms between actors.
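
To sketch the contrast in Python terms (an analogue of the idea, not of how BEAM is actually implemented): each "actor" owns a mailbox and a bit of state only it can touch, handles messages one at a time, and anything sent to it is copied on the way in, so the sender never hands over a live reference.

    import copy
    import queue
    import threading

    class Actor:
        def __init__(self, handler, initial_state):
            self.mailbox = queue.Queue()
            self._handler = handler
            self._state = initial_state
            threading.Thread(target=self._run, daemon=True).start()

        def send(self, msg):
            # Copy on send: the receiver gets its own term, never a shared reference.
            self.mailbox.put(copy.deepcopy(msg))

        def _run(self):
            while True:
                msg = self.mailbox.get()          # messages are handled one at a time
                self._state = self._handler(self._state, msg)

    # Usage: a counter whose state nothing outside the actor can reach directly.
    counter = Actor(lambda n, msg: n + msg, 0)
    counter.send(1)
    counter.send(2)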

Erlang pairs this with all values being immutable. (Elixir, sitting on top of BEAM, also has immutable values; it just allows rebinding variables to soften the inconvenience, but under the hood, everything's still immutable.) But this is overkill. It would be fine for an Erlang actor to be able to do the equivalent of the first example I wrote, as long as nobody else could come in and change the variable unexpectedly before the print runs. Erlang actors tend to end up being relatively small, too, so it isn't even all that hard to avoid having thousands of variables in a single context. A lot of Erlang actors have a dozen or two variables tops, being modified in very stereotypical manners through the gen_* interfaces, so having truly mutable in-actor variables would probably have made the language generally easier to understand and code in.

In the case of OO, the "direct mutation" problem is related to the fact that you don't have these actor barriers within the system, so as a system scales up, this thing "way over there" can end up modifying an object's value, and it becomes very difficult over time to deal with the fact that when you operate that way, the responsibility for maintaining the properties of an object is distributed over the entire program. Technically, though, I wouldn't necessarily chalk this up to "mutability"; even in an immutable environment, distributing responsibility for maintaining an object's properties over the entire program is both possible and a bad idea. You can have well-encapsulated mutation-based objects and poorly-encapsulated immutable values. I'd concede the latter is harder than the former, as the affordances of an imperative system seem to beg you to make that mistake, but it's certainly possible to accidentally distribute responsibilities incorrectly in an immutable system; immutability is certainly not a superset of encapsulation or anything like that. So I'd class that as part of what I mentioned in this post before I mentioned concurrency. The sheer size of a complex mutation-based program can make it too hard to track what is happening where and why.

Once you get used to writing idiomatic Erlang programs, you contain that complexity by writing focused actors. This is more feasible than anyone who hasn't tried thinks, and is one of the big lessons of Erlang that anyone could stand to learn. It is then also relatively easy to take this lesson back to your other programming languages and start writing more self-contained things, either actors running in their own thread, or even "actors" that don't get their own thread but still are much more isolated and don't run on the assumption that they can reach out and directly mutate other things willy-nilly. It can be learned as a lesson on its own, but I think one of the reasons that learning a number of languages to some fluency is helpful is that these sorts of lessons can be learned much more quickly when you work in a language that forces you to work in some way you're not used to.


I've run into something like this when working on embedded systems using OO. You have persistent mutable data, you have an object that encapsulates some data, you have multiple sources of control that each have their own threads that can modify that data, and you have consistency relationships that have to be maintained within that data.

The way you deal with that is, you have the object defend the consistency of the data that it controls. You have some kind of a mutex so that, when some thread is messing with certain data, no other thread can execute functions that mess with that data. They have to wait until the first thread is done, and then they can proceed to do their own operations on that data.

This has the advantage that it puts the data and the protection for the data in the same place. Something "way over there" can still call the function, but it will block until it's safe for it to modify the data.

(You don't put semaphores around all data. You think carefully about which data can be changed by multiple threads, and what consistency relationships that could violate, and you put them where you need to.)

Is that better or worse than Erlang's approach? Both, probably, depending on the details of what you're doing.


That's possibly what I meant by the "actors that don't get their own thread" at the very end. I've switched away from Erlang and write most of my stuff in Go now, and while I use quite a few legit "actors" in Go that have their own goroutine, I also have an awful lot of things that are basically "actors" in that they have what is effectively the same isolation, the same responsibilities, the same essential design, except they don't actually need their own control thread. In Erlang you often just give them one anyhow, because that's the way the entire language, library, and architecture is set up, but in Go I don't have to and I don't. They architecturally have "one big lock around this entire functional module" and sort of "borrow" the running thread of whoever is calling them, while attaining the vast majority of the benefits of an actor in their design and use.

If you have an "actor", that never does anything on its own due to a timer or some other external action, that you never have to have a conversation with but are interacting with strictly with request-response and aren't making the mistake someone discussed here [1], then you can pretty much just do a One Big Lock and call it a day.

I do strictly follow the rule that no bit of code ever has more than one lock taken at a time. The easiest way to deal with the dangers of taking multiple locks is to not. Fortunately I do not deal in a performance space where I have no choice but to take multiple locks for some reason. Though you can get a long way on this rule, and on building in more communication rather than locking.

[1]: https://news.ycombinator.com/item?id=41722440


> You don't put semaphores around all data

You're talking about putting semaphores around code.

Locking data, not code, is a great way to do things. It composes and you don't run into too-many-locks, too-few-locks, forgetting-to-take-a-lock, or deadlocking problems.

https://www.adit.io/posts/2013-05-15-Locks,-Actors,-And-STM-...


> In the case of OO, the "direct mutation" problem is related to the fact that you don't have these actor barriers

Right, the OO guys said to use "encapsulation" rather than direct mutation.

> so as a system scales up, this thing "way over there" can end up modifying an object's value

Can you not send a message way over there?


You're correct that any process can more or less send a message to any other process, but the difference is what guarantees the Erlang runtime provides around that idea.

For example, in Erlang, if I have processes A, B, and C, and B and C both send messages to A at the same time, the runtime guarantees that A processes the messages one at a time, in order, before moving on to the next message (there is some more detail here but it is not important to the point).

The runtime guarantees that from A's perspective, the messages from B and C cannot arrive "simultaneously" and trample on each other. The runtime also guarantees that A cannot process both messages at the same time. It processes the messages one at a time. All code in A is run linearly, single-threaded. The VM takes care of scheduling all of these single-threaded processes to run on the same hardware in parallel.

As other posters have pointed out, the runtime also guarantees that B and C cannot reach in and observe A's raw memory in an uncontrolled fashion (like you could in C, Java, etc.), so B and C cannot observe any intermediate states of A. The only way for B and C to get any information out of A is to send a message to A, and then A can send a reply, if it wants. These replies are just normal messages, so they also obey all of the guarantees I've already described, so A will send the replies one at a time, and they will end up in the mailboxes of B and C for their own processing.

Given all this (and more which I haven't gone into), Erlang doesn't have the concept of a data race where 2 or more threads are concurrently accessing the same memory region, as you might have in, say, the C language (note that this is different from a logical race condition, which Erlang of course can still have).

I hope this is useful, you're asking good questions.

