> That's pretty obviously not going to be viable with any existing technology.
What Musk understands is that technology is a moving target: you shoot for where it will be, not where it is now. Sometimes he's overoptimistic, but a robotaxi seems well timed.
Have you seen the GPT-4o videos? I don't know how anyone can watch those and believe that vision e2e self-driving vehicles are somehow a long way off.
It takes years to bring a new vehicle type into production. When the Cybertruck was announced, the SOTA in language models was GPT-2. Now we have multimodal conversational models with vision.
By the time the robotaxi rolls off the production line, the software will be ready. And if the software is ready, then the model of owning cars will be disrupted for many people living in dense urban environments.
The real risk is in not preparing for that eventuality, which is the same type of unpreparedness risk that the legacy launch industry is facing with Starship.
IANAP but it seems that fundamental physics suffers from a lack of monotonicity of knowledge. Although physics does its best to explain things, those explanations are more like guesses than known facts. A theoretical physicist can have their life's work undone simply because someone else comes up with a better guess, or experiment says no. You spend your life working on SUSY and then... nope. Even very established knowledge can be overturned.
People will say "that's science" and indeed that's fundamental physics, but other fields don't really work like that.
In chemistry and biology, certainty isn't in such short supply. Nobody is asking "but is DNA a double helix?" Researchers take a problem, they attack it, then they publish the results, it gets replicated (or not), and the set of knowledge grows.
Mathematics is more similar to chemistry and biology insofar as mathematical knowledge takes the form of an ever-growing set of proven facts generated by research. Take a problem, prove a result, other mathematicians check the proof, and the set of knowledge grows.
Fundamental physics has issues because the "check" stage now often costs millions or billions of dollars (build a particle accelerator, neutrino detector, gravitational wave detector, satellite, etc), and even then it might not give a definitive answer. Just look at the g-2 situation: they notice a discrepancy, they spend millions of dollars trying to determine whether this single discrepancy is real, and then someone publishes a paper saying "haha, I recalculated it, you just wasted your time".
Not a criticism of fundamental physics because clearly that's just how it is. I'd rather have guesses than ignorance. The gravitational wave research seems to be doing okay at least.
g-2. g minus 2. g is the electron's g-factor, the dimensionless number relating its magnetic moment to its spin. It is expected to be very close to 2 (exactly 2 in Dirac's theory, with small corrections from QED). g minus 2 is a value that can be measured, and that can be calculated, both very precisely.
If I understand the current situation, for electrons g-2 agrees between experiment and theoretical calculation to about 10 digits. For muons, though, it doesn't. (Muons are harder to measure, because they decay. And they are somewhat less well understood theoretically, so there's room on both sides of that question.)
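For a concrete sense of the numbers (my own gloss, not the parent's): what actually gets compared is the anomalous magnetic moment a, and its leading QED correction (α is the fine-structure constant) already shows why "very close to 2" is the expectation:

    a \equiv \frac{g - 2}{2} \approx \frac{\alpha}{2\pi} \approx 0.00116

The muon discrepancy people argue about sits many digits further down than that leading term, which is why both the measurement and the calculation have to be so precise.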
> If you allow arbitrary values, what's the difference between a record and a frozen object?
The behaviour of equality. Frozen objects are already considered to have unique identities, in that `Object.freeze({}) !== Object.freeze({})` even though both objects are otherwise indistinguishable. This behaviour can't be changed and it relates to the fact that `Object.freeze(a) === a`.
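A minimal sketch of that identity behaviour in today's JavaScript (nothing proposal-specific here):

    const a = Object.freeze({ x: 1 });
    const b = Object.freeze({ x: 1 });
    a === b;                 // false: each frozen object keeps its own identity
    Object.freeze(a) === a;  // true: freeze returns the very same object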
> I thought that the whole point is to have guaranteed deep immutability
Not really. The whole point, according to most people[0], is to have composite values that don't have unique identities, so they fit in with all the existing comparison operations (eg, `===`, `Map`, `indexOf`, `includes`), just as strings do.
Immutability is a prerequisite for this: if `a` and `b` were mutable, mutating `a` could have a different observable effect than mutating `b`, so they couldn't be treated as the same value. Thinking again about strings, equality works because strings are immutable:
    const foo = "foo", bar = "bar";
    const a = foo + bar;
    const b = foo + bar;
    a === b; // true
Implementations will typically use different underlying memory allocations for these strings[1], but at a language level they are considered to be the same value. If it were possible to modify one of the strings (but not the other) using `a[0] = "x";` it would mean `a` and `b` are not equivalent so should not be considered equal.
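To see why mutability rules this out, contrast with plain arrays, which do have unique identities today (just a sketch in current JavaScript):

    const p = [1, 2];
    const q = [1, 2];
    p === q;   // false: each array has its own identity
    p[0] = 9;  // mutating p is observable and q is unaffected,
               // so the two can never be treated as one value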
As explained here[2], deep immutability is not necessary for this behaviour.
In my opinion guaranteed "deep immutability" is not generally useful/meaningful (if you have a particular use case, feel free to share it). In theory it's not possible to enforce "deep immutability" because someone can always refer to something mutable, whether that's an object reference or a number indexing a mutable array.
If you really do want something that guarantees a certain notion of "deep immutability", this concept seems somewhat orthogonal to records/tuples, since there are existing values (eg, strings and numbers) that should be considered deeply immutable, so you'd expect to have a separate predicate[3][4] for detecting this, which would be able to effectively search a given value for object references.
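If you did want such a predicate, a rough sketch over today's values might look like this (the name `isDeeplyImmutable` is mine, not from any proposal; a real version would also recurse into records/tuples, which can't be expressed in current JavaScript):

    // Primitives (plus null/undefined) carry no object references, so they
    // are deeply immutable; any object or function reference disqualifies.
    function isDeeplyImmutable(value) {
      return value === null ||
        (typeof value !== "object" && typeof value !== "function");
    }

    isDeeplyImmutable("foo"); // true
    isDeeplyImmutable(42);    // true
    isDeeplyImmutable({});    // false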
In case you're interested I tried to summarise the logic behind the rejection of this behaviour[5] (which I disagree with), but it's very much a TLDR so further reading of linked issues would be required to understand the points made. Interestingly, this post is on an issue raised by the odd person that actually tried to use the feature and naturally ran into this restriction.
Sorry for this massive wall of text, but I think it's hard to capture the various trains of thought concisely.
Thanks for the history! Reading through the issues, I agree with you that some of the motivations against objects in records seem pretty strange. Mostly they seem to be around existing JS-written 'membranes' (related to the SES stuff mentioned above?) getting confused by primitives-containing-objects, depending on which permutation of typeof checks they use. Out of curiosity, do you think that the Shadow Realms proposal they refer to will ever go anywhere?
Otherwise, there's the argument that "x.y" syntax shan't be used to access a mutable object from an immutable record, but that just feels like the all-too-common motive of "we must ensure that users write morally-correct code (given our weird idiosyncratic idea of moral correctness), or otherwise make them pay the price for their sins".
> Out of curiosity, do you think that the Shadow Realms proposal they refer to will ever go anywhere?
I haven't really been following the Shadow Realm proposal (I'm not part of TC39, so only familiar with certain proposals), but I don't think it should conflict with R/T.
If R/T values are allowed to be passed between realms, they should effectively be "transformed" such that eg, `f(#[v])` is equivalent to `f(#[f(v)])` (where `f` is the transformation that allows values to be passed between realms). For "deeply immutable" values (no object references), `f(v)` will simply return `v` (eg, `#[42]`, `f(#[42])` and `f(#[f(42)])` are all the same) and a membrane should be able to trivially optimise this case.
From this comment[0] it sounds like `f({})` in the current Shadow Realm proposal will throw an error, so I'd expect that `f(#[{}])` would also throw an error.
As you were pointing out, I think the only real contention between R/T and realms is in existing JS implementations of membranes, particularly because they might use the following condition to detect if something is "deeply immutable":
    v === null || typeof v !== "object" && typeof v !== "function"
If `typeof #[{}] === "tuple"`, then their `f` function will pass that value straight through, without handling the contained object value (either by throwing or by creating/finding a proxy).
If `typeof #[{}] === "object"`, it should be fine because `f(#[{}])` will either throw or create/find a proxy for the tuple. There might be some unexpected behaviour around equality of R/T values passed through the membrane, but this is pretty obscure and it should be fixed once the membrane library is updated to handle R/T values.
Personally, I'm still not 100% convinced that the assumptions made from the above condition are important enough to cause such a change to the proposal, but I don't see the value of `typeof #[]` as being a usability issue. Code that needs to check the types of things is a bit smelly to me, but in cases where you do need to check the type, `typeof v === "tuple"` and `Tuple.isTuple(v)` both seem usable to me, so just making `typeof #[] === "object"` should be fine and it solves this hypothetical issue. This is similar to array objects, which are also fundamentally special (`Object.create(Array.prototype)` is not an array object) and are detected using `Array.isArray(v)`.
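The array analogy in concrete terms (plain JavaScript, nothing hypothetical):

    const realArray = [];
    const fakeArray = Object.create(Array.prototype);

    typeof realArray;          // "object": typeof can't single arrays out
    typeof fakeArray;          // "object"
    Array.isArray(realArray);  // true
    Array.isArray(fakeArray);  // false: the prototype alone doesn't make an array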
> Otherwise, there's the argument that "x.y" syntax shan't be used to access a mutable object from an immutable record, but that just feels like the all-too-common motive of "we must ensure that users write morally-correct code (given our weird idiosyncratic idea of moral correctness), or otherwise make them pay the price for their sins".
Agreed, and I've pointed out[1] that even the current proposal doesn't address this, since unless you've done some defensive check on `x`, there's nothing stopping someone passing a mutable object for `x` instead of a record. If you do want to perform a dynamic[2] defensive check, perhaps you should be asking "is it deeply immutable?" or even checking its shape rather than "is it a record?".
[2] If you're using a type system like TypeScript, this check should happen statically, because you'll use a type that specifies both that it's a record and the types of the properties within it, so your type will encode whether or not it contains mutable objects.
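Concretely, a shape-based dynamic check as suggested above might look something like this sketch (`validatePoint` is a made-up name, not from the proposal):

    // Check the shape of the value rather than asking "is it a record?".
    function validatePoint(p) {
      if (p == null || typeof p.x !== "number" || typeof p.y !== "number") {
        throw new TypeError("expected a value shaped like { x: number, y: number }");
      }
    }

    validatePoint({ x: 1, y: 2 }); // ok, even though it's a plain mutable object
    // validatePoint({ x: 1 });    // throws TypeError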
I thought the German language deprecated the use of ß years ago, no? I learned German for a year and that's what the teacher told us, but maybe it's not the whole story.
Se fareblus oni, jam farintus oni. [Roughly: if it could be done, it would already have been done.] (It definitely won't happen on an echo-change day like today, either. ;))
Contra my comrade's comment, Esperanto orthography is firmly European, and so retains European-style casing distinctions; every sound thus still has two letters -- or at least two codepoints.
(There aren't any eszettesque bigraphs, but that's not saying much.)
> An other baffling design I've encountered in the UK is a roundabout with traffic lights half-way through.. Wasn't the concept based on removing traffic lights to fluidify traffic..?
This is indeed a weird one from a US perspective.
The way to think about it is not as a roundabout with traffic lights, but as a light-controlled intersection in the shape of a roundabout.
A roundabout-shaped intersection can handle more variations than a normal intersection: you can have more than four roads, or roads entering at odd angles.
As for what the advantage is of having lights on the roundabout as opposed to on the approach, I have no idea.
I wouldn't be so sure. AI can ingest far more information about humans than a human ever could. It has read our stories and understands our languages. AI might have more to say about humans than we do ourselves.
Of course AI can never truly experience being human, it has no emotions, but it is excellent at mimicry and it can certainly provide a meaningful outside perspective.
Is there anything to say about humanity that is not in the training corpus already?
Every new novel of any merit shows that there is. And the world keeps changing. The experience of being human keeps changing.
Nothing AI has yet done has demonstrated anything at the level of art or mastery. I guess I'm unconvinced that throwing a million stories into the blender and synthesizing is going to produce a compelling one.
Maybe people with good story literacy and cultural comprehension will be able to tell the difference for much longer, maybe even indefinitely. But at some point the majority of people won't, and I dread that includes me. I've already fallen for some AI-generated music and thought "hey, that sounds pretty good, I'll bookmark it". It's genuinely scary.
I agree and people are missing the bigger issue here.
Energy prices are an existential issue for brick-and-mortar businesses. Restaurants and cafes in particular struggle to pay their energy bills: even if you have no customers, you still have to heat and cool your premises, otherwise you will definitely have no customers.
There is potential for economic collapse if energy prices spike further from here, because many of these small businesses would become uneconomical and shut, leading to mass unemployment.
In many parts of the UK this scenario is already a reality, but nobody will take notice until it happens in London.
Lean (and iirc Mathematica) use backslash escapes: you type \symbolname and the symbol is inserted by your editor.
You can also imbue the backslash escape sequence with the same meaning as the unicode, so that in the event that the editor didn’t make this replacement it would still mean the same thing.
Julia also. Mathematica uses literal "escape" sequences, i.e. you start a symbol by pressing the Esc key and finish with another Esc (aside from a bunch of bindings for commonly used things).
Mauritius could decide to incorporate it as "Mauritius Indian Ocean Territory", hence maintaining the CC. I expect .io owners will suggest something like that, while showing them how much money they could get from a 10-15% deal similar to what Tuvalu has for .tv. Nobody likes to burn money.