Hacker News | nyssos's comments

> I ask because a circle around a cusp in a sheet will get “caught” on the infinite spike; and a circle not around the cusp will end up with weird deformations if it comes close. That suggests there’s something “topological” about such a cusp, akin to there being a hole.

Your intuition is right: this is an example of a topological defect, but it's not directly related to circles having the "wrong" sizes. It's topological in that it's a property of how your sheet is connected, and it can't be changed by continuous deformations. Black holes are not topological defects in the same way, because they're not really "infinite spikes".


Loosely speaking, it's a structure that can't be continuously transformed away. For example, consider a (frictionless, non-dissipative, etc.) fluid flowing in a circle around a given point: a vortex. Since this has net angular momentum, and a fluid moving uniformly in one direction doesn't, it's impossible for the vortex to dissipate. We can move it around, or split it up, or fuse it with other vortices, but we can't just smooth it out.

> But the reason we need markets in the first place is that governments aren't able to discover the right levels of resource allocation on their own.

Markets don't discover "the right" (i.e. social-welfare-maximizing) resource allocation; they discover, under ideal conditions, a Pareto-optimal allocation. This is a very weak condition: "Jeff Bezos owns everything" is Pareto-optimal. You need redistribution if you want to turn that into a socially optimal one.

In realistic conditions, they don't even do that: market mechanisms will reliably underinvest in things with positive externalities, and the larger the externality the greater the degree of misallocation. The positive externalities of research are enormous compared to the value captured by the researchers.


In reality, it is in non-market economies that you have a few owning everything: North Korea (the Kim family), USSR (the party), etc.

No market system anywhere ends up with one person owning everything. That is the standard under non-market systems, though.

> The positive externalities of research

... are only relevant if it's actually research, and not something that looks a lot like research without actually being so.


> my key point there was simply to show that the magnetic field isn't really necessary - I wanted to show that it's all part of relativistic contractions made by the electric field.

This isn't quite right: there are field configurations where the magnetic field doesn't vanish in any reference frame. This is actually the typical case: consider, for instance, two point charges moving relative to one another.

The right takeaway from SR isn't that the magnetic field is fake and the electric field is real, it's that both magnetic and electric fields are frame-dependent and it's the electromagnetic field tensor that's the real physical object.
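One way to see why no boost can remove the magnetic field in general: the field tensor has two Lorentz invariants, and either one can obstruct it. Schematically, in Gaussian-style units with c = 1 (sign and factor conventions vary with metric signature):

```latex
% Lorentz invariants of the electromagnetic field tensor F^{\mu\nu}
F_{\mu\nu}F^{\mu\nu} = 2\left(B^{2} - E^{2}\right),
\qquad
\tfrac{1}{4}\,\epsilon^{\mu\nu\rho\sigma}F_{\mu\nu}F_{\rho\sigma} \;\propto\; \vec{E}\cdot\vec{B}
```

If $\vec{E}\cdot\vec{B} \neq 0$, or $B^{2} > E^{2}$, no frame with $\vec{B} = 0$ can reproduce these invariants, so no boost eliminates the magnetic field.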


You're 100% correct. The explanations I was reading gave me the impression that we can get rid of the magnetic field, but this only works in special cases. Right now I'm reviewing my notes and realizing that, yup, they're both needed, and I'll need to modify my explanation there or simply get rid of it entirely and keep things simple. Once again, thank you for the correction. You're a life saver; this is definitely something I overlooked, and I wish I had more time to explain properly!

It absolutely does not. "Interact with each other only gravitationally" has its plain and ordinary meaning: we're ignoring other interactions. No charge, no collisions, no radiation, etc.

QFTs are effective field theories: they describe interactions at a particular energy scale, smoothing over whatever unknown physics is going on at shorter (or sometimes longer) length scales. This makes the strengths of interactions scale-dependent. So in QED, for instance, the fine-structure "constant" is actually a few percent larger at the GeV scale than it is at zero energy.

Fortunately for us, we only need to measure the fine-structure constant at finitely many energies to calculate its value at any energy scale we like. But this is just because QED happens to be well-behaved. When we try to apply the same techniques to the gravitational constant, we end up with infinitely many free parameters, for reasons which are too technical to explain here. Any intro QFT textbook will probably have a rough heuristic argument (I used Srednicki), but we don't actually know with certainty that there's no clever way to get all those higher-order terms to cancel; we just haven't found one and have no reason to expect one.
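To give a feel for the scale dependence: the standard one-loop (leading-log) running of the QED coupling, quoted as an illustration rather than an exact statement:

```latex
\alpha(Q^{2}) \;\approx\;
\frac{\alpha(0)}{\,1 - \dfrac{\alpha(0)}{3\pi}\,\ln\!\dfrac{Q^{2}}{m_{e}^{2}}\,},
\qquad Q^{2} \gg m_{e}^{2}
```

The point is that the entire $Q$-dependence is fixed by the single measured number $\alpha(0)$; in the gravitational case there is no finite set of measurements that plays this role.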


This is one of the most eloquent explanations I’ve read. Thank you

Do you have any links to online resources that might go slightly more into the math and the free parameters issue? Without getting too technical with the math too quickly?


> Do C# arrays have a Sum method? I don't think so.

It's not literally a method (it's a LINQ extension method, not something in the vtable of some array class), but as far as their API is concerned, yeah, they do: https://github.com/dotnet/runtime/blob/main/src/libraries/Sy...


> Correct specification is as hard as correct programming

A complete specification, sure, but that's not really the goal. The things we want to prove are generally just a few key properties: this function always terminates, that one always returns a sorted array, etc. And once you can do that you can impose these things as preconditions: e.g. you must demonstrate that an array has been pre-sorted before you pass it to this function.
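Even without a proof assistant you can get a weak version of this discipline from an ordinary type checker. A sketch in TypeScript using a "branded" type (all names here are hypothetical, and unlike a real proof, the brand is asserted rather than verified):

```typescript
// A "branded" type: structurally a number[], but the brand can only be
// attached by code that actually establishes the sortedness invariant.
type Sorted = number[] & { readonly __sorted: unique symbol };

// The one trusted place where the invariant is established.
function sortAsc(xs: number[]): Sorted {
  return [...xs].sort((a, b) => a - b) as Sorted;
}

// The precondition "input is sorted" is now enforced by the type checker:
// binarySearch cannot be handed a plain number[].
function binarySearch(xs: Sorted, target: number): number {
  let lo = 0, hi = xs.length - 1;
  while (lo <= hi) {
    const mid = (lo + hi) >> 1;
    if (xs[mid] === target) return mid;
    if (xs[mid] < target) lo = mid + 1;
    else hi = mid - 1;
  }
  return -1; // not found
}
```

The difference from real formal methods is that the cast inside `sortAsc` is trusted, not proven; the type system only guarantees that every `Sorted` value passed through that one function.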


Yes.

I am not against all formal methods: pre- and post-conditions for loops, et cetera. When they are useful they should be used (in many cases they are built in: `for i = k; i < l; i++` has a precondition, a loop condition, and an action all in one statement).

What I am against is the idea, very hot thirty or so years ago, that by using formal specifications and rules of inference we could eliminate bugs.


Sum types are disjoint unions. The `T` defined below has three cases

     L = { tag: "a", payload: string } | { tag: "b", payload: number }
     R = { tag: "b", payload: number } | { tag: "c", payload: boolean }
     T = L | R  
whereas a proper sum type `L + R` would have four.
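Spelled out in TypeScript (the `Sum` wrapper is one way, with hypothetical names, to recover the fourth case):

```typescript
type L = { tag: "a"; payload: string } | { tag: "b"; payload: number };
type R = { tag: "b"; payload: number } | { tag: "c"; payload: boolean };

// Union: the two "b" members are the same type, so they merge.
// T has three cases: "a", "b", "c".
type T = L | R;

// A sum keeps the sides apart with an extra discriminant.
// Sum has four cases, including both a left-"b" and a right-"b".
type Sum = { side: "left"; value: L } | { side: "right"; value: R };

const leftB: Sum = { side: "left", value: { tag: "b", payload: 1 } };
const rightB: Sum = { side: "right", value: { tag: "b", payload: 1 } };
```

With the plain union `T`, there is no way to write down two distinct values corresponding to `leftB` and `rightB`; the distinction simply doesn't exist.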


Isn't that a completely useless distinction?

For all intents and purposes, the "b" type in L and R should be treated the same, no? What do you gain by not doing that?


This often comes up when writing a function which returns a wrapper over a generic type (like Option<T>). If your Option type is T | null, then there's no way to distinguish between a null returned by the function and a null that is part of T.

As a concrete example, consider a map with a method get(key: K) -> Option<V>. How do you tell the difference between a missing key and a key which contains `null` as a value?
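JavaScript's built-in Map is exactly this situation: `get` returns `V | undefined`, so storing `undefined` as a value collapses the two cases:

```typescript
// V is string, but callers are allowed to store undefined as a value.
const m = new Map<string, string | undefined>();
m.set("present", undefined);

// Both lookups return undefined: the "missing key" case has collapsed
// into the "stored value happens to be undefined" case.
const a = m.get("present"); // undefined
const b = m.get("absent");  // undefined

// has() is the extra, out-of-band discriminant Map must provide to
// recover the distinction a real Option<V> return type would have kept.
console.log(m.has("present"), m.has("absent")); // true false
```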


This is trivial to model by making your type `T | null | Missing`.


Or just using Option since you would have Some<null> or None in that case.


Maybe trivial to “work around” but there is a difference, ay?

With this type you would have to check/match an extra case!

The type you use there also takes more memory than Option<T> or Maybe<T>. So it has some other downsides.


Only if you're designing both functions ahead of time. In other words, it's not composable.


`T | null` is equivalent to `T`. You can assign null to `T`.

It's like saying `string | "foo"`: it is simply `string`, due to subtyping.


... no? Unless you're referring to null as a bottom type, then that doesn't hold. Are you describing some property of a specific language?


No, it isn't "completely useless".

If you have a function that will normally return a string, but can sometimes fail due to reasons, you may wish to yield an error message in the latter case. So you're going to be returning a string, or a string.

It's not what the content of the data is; it's how you're supposed to interpret it. You have two cases, success and failure, and control will flow differently depending on that, not based strictly on the type of data at hand. We just model those cases in a type.


Why would you return `string | string` here? Wouldn't you explicitly mark the error, and return `string | error`? (substitute error with whatever type you want there - null, Error, or your own thing)


> substitute error with whatever type you want there - null, Error, or your own thing

And if I want `error` to be `string` so I can just provide an error message? Proper disjoint sum types let me keep the two sides disjoint, even if they happen to be the same type. Union types will collapse on any overlap, so if I happen to instantiate `error` at `string` then I can no longer tell my error case from my data case.
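A TypeScript sketch of the collapse (`Result` and `parseGreeting` are illustrative names, not a library):

```typescript
// Union version: fine until the error type overlaps the success type.
type UnionOutcome<T> = T | string; // string carries the error message
type Collapsed = UnionOutcome<string>; // just string: the two cases are gone

// Tagged sum version: the discriminant survives even when T = string.
type Result<T> = { ok: true; value: T } | { ok: false; error: string };

function parseGreeting(s: string): Result<string> {
  return s.startsWith("hello")
    ? { ok: true, value: s.slice(5).trim() }
    : { ok: false, error: "not a greeting: " + s };
}
```

With `Collapsed`, a caller receiving a string cannot tell data from error message; with `Result<string>`, the `ok` tag keeps the cases apart no matter what `T` is instantiated at.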


No disrespect, but that still sounds entirely useless to me. I would never model something as `String | String` as that makes zero sense. You should use a `Result` or `Either` type for that like everyone does.


> You should use a `Result` or `Either` type for that like everyone does.

You have missed the thread context, which is whether `Either a a` (also written `a + a`) has any merits over simply `a` (which is identical to `a | a`). If you're on the `Either` train already, we are arguing over imaginary beef.

> No disrespect, but that still sounds entirely useless to me.

It is disrespectful to say something "makes zero sense", regardless of anything you might say to the contrary. You've misrepresented my point: nobody wants to model something as `String | String`.

If you have, say, an `Int | Bool`, and you pass both sides through some kind of stringification function, you're naturally going to get `String | String` without ever having written that type down yourself. You wouldn't necessarily want to collapse that to `String`, however, because you may -- for instance -- want to give strings and ints different decorations around them before finally flattening them. You might write such a function as something like

    (ib) => ib
      .mapBoth(showInt, showBool)
      .visit(prepend("int: "), prepend("bool: "));
You couldn't run this if the result of the `mapBoth` had type `String | String`: that type is indistinguishable from `String`, and since you can't tell which case you're in, you wouldn't know which tag to prepend.

You could write the same function without passing through `String | String`:

    (ib) => ib.visit(
      sequence(showInt, prepend("int: ")),
      sequence(showBool, prepend("bool: ")));
And yes, in this especially contrived example, perhaps you may find that to make more sense anyway. But in longer pipelines, sometimes separated over multiple functions, often involving generic code, it becomes much harder to simply always fuse steps in this manner. This is why we say sum types (e.g. `String + String`) compose better: they don't behave any differently depending on whether the two sides are the same or not. You have explicit control over when the two sides rejoin.
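For concreteness, here is one minimal way the `Either` used above could be implemented in TypeScript. The class and method names just mirror the snippets; this is a sketch, not a standard library:

```typescript
// A tagged sum that never collapses, even when A and B are the same type.
class Either<A, B> {
  private constructor(
    private readonly side: "left" | "right",
    private readonly a?: A,
    private readonly b?: B,
  ) {}

  static left<A, B>(a: A): Either<A, B> {
    return new Either<A, B>("left", a);
  }
  static right<A, B>(b: B): Either<A, B> {
    return new Either<A, B>("right", undefined, b);
  }

  // Transform each side independently; the tag is preserved.
  mapBoth<C, D>(f: (a: A) => C, g: (b: B) => D): Either<C, D> {
    return this.side === "left"
      ? Either.left<C, D>(f(this.a as A))
      : Either.right<C, D>(g(this.b as B));
  }

  // Collapse the two sides into a single value.
  visit<R>(f: (a: A) => R, g: (b: B) => R): R {
    return this.side === "left" ? f(this.a as A) : g(this.b as B);
  }
}

const prepend = (prefix: string) => (s: string) => prefix + s;

// The pipeline from the comment above: Either<number, boolean> passes
// through Either<string, string> without losing track of its side.
const label = (ib: Either<number, boolean>) =>
  ib.mapBoth(String, String).visit(prepend("int: "), prepend("bool: "));
```

Note that after `mapBoth`, the intermediate value has type `Either<string, string>`, and the `side` field still distinguishes the cases — which is precisely what the union `string | string` cannot do.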


As you've demonstrated, you can always construct sum types in TypeScript with the use of explicit discriminants:

    T = {tag: "L", payload: L} | {tag: "R", payload: R}
The real issue is that TypeScript doesn't have pattern matching, which makes operating on these sum types inelegant.
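The closest TypeScript gets is narrowing through `switch` on the discriminant, plus a `never`-typed default for exhaustiveness checking (the `Shape`/`area` names are illustrative):

```typescript
// A discriminated union; `tag` is the discriminant TypeScript narrows on.
type Shape =
  | { tag: "circle"; radius: number }
  | { tag: "rect"; width: number; height: number };

// Each case narrows `s` to the matching member; the `never` default
// turns a forgotten case into a compile-time error.
function area(s: Shape): number {
  switch (s.tag) {
    case "circle":
      return Math.PI * s.radius ** 2;
    case "rect":
      return s.width * s.height;
    default: {
      const exhaustive: never = s;
      return exhaustive;
    }
  }
}
```

It works, but compared to real pattern matching it's statement-oriented, doesn't nest, and can't bind sub-patterns — hence "inelegant".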


The problem is that you're forced to have four possible states:

1. err != nil, nondefault return value

2. err != nil, default return value

3. err == nil, nondefault return value

4. err == nil, default return value

when often what you want to express only has two: either you return an error and there's no meaningful output, or there's output and no error. A type system with tuples but no sum types can only express "and", not "or".
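The counting argument, sketched in TypeScript terms (names illustrative):

```typescript
// Product ("and"): value and err vary independently, so all four
// combinations from the list above are representable, including the
// confusing "non-default value alongside a non-nil error" state.
type GoStyle = [value: string, err: Error | null];
const confusing: GoStyle = ["partial result", new Error("boom")]; // state 1

// Sum ("or"): only the two meaningful states can even be constructed.
type Outcome =
  | { ok: true; value: string }
  | { ok: false; err: Error };
```

With the sum, the two unwanted states aren't merely discouraged by convention; they are unrepresentable.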


This is true, but not a problem. Go's pattern of checking the error on every return means that if an error is returned, that is the return. Allowing routines to return a result as well as an error is occasionally useful.


I mean, I wish Go had sum types, but this really isn’t a problem in practice. Every Go programmer understands from day 0 that you don’t touch the value unless the error is nil or the documentation states otherwise. Sum types would be nice for other things though, and if it gets them eventually it would feel a little silly to continue using product types for error handling (but also it would be silly to have a mix of both :/).

