
What happened to the Grand Inga Dam project? I thought that's what this story is about.


What's wrong with just having a table that rolls? Why does it have to walk? Cool project, though it sounds like a solution in search of a problem.


No, I think it's an engineer in search of a fun challenge


> mainly attenuated

> may reflect

So the study acknowledges that this is all not yet settled. The (potentially damaging) effect of excessive exercise may be stronger than what the study found, or it may be neutral (i.e., not actually extending lifespan, but not damaging it either).

It remains to be seen what further studies reveal, but for now the blog is not far off in speculating about a potentially damaging effect.


True, but if exercise is actually aging you faster, then it is only a matter of time until your quality of life is prematurely compromised, no?


Not necessarily. It is possible to age gracefully and simply die of old age.

The idea that old === lack of quality of life is a narrative written by Food Inc and Big Pharma. They want us to believe the results of their "offerings" are normal. That's simply not the case.


Aging is more than simply losing your looks.

The aging process affects every system of the body: hearts get weaker, immune systems get weaker, eyesight declines, aches and pains abound, etc.

Weaker systems and organs are more disposed to experience an acute adverse event. Even if you do not have an acute health crisis, having rapidly declining eyesight, for example, will definitely impact your quality of life.

I agree that it is possible to age gracefully, but I would say that is a function of having a mostly linear, slow aging process, and excessive exercise may negatively impact that.


Yes of course. But aging is a natural process. Diabetes (due to obesity, a shite diet, being too sedentary, etc.) is not normal. You're not going to age gracefully with diabetes or any other Western diet/lifestyle-related disease.

It is possible to age gracefully. Getting older doesn't mean becoming a monthly health crisis. Extra weight is not ok (read: acceptable) if eventually you're going to lose muscle strength, balance, etc.

My point is, the risk of exercising too much should not be feared as much as not exercising enough, eating poorly, etc. The number of ppl exercising too much is an edge case of an edge case. This article is mostly news-media hyperbole.


Couldn't agree more!



Thank you!

> Being active may reflect a healthy phenotype instead of causally reducing mortality.

So the people who are healthier do more exercise and also die older, but not causally


Good, I also used to think storing timestamps in UTC was sufficient. The examples really explained the problems with that well. One other issue to note is that, if you store dates in UTC, now the fact that they are in UTC is a tacit assumption, probably not recorded anywhere except in organizational memory.

So the new API will be a very welcome addition to the JS standard library.
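
For what it's worth, the new API keeps the time zone attached to the value instead of leaving it as a convention. A minimal sketch, assuming a runtime or polyfill that ships Temporal:

    // The zone travels with the value rather than being a tacit assumption:
    const t = Temporal.ZonedDateTime.from("2030-01-15T09:00[Asia/Tokyo]");
    t.toString();                      // "2030-01-15T09:00:00+09:00[Asia/Tokyo]"
    t.withTimeZone("UTC").toString();  // "2030-01-15T00:00:00+00:00[UTC]"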

However, and this is probably a bit off-topic, why are the ECMAScript guys not also addressing the fundamental reasons why JS continues to be somewhat of a joke of a language?

- Where are decimals, to avoid things like `parseInt(0.00000051) === 5`? (A quick demo of why this happens is below.)

- Why are inbuilt globals allowed to be modified? The other day a 3rd-party lib I was using modified the global String class, and hence when I attempted to extend it with some new methods, it clashed with that modification, and it took me an hour or two to figure out (in TS, no less).

- Why can't we have basic types? Error/null handling is still haphazard. Shouldn't Result and Maybe types be a good start towards addressing that?
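
Here's that parseInt demo (REPL-style, purely illustrative):

    // parseInt coerces its argument to a string first, and very small
    // numbers stringify in exponential notation:
    (0.00000051).toString();   // "5.1e-7"
    parseInt(0.00000051);      // 5 -- parsing stops at the "."
    Math.trunc(0.00000051);    // 0 -- one way to stay numeric instead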


Being able to modify built-in types is extremely useful in practice. This allows you to backport/"polyfill" newer features, like for example improvements to Intl or Temporal to older runtimes. If all the built-in types were locked down, we'd end up using those built-in types less because we'd more frequently need to use userspace libraries for the same things.

Like, you are asking for a Result type. If this was added to the spec tomorrow and existing objects were updated with new methods that return Result, and you can't modify built-in types, then you can't actually use the shiny new feature for years.

On the specific point of the Maybe type, I think it's not very useful for a dynamically typed language like JS to have a Maybe. If there's no compiler to check method calls, it's just as broken to accidentally call `someVar.doThingy()` when `someVar` is null (the classic null pointer error) as it is to call it when someVar is Some<Stuff> or None; either way you get "method doThingy does not exist on type Some<...>" or "method doThingy does not exist on type None".
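
A tiny sketch of that point (the None encoding below is hypothetical, just to show that both failures only surface at runtime):

    // Plain null: the classic null pointer error
    const someVar = null;
    someVar.doThingy();                  // TypeError: Cannot read properties of null
    // A hand-rolled None wrapper fails in essentially the same way
    const maybeVar = { kind: "none" };
    maybeVar.doThingy();                 // TypeError: maybeVar.doThingy is not a function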


> Like, you are asking for a Result type. If this was added to the spec tomorrow and existing objects were updated with new methods that return Result, and you can't modify built-in types, then you can't actually use the shiny new feature for years.

And that's a good thing, because you know that with a specific version of JS, the inbuilts are fixed; no mucking around to know what exactly you have, and no global conflicts. I find it surprising that you would defend this insane state of affairs. If you have worked on really large JS projects, you would see my point immediately.

It is like saying the immutability of functional programming is a bad thing because it limits you. The immutability is the point. It protects you from entire classes of errors and confusion.

> If all the built-in types were locked down, we'd end up using those built-in types less because we'd more frequently need to use userspace libraries for the same things.

This is the correct solution for now.


> - Where are decimals, to avoid things like `parseInt(0.00000051) === 5`?

There is a draft proposal for this: https://github.com/tc39/proposal-decimal

Additionally, BigInt has been available for years: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe... (unfortunately does not serve as a u64 / i64 replacement due to performance implications and lack of wrapping/overflow behavior)
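
A small illustration of the wrapping point (plain BigInt never wraps; 64-bit behaviour has to be requested explicitly):

    // BigInt is arbitrary precision, so nothing wraps silently:
    2n ** 64n;                       // 18446744073709551616n
    // Explicit 64-bit wrapping is opt-in:
    BigInt.asUintN(64, 2n ** 64n);   // 0n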

> - Why are inbuilt globals allowed to be modified?

> Error/null handling are still haphazard things.

How would you suggest addressing these in a way that is backwards compatible with all existing web content?


> How would you suggest addressing these in a way that is backwards compatible with all existing web content?

Version numbers. Solving the problem in the future but not the past is still better than leaving it unsolved in the future and the past.


> Why are inbuilt globals allowed to be modified?

Because that was the original behaviour, and you can't just change that behaviour, or half of the web will break (including that 3rd-party lib and every web site depending on it).

> Why can't we have basic types? ... Shouldn't Result and Maybe types

Neither Result nor Error is a basic type. They are complex types with specific semantics that the entire language needs to be aware of.


So clean the language up, call it a slightly different name if you want, and let those who want to go modern do so. For those who can't, offer maintenance but no major new features for the original version.

Being wedded to mistakes made in the past for fear of breaking current usage is the root of all programming language evil.


> For those who can't, offer maintenance but no major new features for the original version.

How do you imagine doing that?

> Being wedded to mistakes made in the past for fear of breaking current usage is the root of all programming language evil.

Ah yes, let's break large swaths of the web because progress or something


They could use a new directive though, i.e.:

    "use awesomeness";


    from future import awesomeness


> - Where are decimals, to avoid things like `parseInt(0.00000051) === 5`?

I personally dislike decimals; I would prefer a full BigRational that is stored as a ratio of two BigInts (or as a continued fraction with BigInt coefficients).

Decimals solve the 0.1+0.1+0.1!=0.3 error but are still broken under division.
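
A quick illustration of both halves of that point (plain REPL output, nothing project-specific):

    0.1 + 0.1 + 0.1;           // 0.30000000000000004 with today's binary floats
    0.1 + 0.1 + 0.1 === 0.3;   // false
    // A decimal type makes the sum exact, but 1/3 still has no finite
    // decimal expansion, which is the "broken under division" complaint.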


>Why can't we have basic types?

The same reason you couldn't in 1996: because the assumed entry point for the language is someone who can barely conceptually manage HTML; doesn't want to have to deal mentally with the fact that numbers and text are fundamentally different kinds of thing; and - most importantly - absolutely will not accept the page failing to render correctly, or the user being shown an error message from the browser, for any reason that the browser could even vaguely plausibly patch around (just like how the browser has to guess whether an unescaped random stray < is just an unescaped less-than symbol or the start of an unclosed tag, or guess where the ends of unclosed tags should be, or do something sensible about constructs like <foo><bar></foo></bar>, or....)


Exactly, this is my thinking as well; I've mentioned the recursive concept in another comment before.

And I suspect the question of free will is not resolvable until the hard problem of consciousness is solved. The cells in our bodies may not have free will, but we might. The issue involves characterizing what exactly does or does not have free will, which is bound up with the question of consciousness.


Exactly. I've found that even with a greenfield project, there is a tension between keeping things simple (not fully engineering the code) so as to quickly get to an MVP, and the fact that under-engineered code creates technical debt that becomes more ossified the more you build on top of it.

My current thinking on a solution to this conundrum is this: try to craft the best architecture and engineering you can up-front _vertically_, but drastically reduce the workload by paring things down _horizontally_.


Indeed, this seems to be the insight around vertical slice architecture? https://www.jimmybogard.com/vertical-slice-architecture/


Reminds me of Jesus' words: Everyone seeking finds...

Yeah, there are some things you probably will never be a master of, but if you have the motivation to spend enough time at it, and not give up due to discouragement and reversals, you pretty much will become much better at it than you ever thought possible.


So my theory is that probability is an ill-defined, unfalsifiable concept. And yet, it _seems_ to model aspects of the world pretty well, empirically. However, might it be leading us astray?

Consider the statement p(X) = 0.5 (probability of event X is 0.5). What does this actually mean? Is it a proposition? If so, is it falsifiable? And how?

If it is not a proposition, what does it actually mean? If someone with more knowledge can chime in here, I'd be grateful. I've got much more to say on this, but only after I hear from those with a rigorous grounding in the theory.


As a mathematical theory, probability is well-defined. It is an application of a larger topic called measure theory, which also gives us the theoretical underpinnings for calculus.

Every probability is defined in terms of three things: a set, a set of subsets of that set (in plain language: a way of grouping things together), and a function which maps the subsets to numbers between 0 and 1. To be valid, the set of subsets, aka the events, needs to satisfy additional rules.

All your example p(X) = 0.5 says is that some function assigns the value of 0.5 to some subset which you've called X.
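
To make that concrete, here is the textbook fair-coin space sketched as a toy snippet (purely illustrative, not part of the formalism itself):

    // Sample space, events (subsets), and a uniform measure for one fair coin toss
    const omega = ["H", "T"];
    const P = (event) => event.length / omega.length;
    P(["H"]);        // 0.5 -- this is all that "p(X) = 0.5" asserts here
    P(["H", "T"]);   // 1   -- the whole sample space always gets probability 1
    P([]);           // 0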

That it seems to be good at modelling the real world can be attributed to the origins of the theory: it didn't arise ex nihilo, it was constructed exactly because it was desirable to formalize a model for seemingly random events in the real world.


> So my theory is that probability is an ill-defined, unfalsifiable concept. And yet, it seems to model aspects of the world pretty well, empirically.

I have privately come to the conclusion that probability is a well-defined and testable concept only in settings where we can argue from certain exact symmetries. This is the case in coin tosses, games of chance and many problems in statistical physics. On the other hand, in real-world inference, prediction and estimation, probability is subjective and much less quantifiable than statisticians (Bayesians included) would like it to be.

> However, might it be leading us astray?

Yes, I think so. I increasingly feel that all sciences that rely on statistical hypothesis testing as their primary empirical method are basically giant heaps of garbage, and the Reproducibility Crisis is only the tip of the iceberg. This includes economics, social psychology, large swathes of medical science, data science, etc.

> Consider the statement p(X) = 0.5 (probability of event X is 0.5). What does this actually mean? Is it a proposition? If so, is it falsifiable? And how?

I'd say it is an unfalsifiable proposition in most cases. Even if you can run lots of cheap experiments, like with coin tosses, a million runs will "confirm" the calculated probability only with ~1% precision. This is just lousy by the standards of the exact sciences, and it only goes downhill if your assumptions are less solid, the sample space more complex, or reproducibility more expensive.


> So my theory is that probability is an ill-defined, unfalsifiable concept

Probability isn’t a single concept, it is a family of related concepts - epistemic probability (as in subjective Bayesianism) is a different concept from frequentist probability - albeit obviously related in some ways. It is unsurprising that a term looks like an “ill-defined, unfalsifiable concept” if you are mushing together mutually incompatible definitions of it.

> Consider the statement p(X) = 0.5 (probability of event X is 0.5). What does this actually mean?

From a subjective Bayesian perspective, p(X) is a measure of how much confidence I - or any other specified person - have in the truth of a proposition, or my own judgement of the weight of evidence for or against it, or my judgement of the degree of my own knowledge of its truth or falsehood. And 0.5 means I have zero confidence either way, I have zero evidence either way (or else, the evidence on each side perfectly cancels each other out), I have a complete lack of knowledge as to whether the proposition is true.

> Is it a proposition?

It is a proposition just in the same sense that “the Pope believes that God exists” is a proposition. Whether or not God actually exists, it seems very likely true that the Pope believes he does

> If so, is it falsifiable? And how?

And obviously that's falsifiable, in the same sense that claims about my own beliefs are trivially falsifiable by me, using my introspection. And claims about other people's beliefs are also falsifiable, if we ask them, assuming they are happy to answer and we have no good reason to think they are being untruthful.


So your response actually strengthens my point, rather than rebutting it.

> From a subjective Bayesian perspective, p(X) is a measure of how much confidence I - or any other specified person - have in the truth of a proposition, or my own judgement of the weight of evidence for or against it, or my judgement of the degree of my own knowledge of its truth or falsehood.

See how inexact and vague all these measures are. How do you know your confidence is (or should be) 0.5 (and not 0.49), for example? Or how do you know you have judged the weight of evidence correctly? Or how do you know the transition you make in your mind from "knowledge about this event" to "what it indicates about its probability" is valid? You cannot disprove these things, can you?

Unless you want to say the actual values do not actually matter, but the way the probabilities are updated in the face of new information does. But in any case, the significance of new evidence still has to be interpreted; there is no objective interpretation, is there?


> See how inexact and vague all these measures are. How do you know your confidence is (or should be) 0.5 (and not 0.49), for example?

Well, you don't, but does it matter? The idea is it is an estimate.

Let me put it this way: we all informally engage in reasoning about how likely it is (given the evidence available to us) that a given proposition is true. The idea is that assigning a numerical estimate to our sense of likelihood can (sometimes) be a helpful tool in carrying out reasoning. I might think "X is slightly more likely than ~X", but do I know whether (for me) p(X) = 0.51 or 0.501 or 0.52? Probably not. But I don't need a precise estimate for an estimate to be helpful. And that's true in many other fields, including things that have nothing to do with probability – "he's about six feet tall" can be useful information even though it isn't accurate to the millimetre.

> Or how do you know you have judged the weight of evidence correctly?

That (largely) doesn't matter from a subjective Bayesian perspective. Epistemic probabilities are just an attempt to numerically estimate the outcome of my own process of weighing the evidence – how "correctly" I've performed that process (per any given standard of correctness) doesn't change the actual result.

From an objective Bayesian perspective, it does – since objective Bayesianism is about, not any individual's actual sense of likelihood, rather what sense of likelihood they ought to have (in that evidential situation), what an idealised perfectly rational agent ought to have (in that evidential situation). But that's arguably a different definition of probability from the subjective Bayesian, so even if you can poke holes in that definition, those holes don't apply to the subjective Bayesian definition.

> Or how do you know the transition you make in your mind from "knowledge about this event" to "what it indicates about its probability" is valid?

I feel like you are mixing up subjective Bayesianism and objective Bayesianism and failing to carefully distinguish them in your argument.

> But in any case, the significance of new evidence still has to be interpreted; there is no objective interpretation, is there?

Well, objective Bayesianism requires there be some objective standard of rationality, subjective Bayesianism doesn't (or, to the extent that it does, the kind of objective rationality it requires is a lot weaker, mere avoidance of blatant inconsistency, and the minimal degree of rationality needed to coherently engage in discourse and mathematics.)


You’re right that a particular claim like p(X=x)=a can’t be falsified in general. But whole functions p can be compared and we can say one fits the data better than another.

For example, say Nate Silver and Andrew Gelman both publish probabilities for the outcomes of all the races in the election in November. After the election results are in, we can’t say any individual probability was right or wrong. But we will be able to say whether Nate Silver or Andrew Gelman was more accurate.
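
One standard way to make "more accurate" concrete is a proper scoring rule such as the Brier score. A minimal sketch (the forecasts below are made up, not anyone's actual numbers):

    // Brier score: mean squared error between probabilistic forecasts and
    // 0/1 outcomes; lower is better, 0 is perfect.
    const brier = (forecasts, outcomes) =>
        forecasts.reduce((sum, p, i) => sum + (p - outcomes[i]) ** 2, 0) / forecasts.length;
    brier([0.8, 0.6, 0.3], [1, 1, 0]);   // ≈ 0.097
    brier([0.5, 0.5, 0.5], [1, 1, 0]);   // 0.25 -- the vaguer forecaster scores worse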


> What does this actually mean? Is it a proposition? If so, is it falsifiable? And how?

If you saw a sequence of 1000 coin tosses with, say, 99% heads and 1% tails, were convinced that the same process was being used for all the tosses, and had an opportunity to bet on tails at even (50-50) stakes, would you do it?

This is a pragmatic answer which rejects P(X)=0.5. We can try to make sense of this pragmatic decision with some theory. (Incidentally, being exactly 0.5 is almost impossible; it makes more sense to check whether it lies in an interval like (0.49, 0.51).)

The law of large numbers says that the probability of X can be obtained by conducting independent trials: in the limit, the fraction of trials in which X occurs will approach p(X).

However, 'limit' implies an infinite number of trials, so any initial sequence doesn't determine the limit. You would have to choose a large N as a cutoff and then take the average.
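
Here is what that cutoff looks like in practice, as a rough sketch (N and the use of Math.random() as a stand-in for a fair-coin process are just illustrative choices):

    // Empirical frequency of heads after N simulated fair-coin tosses
    const N = 1000000;
    let heads = 0;
    for (let i = 0; i < N; i++) {
        if (Math.random() < 0.5) heads++;
    }
    console.log(heads / N);   // hovers near 0.5, but no finite N pins it down exactly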

But is this unique to probability? If you take any statement about the world, "There is a tree in place G", and you have a process to check the statement ("go to G and look for a tree"), can you definitely say that the process will successfully determine if the statement is true? There will always be obstacles ("false appearances of a tree", etc.). To rule out all such obstacles, you would have to posit an idealized observation process.

For probability checking, an idealization which works is infinite independent observations which gives us p(X).

PS: I am not trying to favour frequentism as such, just that the requirement of an idealized observation process shouldn't be considered an overwhelming obstacle. (Sometimes the obstacles become 'obstacles in principle', like simultaneous position/momentum observation in QM, and if you had such obstacles, then indeed one could abandon the concept of probability.)


This is the truly enlightened answer. Pick some reasonably defined concept of it if forced. Mainly though, you notice it works and apply the conventions.


> If it is not a proposition, what does it actually mean?

It's a measure of plausibility - enabling plausible reasoning.

https://www.lesswrong.com/posts/KN3BYDkWei9ADXnBy/e-t-jaynes...

https://en.wikipedia.org/wiki/Cox%27s_theorem


So here's a sort of hard-nosed answer: probability is just as well-defined as any other mathematics.

> Consider the statement p(X) = 0.5 (probability of event X is 0.5). What does this actually mean?

It means X is a random variable from some sample space to a measurable space and P is a probability function.

> If so, is it falsifiable? And how?

Yes, by calculating P(X) in the given sample space. For example, if X is the event "you get 100 heads in a row when flipping a fair coin" then it is false that P(X) = 0.5.

It's a bit like asking whether 2^2 = 4 is falsifiable.

There are definitely meaningful questions to ask about whether you've modeled the problem correctly, just as it's meaningful to ask what "2" and "4" mean. But those are separate questions from whether the statements of probability are falsifiable. If you can show that the probability axioms hold for your problem, then you can use probability theory on it.

There's a Wikipedia article on interpretations of probability here: https://en.wikipedia.org/wiki/Probability_interpretations. But it is pretty short and doesn't seem quite so complete.


> For example, if X is the event "you get 100 heads in a row when flipping a fair coin" then it is false that P(X) = 0.5

I think you haven't thought about this deeply enough yet. You take it as self evident that P(X) = 0.5 is false for that event, but how do you prove that? Assuming you flip a coin and you indeed get 100 heads in a row, does that invalidate the calculated probability? If not, then what would?

I guess what I'm driving at is this notion (already noted by others) that probability is recursive. If we say p(X) = 0.7, we mean the probability is high that in a large number of trials, X occurs 70% of the time. Or that the proportion of times that X occurs tends to 70% with high probability as the number of trials increases. Note that this second-order probability can be expressed with another probability, ad infinitum.


> I think you haven't thought about this deeply enough yet.

On the contrary, I've thought about it quite deeply. Or at least deeply enough to talk about it in this context.

> You take it as self evident that P(X) = 0.5 is false for that event, but how do you prove that?

By definition a fair coin is one for which P(H) = P(T) = 1/2. See e.g. https://en.wikipedia.org/wiki/Fair_coin. Fair coin flips are also by definition independent, so you have a series of independent Bernoulli trials. So P(H^k) = P(H)^k = 1/2^k. And P(H^k) != 1/2 unless k = 1.
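
Plugging in k = 100 just to make the magnitude concrete:

    Math.pow(0.5, 100);   // ≈ 7.9e-31, nowhere near 0.5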

> Assuming you flip a coin and you indeed get 100 heads in a row, does that invalidate the calculated probability? If not, then what would?

Why would that invalidate the calculated probability?

> If not, then what would?

P(X) = 0.5 is a statement about measures on sample spaces. So any proof that P(X) != 0.5 falsifies it.

I think what you're really trying to ask is something more like "is there really any such thing as a fair coin?" If you probe that question far enough you eventually get down to quantum computation.

But there is some good research on coin flipping. You may like Persi Diaconis's work. For example his Numberphile appearance on coin flipping https://www.youtube.com/watch?v=AYnJv68T3MM


> By definition a fair coin is one for which P(H) = P(T) = 1/2. See e.g. https://en.wikipedia.org/wiki/Fair_coin.

But that's a circular tautology, isn't it?

You say a fair coin is one where the probability of heads or tails are equal. So let's assume the universe of coins is divided into those which are fair, and those which are not. Now, given a coin, how do we determine it is fair?

If we toss it 100 times and get all heads, do we conclude it is fair or not? I await your answer.


> But that's a circular tautology, isn't it?

No it's not a tautology... it's a definition of fairness.

> If we toss it 100 times and get all heads, do we conclude it is fair or not?

This is covered in any elementary stats or probability book.

> Now, given a coin, how do we determine it is fair?

I addressed this in my last two paragraphs. There's a literature on it and you may enjoy it. But it's not about whether statistics is falsifiable, it's about the physics of coin tossing.


> This is covered in any elementary stats or probability book.

No, it is really not. That you are avoiding giving me a straightforward answer says a lot. If you mean this:

> So any proof that P(X) != 0.5 falsifies it

Then the fact that we got all heads does not prove P(X) != 0.5. We could get a billion heads and still that would not be proof that P(X) != 0.5 (although it is evidence in favor of it).

> I addressed this in my last two paragraphs...

No, you did not. Again you are avoiding giving a straightforward answer. That tells me you are aware of the paradox and are simply avoiding grappling with it.


I think ants_everywhere's statement was misinterpreted. I don't think they meant that flipping 100 heads in a row proves the coin is not fair. They meant that if the coin is fair, the chance of flipping heads 100 times in a row is not 50%. (And that is of course true; I'm not really sure it contributes to the discussion, but it's true).

ants_everywhere is also correct that the coin-fairness calculation is something you can find in textbooks. It's example 2.1 in "Data Analysis: A Bayesian Tutorial" by D. S. Sivia. What it shows is that after many coin flips, the posterior for the coin's bias converges to roughly a Gaussian around the observed ratio of heads to tails, where the width of that Gaussian narrows as more flips are accumulated. It depends on the prior as well, but with enough flips the data will overwhelm any initial prior confidence that the coin was fair.

The probability is nonzero everywhere (except P(H) = 0 and P(H) = 1, assuming both heads and tails were observed at least once), so no particular ratio is ever completely falsified.
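
A minimal sketch of that calculation for the 100-heads scenario (flat prior assumed; left unnormalized, so only relative values matter):

    // Unnormalized posterior for the coin's bias q after h heads and t tails
    const posterior = (q, h, t) => Math.pow(q, h) * Math.pow(1 - q, t);
    posterior(0.5, 100, 0);    // ≈ 7.9e-31 -- "fair" is not ruled out, just wildly implausible
    posterior(0.99, 100, 0);   // ≈ 0.37   -- a heavily biased coin fits far better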


Thank you, yes you understood what I was saying :)

> I'm not really sure it contributes to the discussion, but it's true

I guess maybe it doesn't, but the point I was trying to make is the distinction between modeling a problem and statements within the model. The original claim was "my theory is that probability is an ill-defined, unfalsifiable concept."

To me that's a bit like saying the sum of angles in a triangle is an ill-defined, unfalsifiable concept. It's actually well-defined, but it starts to seem poorly defined if we confuse that with the question of whether the universe is Euclidean. So I'm trying to separate the questions of "is this thing well-defined" from "is this empirically the correct model for my problem?"


Sorry, I didn't mean to phrase my comment so harshly! I was just thinking that it's odd to make a claim that sounds so obvious that everyone should agree with it. But really it does make sense to state the obvious just in order to establish common ground, especially when everyone is so confused. (Unfortunately in this case your statement was so obviously true that it wrapped around; everyone apparently thought you must have meant something else, and misinterpreted it).


Oh I didn't take it harshly. Just wanted to clarify since you and I seemed on the same wavelength but that part didn't come across clearly :)

