
The problem I’ve always had with over-weighting deathbed advice is that dying people rarely think through the counterfactuals involved. What would actually be the consequence of not working so hard and relentlessly prioritizing personal relationships (as all such advice seems to recommend)? How much worse a future would result from financial insecurity and lack of career fulfillment? Has the advice-giver actually thought through the tradeoffs that lead you to work hard in the first place? Further, dying people’s worlds usually contract to personal relationships only, so it makes sense that this is the only aspect of life they emphasize.

"Star Trek: The Next Generation" captured this so well with the "Tapestry" episode. It showed that if you do life differently, you will indeed get a different life - but maybe not the life you thought you wanted!

https://en.wikipedia.org/wiki/Tapestry_(Star_Trek:_The_Next_...


Great ep.

This is a good point. You have to strike a balance between immediate and delayed gratification.

I try to conduct myself in a way that future me could look back on present me and say "past me took advantage of life experiences that were only available at the time" (think: youthful adventures, travel, friendships, etc.) but also "past me did a good job of setting present me up for happiness and fulfillment" (think: working reasonably hard, being conscientious, financial responsibility, etc.).


Part of this bias is that the kind of people who end up dying on a deathbed tend to have made less risky choices: deathbeds underrepresent motorcycle riders, let alone BASE jumpers. Long hours seem like the safe option; you’ll rarely get fired for working late. However, it’s easy to be pissed about how much extra time you put in when you get laid off, etc.

Thus, people looking back have more information to work with and were risk averse, so they likely worked more than they should have.


Working outside of normal hours is now a cause for suspicion. Especially in today's WFH environment. It's a prime time to convene with the handler who does the actual work. Or to exfiltrate proprietary information to your superiors in North Korea. Etc.

Whatever it is you need to do, get it done during normal business hours. If you can't manage that, find another job.


It's so bizarre for me to see this perspective in a tech space when my tech-adjacent academic R&D career exposed me to so many people who naturally wanted to pull periodic all-nighter efforts or just live in strange shift patterns that ranged anywhere from night owl to vampire...

I've been asked unpleasant questions about working into the night, and I've seen working outside regular hours listed as a red flag on "how to detect employee fraud" guides. So however bizarre you may think it is, it's real, and companies are well within their rights to behave this way. Remember, in the USA your employer has the right to fire you for any reason except the ones specifically enumerated in the Civil Rights Act, or if it violates your employment contract.

Most people working in "tech" are implementing business functions and processes, and are answerable to people on the business side of things. Academic R&D is a whole different animal.


It's also that you might have a better idea of events that couldn't have been foreseen at the time. Maybe working hard didn't pay off because you lost much of your savings in a bad investment or a bad divorce anyway. Maybe you could have done with fewer savings because of a larger-than-expected inheritance or stock reward. Or maybe the fruits of some efforts never materialized anyway. With the information available at the time, the decisions might still have been the correct ones.

How do you know whether dying people think through the counterfactuals?

Of all the people I can think of, my future self would absolutely be on the short list for who I would like advice from.

My older self can definitely advise my younger self not to work so much and so hard, without meaning that I should "relentlessly prioritize relationships". (Edit: I already prioritize relationships, but not relentlessly.)

In my eyes, this is nothing controversial at all. In this thread I am surprised that the concept of "deathbed advice" provokes so many people.


Agreed, if you look beyond the bro-ey tone of the presentation, it is smart and nontrivial advice he is delivering here. It is so easy to get distracted by complexity (esp with so much competing internet advice). Picking a couple lifts then making the numbers go up on them is effective and underrated.


In any field, what it even means to be good morphs as you go up in skill level. Non-mathematicians know only about arithmetic, so they often imagine that mathematicians must be really, really good at arithmetic. But this isn't so. Likewise, non-musicians think what must make a great musician is perfect pitch. But some of the greatest musicians in history didn’t have it while many mediocre ones do. Similarly, non-chess-players think GMs must be good at calculating zillions of moves in advance, but apparently they only calculate a small set of moves, which somehow are the right ones.

To take an example cited in the article, Einstein was so far up there that it’s nearly impossible for a non-physicist to even understand what he was so good at — crude measures like high school grades or “IQ” barely scratch the surface of the skills he was a genius at.

Now, perfect pitch does modestly correlate with musical ability, mathematicians are better than average at arithmetic, GMs do calculate more moves than the average shmo, and Einstein got much better than average grades (after all he was accepted at ETH). But that’s all, modest correlations.

There is such a thing as talent in music, mathematics, etc. but it isn’t something a psychologist standing outside these domains would ever be able to devise a test for.


Ha ha, first time I’ve seen another reference to this! Back in the day, I got such a kick out of his description that I began imitating it myself, calling it my “David Lynch Special” dish.


It’s worth mentioning that, brilliant as Dirac’s “sea of filled negative energy states” picture was, no one believes that interpretation now. The Dirac equation is better seen as the classical equation of motion for the Grassmann-valued electron field (just as Maxwell’s equations are the classical eom for the photon field). There are only positive-energy states (= quantized excitations of the field). I do think popular accounts should begin mentioning this in order not to keep reinforcing the old Dirac sea interpretation.
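Concretely, in the modern treatment the Dirac equation

    (i\gamma^\mu \partial_\mu - m)\psi = 0

is quantized with a mode expansion of the standard textbook form (conventions as in, e.g., Peskin & Schroeder):

    \psi(x) = \int \frac{d^3 p}{(2\pi)^3} \frac{1}{\sqrt{2 E_p}} \sum_s \left( a^s_p u^s(p) e^{-i p \cdot x} + b^{s\dagger}_p v^s(p) e^{+i p \cdot x} \right)

Both a^{s\dagger}_p (electrons) and b^{s\dagger}_p (positrons) create quanta of positive energy E_p, so a filled sea of negative-energy states never enters.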


> no one believes that interpretation now

I know of at least one (tenured) person that does, at least to some degree: Felix Fenster at Regensburg University. When I met him years ago, he said taking the Dirac Sea interpretation seriously was what caused him to come up with his own program for a theory of quantum gravity, called Causal Fermion Systems[0]. I haven't looked into his theory in detail but I did find a reference to the Dirac sea[1]:

> In order to obtain a causal fermion system, we first have to choose a Hilbert space. The space of negative-energy solutions of the Dirac equation (i.e. the Dirac sea) turns out to be a good choice. […] As a side remark, it is worth noting that the Dirac sea vacuum is to be seen as an effective model describing a particular minimizing causal fermion system. It is one particular physical system that we can describe as a minimizing causal fermion system. But we should really only think of it as an effective description, in the sense that it describes only the macroscopic structure of spacetime, whereas its microscopic structure on the Planck scale is essentially unknown. […] The idea of the Dirac Sea did, however, play an important role in the conception of the causal fermion systems framework, and most of the existing literature is written with that point of view in mind. A more detailed motivation for why it is a natural starting point can be found here[2].

[0]: https://causal-fermion-system.com/

[1]: https://causal-fermion-system.com/intro-phys/

[2]: https://causal-fermion-system.com/theory/physics/why-dirac-s...


*Felix Finster

Looks like my auto correction messed up.


Truly a vision of dystopia.


Photons.


Yes, because in point of fact this company is the best at what it does — preventing security breaches. The outage — disruptive as it was — was not a breach. This elemental fact is lost amidst all the knee jerk HN hate, but goes a long way toward explaining why the stock only took a modest hit.


That's a somewhat narrow definition of "security."

The 3rd component of the CIA triad is often overlooked, yet availability is what makes the protected asset—and, transitively, the protection itself—useful in the first place.

The disruption is effectively a Denial of Service.


> it's much easier to conduct controlled experiments.

Very true. But this means more statistics and controls are necessary to get a solid result from a social science experiment than from a particle physics experiment, no? Clearly, this is practically impossible, but there you go.


> Clearly, this is practically impossible

No it's not? You put more money into the studies and you can do bigger, better versions of them.

A major obstacle to putting more money into studies: people jerking themselves off about how soft sciences are a joke and hard sciences are Super Serious Business.


As my sister, who is studying one of the soft sciences, put it to me when I pointed out the lack of rigor compared to the hard sciences: "Sure, we could make psychology a hard science, but pesky ethics boards won't approve me raising batches of several hundred human clones in controlled environments for each test."


But why do those fields deserve more money, when at least a large part of the problem is cultural?

One example is the famous reluctance to publish negative results in psychology. Nearly all published results in (collider) particle physics are negative.

If senior faculty prefer to only hire people with a string of published positive findings, you are literally encouraging p-hacking. Again, they are not "bad" people; it is just that the system the senior people have set up in that field is not conducive to doing good science.


> But why do those fields deserve more money

Because it'd be good to understand what makes people happy, for example. Or what enables relationships to thrive. Or when different forms of government are suitable or unsuitable to solve a set of problems, etc.

Sorry to break it to the hard-sciencers, but the vast majority of opportunities left in the western world to improve people's lives are not in particle accelerators; they're in answering questions like: "what actually helps people feel satisfied in life, loved in their relationships, and belonging in their community?"

> At least a large part of the problem is cultural

Is it? Why so?

Negative results aren't published in almost any field, and that's actually a good on-ramp to the discussion we should be having, which is about the broken incentives of science and scientific publishing specifically. The broken incentive model isn't special to the softer sciences, and it has far more dire consequences in other domains.

You can't possibly think that soft sciences are the only ones hiring people with a string of positive results... right?


> Sorry to break it to the hard-sciencers

Believe me, you aren't "breaking" anything to anyone. If you could solve the secret of happiness (your example), no amount of money would be too great.

The issue isn't whether the questions of social science would be good to figure out. Definitely they would, to the extent there is actually a "thing" to figure out, which may or may not be the case; i.e., "what makes people happy" may be so contingent and/or so ineluctably open to interpretation that it makes no sense as a rigorizable concept. (There is nothing wrong with unrigorous concepts, btw; these have been fruitfully explored by the poets and philosophers and therapists.)

Ok, so even granting that there is a stable, rigorizable "truth" for the social sciences to discover, the issue is whether the methods and analyses as they have been practiced are effective or even could be tweaked to be effective. Clearly, they aren't. And not just a few bad apple studies, but seemingly the whole darn lot.


> Clearly, they aren’t

Arguing is easy when you just assert your conclusion, eh?


I agree that studying psychology better could be beneficial. Is it possible? Or more to the point, is it merely a matter of money, as you said?

I said a large part of the problem is cultural, I did not say that psychology is the only field with cultural problems. I'm not sure how you got that idea.


No no, what you said is that it's "clearly practically impossible" to have more statistics, more controls, etc. to get higher powered studies of high-chaos questions like the ones asked in the soft sciences.

I said, to that point: no it's not. You just do bigger, longer term, more complete studies. The limiting factor on this -- right now -- is typically money. Perhaps you can pour infinite money and problems with e.g. recruitment or monitoring will still prevent us from getting to statistical power, but maybe not.

That is not the only problem social sciences face, but most of the problems they face are not exclusive to social sciences whatsoever, which then prompts the obvious question of why they get so much flak.
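For a back-of-envelope sense of what "bigger" means, here is a rough per-group sample-size calculation for a two-sided two-sample comparison, using the usual normal approximation (illustrative numbers, not from the thread):

    from scipy.stats import norm

    # n per group ~ 2 * (z_{1-alpha/2} + z_{power})^2 / d^2
    def n_per_group(d, alpha=0.05, power=0.80):
        z_a = norm.ppf(1 - alpha / 2)
        z_b = norm.ppf(power)
        return 2 * (z_a + z_b) ** 2 / d ** 2

    for d in (0.8, 0.5, 0.2):  # Cohen's large / medium / small effects
        print(f"d = {d}: ~{n_per_group(d):.0f} subjects per group")

A typical "small" psychology effect (d = 0.2) already needs roughly 400 subjects per group at p < 0.05; demanding a physics-style 5-sigma threshold multiplies that several-fold, which is where the money goes.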


I never said anything about practicality; you are confusing my replies with someone else's. I said this:

"But why do those fields deserve more money, when at least a large part of the problem is cultural."

More money won't fix the cultural problems with the field. Maybe the lack of a universal quantitative framework makes issues that are slightly bad in some fields much worse in the social sciences. I don't know. But it's naive to just say other fields have the same cultural problems so we can ignore it for MY field.

There's a famous anecdote where Dyson came to Fermi with a pseudoscalar meson theory of meson-proton scattering. There were a handful of datapoints that matched experimental results.

However, Fermi dismissed Dyson's theory, despite the empirical agreement, because it had four free parameters ("with four parameters I can fit an elephant, and with five I can make him wiggle his trunk" is the quote, IIRC). This is difficult but essential criticism that all fields need.
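A toy version of Fermi's complaint, with polynomials standing in for the elephant (an illustrative sketch, not the original calculation):

    import numpy as np

    # Enough free parameters will fit anything, which is why a good fit
    # with many parameters proves nothing about the underlying law.
    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 1.0, 6)
    y = 2.0 * t + rng.normal(scale=0.1, size=t.size)  # true law: linear + noise

    for deg in (1, 5):
        p = np.polynomial.Polynomial.fit(t, y, deg)
        rms = np.sqrt(np.mean((y - p(t)) ** 2))
        print(f"{deg + 1} parameters -> RMS residual {rms:.4f}")

The 6-parameter fit passes through all 6 points exactly, yet it says nothing about the (linear) law that actually generated the data.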

AND all this is apart from the fundamental question of whether important generic scientific truths can really be gleaned from the social sciences.


The question of "do those fields deserve resources" is answered as follows: are there interesting questions in those fields that we should ask and have answered (well)? I think the parent poster is saying: yes, there are.

This question is orthogonal to the question of whether the organizations currently conducting research in those areas are well-organized. You could fund them well and also demand re-organization as a condition. You could even find other scientists to do this work. But if you don't think the work is important, none of this matters.


That is fair. Is what you suggest possible though?


I mean, a psych experiment will never have an N comparable to a particle physics experiment or be able to reach the 5-sigma threshold for discovery that now prevails in physics. On the other hand, the object of study for psych is intrinsically interesting since we are people, and if something reliable can be gleaned then it's certainly worth money. My concern is that "bigger, better" (as you say) would have to include millions of people across cultures and times, tracked longitudinally, with randomization and controls. (Again, more complexity requires more statistics, not less.) Is this practical? Maybe ...


Yes, the devil always seems to be in the details in psychology experiments. Were the experimenters giving subtle cues to the child, and was this simply a test at how deftly the child picked up these cues? What was the exact wording of the “deal” offered to the child and would a wording change alter the results? Was the experiment conducted at a time of food scarcity or abundance? What were the prevailing cultural norms of how a child “ought” to behave? Would the results change if average child age was 6 months older or younger when experiment was conducted? What was in the drinking water and the air and the paint at the testing site? (With strong claims in the literature that all these are correlated with measures such as average population IQ!)

In the face of all these potential confounders, more statistics and controls seem necessary than, say, in a physics collider experiment on electrons (each electron possessing exactly two characteristics, location and spin, and all such electrons behaving identically regardless of location or time). Yet, even in this setting of simplicity and reproducibility, physicists have still found it necessary to establish a stringent five-sigma threshold for discovery — 3-sigma anomalies come and go. Such a stringent threshold is unthinkable in psychology due to practical considerations. Ergo, it’s hard to see how psychology can become a reliable empirical science.
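For calibration, the sigma thresholds convert to tail probabilities like so (one-sided, the particle physics convention; a quick scipy check):

    from scipy.stats import norm

    for sigma in (2, 3, 5):
        print(f"{sigma} sigma -> p = {norm.sf(sigma):.2e}")
    # 2 sigma -> 2.28e-02  (roughly psychology's p < 0.05)
    # 3 sigma -> 1.35e-03  (the anomalies that come and go)
    # 5 sigma -> 2.87e-07  (the discovery threshold)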


I'm not really disagreeing, but the 5-sigma rule is there because the hypothesis is not formulated before you run the experiment.

If you make the hypothesis first, 3-sigma is quite enough. Many physics experiments do exactly that, but famous high-energy ones don't.

(That said, not having a hypothesis beforehand was very common in psychology before the 21st century.)


This is one of those rules of thumb that don’t make any sense to me as someone who works with data in the field. You conduct the same exact experiment with the same conditions and same data and get the same result. But whether you speculate on a hypothesis before or after suddenly changes the significance threshold with no actual change in the underlying data or method? Did you cast some ancient spell when you came up with the hypothesis or something?


> with no actual change in the underlying data or method

Proving a known hypothesis or deciding what you want to know after the fact are completely different methods.

> Did you cast some ancient spell when you came up with the hypothesis or something?

"I'll know it when I see it" is an incredibly vague way of doing science that requires extra rigor somewhere else to compensate.

Or, for a maybe better explanation, testing multiple hypotheses is subject to this kind of failure:

https://xkcd.com/882/

So you need more data confirming your theory.

But if you state your single hypothesis beforehand, you are in the situation of the top square, with a high-confidence result.
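A quick simulation of that comic's failure mode, assuming 20 independent tests where every null hypothesis is true (the jelly-bean setup; the numbers are hypothetical):

    import numpy as np

    rng = np.random.default_rng(0)
    trials, tests, alpha = 100_000, 20, 0.05
    # Under the null, p-values are uniform on [0, 1].
    p_values = rng.uniform(size=(trials, tests))
    false_hit = (p_values < alpha).any(axis=1).mean()
    print(f"P(at least one p < {alpha} in {tests} tests) ~ {false_hit:.2f}")

This prints roughly 0.64 (analytically, 1 - 0.95^20): the green-jelly-bean headline appears in almost two runs out of three, versus exactly alpha = 0.05 when the single hypothesis is fixed in advance.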

