
What business did you build around it?

Custom forms for concrete pours. Also did a little bit of work making supports for plaster casting.

I was never really able to sell the advantages to artists, but got some good side gig money for landscaping stuff.

(Just to clarify: the business is wound down, but I personally still use the approach in art projects)


Yeah, but without a hard line, how would you decide what to publish?


Not publishing results with p >= 0.05 is the reason p-values aren't that useful. This is how you get the replication crisis in psychology.

The p-value cutoff of 0.05 just means "an effect this large, or larger, should happen by chance 1 time out of 20". So if 19 failed experiments don't publish and the 1 successful one does, all you've got are spurious results. But you have no way to know that, because you don't see the 19 failed experiments.
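
A quick simulation makes the selection effect concrete (a made-up Python sketch, not anyone's actual study): run many experiments where the true effect is zero, "publish" only the ones with p < 0.05, and every published result is a false positive.

  import numpy as np
  from scipy import stats

  rng = np.random.default_rng(0)
  total, published = 10_000, 0
  for _ in range(total):
      # Both groups come from the SAME distribution: the true effect is zero.
      a = rng.normal(0, 1, 30)
      b = rng.normal(0, 1, 30)
      _, p = stats.ttest_ind(a, b)
      if p < 0.05:
          published += 1  # "significant", so this one gets written up

  # Roughly 1 in 20 null experiments clears the bar by chance alone,
  # and those are the only ones the literature ever sees.
  print(f"published {published} of {total} null experiments ({published / total:.1%})")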

This is the unresolved methodological problem in the empirical sciences that deal with weak effects.


> "an effect this large, or larger, should happen by chance 1 time out of 20"

More like "an effect this large, or larger, should happen by chance 1 time out of 20 in the hypothetical universe where we already know that the true size of the effect is zero".

Part of the problem with p-values is that most people can't even parse what they mean (not saying that's your case). P-values are never a statement about probabilities in the real world, but always a statement about probabilities in a hypothetical world where all effects are zero.

"Effect sizes", on the other hand, are more directly meaningful and more likely to be correctly interpreted by people on general, particularly if they have the relevant domain knowledge.

(Otherwise, I 100% agree with the rest of your comment.)


Publishing only significant results is a terrible idea in the first place. Publishing should be based on how interesting the design of the experiment was, not how interesting the result was.


A p-value doesn't measure interestingness. If p > 0.05, there was no result at all.


Both of those statements are false. Everything has a result. And the p-value is very literally a quantified measure of how interesting a result was. That's the only thing it purports to measure.

"Woman gives birth to fish" is interesting because it has a p-value of zero: under the null hypothesis ("no supernatural effects"), a woman can never give birth to a fish.


I ate cheese yesterday and a celebrity died today: P >> 0.05. There is no result, and you can't say anything about whether my cheese-eating causes or prevents celebrity deaths. You're confusing hypothesis testing with P-values.


The result is "a celebrity died today". This result is uninteresting because, according to you, celebrities die much more often than one per twenty days.

I suggest reading your comments before you post them.


The p-value doesn't measure interestingness directly, of course, but I think people generally find nonsignificant results uninteresting because they think the result is not difficult to explain by the definitionally uninteresting "null hypothesis".

My point was basically that the reputation / career / etc. of the experimenter should be mostly independent of the study results. Otherwise you get bad incentives. Obviously we have limited ability to do this in practice, but at least we could fix the way journals decide what to publish.


Research is an institution. Just qualify the uncertainty and describe the further work needed to investigate.


In THEORY, yes, but in practice I don't think there are many journals that will actually publish well-done research that doesn't come to some interesting conclusion and find some p < .05. So....


Plenty of journals do, just mostly in fields that don't emphasize P-values. Chemistry and materials science tend to focus on the raw data, with the instrument output included, and an interpretation in the results section.

The peaks in your spectra, the calculation results, or the microscopy image either support your findings or they don't, so P-values don't get as much mileage. I can't remember the last time I saw a P-value in one of those papers.

This does create a problem similar to publishing null result P-values, however: if a reaction or method doesn't work out, journals don't want it because it's not exciting. So much money is likely being wasted independently duplicating failed reactions over and over because it just never gets published.


That is what this article is about: changing the expectations of researchers and journals so they are willing to publish and read research without p-values.


I see. Thanks for clarifying.


Preregistered studies don't have any p-values before they're accepted.


You shouldn't publish based on p < 0.05, but on orthogonal data you gathered on the instances that showed p < 0.05.


Another reason that they should never have been allowed to ingest all the books in the first place. Without paying for the rights to use the digital form of the books, a use explicitly prohibited by the publishers, they digitized them anyway. If they used them to train an LLM, and the LLM regurgitates near-facsimiles of all the copyrighted works without compensation to the original rights holders, that seems like something that should be illegal.


IMHO, all the arguments about where to place control over who decides what gets built are just political power grabs by one constituency or another. Different companies do it differently, and I'm not sure there is one best way. Any time engineering or product or sales or marketing wants more power, they come up with reasons why their function should have more control, in every company, everywhere.

I don't think arguments that any one function should always drive can be true, because who is best qualified to make those decisions depends on things like judgment, experience, domain knowledge, and customer understanding.

Instead of saying a specific function should have control, I think the best approach is to empower the people who have been best at making decisions about scope. That can be engineering, but it can also be product, etc.


The approach is fundamentally flawed. You can’t query an LLM as to whether it has a theory of mind. You need to analyze how its internal logic works.

Imagine the opposite result had occurred, and the LLM had output something that was considered a theory of mind… Does that prove it has one, or just that it was trained on data containing something that makes it sound like it has a theory of mind?


My fridge stopped working last year. We called a repair technician, who swapped out the whole PCB and charged us a few hundred dollars. In retrospect, I think it was one bad relay on the board...

Next time it dies, I plan to find the defective relay, desolder it, and solder in a replacement myself. Imagine how much better it would be if there were a readout with an error code, on a fridge with easily removable relays you could unplug and replace. I know making these kinds of things repairable is not a priority, but I wish it were.


I wasn’t lucky enough to have a “repairable” refrigerator, according to the repairman. An 8-cent capacitor had dried out and failed, which I discovered with a $150 tool.


The contacts in a socket can oxidize and go bad. Or the relay can get jostled out of the socket during a move.

It would be cheaper to ship a replacement PCB with the fridge.


I’ve had a fan die TWICE now in my fridge. It blows the cold air from the bottom freezer up to the top refrigerated portion, so it’s catastrophic when it happens. At this point I just keep a third fan in a box in the garage as a reserve so I don’t lose $100+ in groceries. It’s ridiculous how poorly made these “durable goods” are, and how expensive these repairs would have been if I weren’t the smallest bit handy.


Which part of that is a pun?


"slow burn", like the processors I suppose.


I think I must be looking at this wrong. How are "Photographic plates and film, exposed and developed, other than motion-picture film" the most complex product? Surely CPUs are harder to make than that? Maybe the categories are old, and there's nowhere for new things like CPUs to go?


It gets worse. “Metal chain” is more complex (1.47) than “nuclear reactors” (1.41). “Stainless steel wire” is more complex (1.21) than “aircraft launching gear” (0.5), and so on.

It’s just a poorly defined metric. “Product complexity” is computed by looking at the countries which export the product. A product is “more complex” if its exporters make a lot of different products, and if few other countries export it.

So photographic equipment is only made in a small number of countries (Germany, Japan) which are very integrated in the world economy. Processors probably fare worse because Taiwan makes them and is “insufficiently” complex.
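
You can see the circularity in a few lines of code. This is a toy sketch of one round of the "method of reflections" that rankings like this are built on (the country-product matrix is invented, and the published index iterates further and normalizes via an eigenvector computation):

  import numpy as np

  # M[c, p] = 1 if country c competitively exports product p (toy data).
  M = np.array([
      [1, 1, 1, 1],   # a diversified exporter
      [1, 1, 0, 0],   # specialized exporters
      [0, 1, 1, 0],
  ], dtype=float)

  diversity = M.sum(axis=1)  # k_c,0: how many products a country exports
  ubiquity = M.sum(axis=0)   # k_p,0: how many countries export a product

  # One round of the mutual recursion: product scores come from country
  # diversity, country scores from product ubiquity.
  product_score = M.T @ diversity / ubiquity  # avg diversity of a product's exporters
  country_score = M @ ubiquity / diversity    # avg ubiquity of a country's products

  # The last product is exported only by the most diversified country, so it
  # gets the top score: exactly the photographic-film situation above.
  print(product_score)  # approx [3.0, 2.67, 3.0, 4.0]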


Indeed, the definition is somewhat of an extrapolation. Product complexity defined this way probably misses intricate supply-chain dependencies too. E.g., EUV lithography is essential for modern chips, but it is produced by ASML in NL, which has otherwise been in steady economic-complexity decline for decades (from rank 15 in 1995 to rank 26 in 2021). Small-country specialization might not be normalized adequately.

Eyeballing the rankings, the methodology is probably biased towards manufacturing-materials supply-chain complexity rather than "knowledge economy" complexity.

Nevertheless, it's probably directionally correct at more aggregate levels.


You answered your own question.

Making a CPU requires exactly "photographic plates and film, exposed and developed, other than motion-picture film", i.e., lithography at more and more extreme wavelengths.


(2015)


Yeah unfortunately the graphics really don't hold up in 2024 /s

