nxpnsv's comments

CSV sure is a step up from Excel though…

I use this every day, everywhere. Together with pyenv-virtualenv and poetry it dramatically improved my Python experience…

If you're using poetry, how does pyenv-virtualenv fit in?

I use poetry for dependency management and pyenv-virtualenv for venvs. I found this setup to work better for me than poetry's venvs. Maybe things have changed and I'm out of date on that, but now I'm just used to this setup...

I tried TablePlus and liked it; it just feels right. I reported a minor error in the docs, got a friendly message and a year of license. 10/10


I tried a few ASCII fonts on ChatGPT, and it interpreted every word as "OPENAI", which is hilarious. Maybe they read the paper :)


How would you randomize 4-day work weeks in an unbiased way?


If you run a trial like this hoping to get an idea of what effect the thing you're doing has, you have two things to worry about (using the terms common at least in econ here). The first is internal validity: does what you're doing give you a good (i.e., unbiased) estimate of the effect you're trying to estimate? The second is external validity: does your result tell you enough about what would happen if you did the thing you did, but outside your trial context?

If you do a trial of four-day workweeks, you're probably always going to have issues with the latter for any feasible setup. As you allude to, the companies that self-select into participating in such a trial are probably not a good representation of the wider economy, even if you randomize among them.

Now if you randomize rollout of four-day weeks at least among those willing to participate, and you didn't lose about 50% of participants between the start of your trial and the follow-up survey, you would at least get an unbiased estimate of what the policy did for those companies. But alas, they don't even do that part.
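
For what it's worth, here's a minimal sketch of that last point (in Python, with made-up numbers, not data from the trial): randomizing the rollout among the volunteering companies gives an unbiased difference-in-means estimate for those companies, while losing about half of them to selective follow-up can shift it.

    # Illustrative only: hypothetical outcomes, not data from the trial.
    import random
    random.seed(0)

    companies = []
    for _ in range(200):
        treated = random.random() < 0.5            # randomized rollout
        effect = 5.0 if treated else 0.0           # assumed "true" effect
        outcome = random.gauss(100, 10) + effect   # e.g. some wellbeing score
        companies.append((treated, outcome))

    def diff_in_means(sample):
        t = [y for d, y in sample if d]
        c = [y for d, y in sample if not d]
        return sum(t) / len(t) - sum(c) / len(c)

    print("full sample:", round(diff_in_means(companies), 2))   # close to 5

    # If ~50% drop out before follow-up and dropout depends on the outcome,
    # the same estimator no longer targets the effect for the original group.
    followed_up = [(d, y) for d, y in companies if y > 100]
    print("after selective attrition:", round(diff_in_means(followed_up), 2))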


You can't, which makes it difficult or maybe impossible to have hard facts about it


Seems like a bad excuse to not even try to go beyond shrugging/handwaving.


Nope - it's a great reason to not even start spreading more bullshit into the info-sphere.

(The same can be said for almost everything related to nutritional "science".)

If we don't have a good mechanism for knowing something is true or not, we should acknowledge that and approach the problem philosophically / aesthetically.

We need to know what we know so that we can use knowledge as our building blocks, not fake information.

You can't magic science into existence and then use it to make decisions.


It is not true for Physics either.


Who's going to replicate an experiment that takes a $5 billion particle accelerator?

For that matter, who's going to replicate a condensed matter experiment for which a grad student took three years to build the apparatus at a cost of $250,000?

Peer review is an absolute joke. In principle, researchers could publish their data sets and reviewers could see if they could replicate the analysis, but they don't do it because they know that if they tried they'd get different answers; see

https://www.nature.com/articles/d41586-023-03177-1


>>> For that matter, who's going to replicate a condensed matter experiment for which a grad student took three years to build the apparatus at a cost of $250,000?

That was my thesis. To make it worse, the laser I used became unavailable as I was finishing up. My work was adopted by 2 labs but required extensive adaptation.


> Who's going to replicate an experiment that takes a $5 billion particle accelerator?

In the case of the LHC, which I assume is what you're talking about, this is why we have 4 pretty much independent detectors around the LHC looking at different but at least partially overlapping stuff. The Higgs boson was detected by the ATLAS detector, but wasn't considered to be "discovered" until it had also been detected at the CMS detector.

> Peer review is an absolute joke

That strongly depends on the field and the journal in question.


If they flew out the peer reviewer to interview the researcher and take a look at their apparatus, or if the peer reviewer loaded up a Jupyter notebook and rechecked the analysis, there might be some value in peer review. If peer review is just somebody looking at the paper superficially, then it's of very limited worth.

When I wrote my first paper, somebody else had done a very similar experiment at another university and we wound up peer reviewing each other's papers. It turned out we were using the same bogus data analysis method that was being used in thousands of other papers. I was vaguely aware of this when I wrote the paper but did not feel psychologically safe at all in an environment where the American Physical Society claimed a PhD had a 3% chance of having a permanent career in the field.

Someone who later became a titan in the field was at our lab then, crying some nights because he had no idea where he was going to get his next postdoc. More than a decade later, after he had gotten tenure, he wrote a paper about this problem in a statistics journal that physicists probably won't even read.


That sounds (as I said before) quite specific to the field you found yourself in. That isn't how things work everywhere.

For a relatively recent example - in my field of quantum information someone found in the course of reviewing some newer work that there was a mistake in one of the big foundational papers that this newer work was based on. They shared it with some of their colleagues and together they did some work finding out exactly where the error was and what the impact is on work that is based on it.

The response from the community was they got a talk slot at a pretty prestigious conference (QIP 2023, you can watch the video here: https://www.youtube.com/watch?v=2Xyodvh6DSY) and the work they did has been published here https://quantum-journal.org/papers/q-2023-09-07-1103/ and in a followup paper here https://www.nature.com/articles/s41567-023-02289-9.

My point is essentially that it isn't a big shock that this error wasn't found by peer review when the foundational paper was published in 2008. Even when it is working as well as possible, peer review can only detect most errors, not all of them. However, when your field is functioning "normally" and everyone does their job properly, this doesn't matter, because the errors that get past peer review get detected by other people building on the work later. On the other hand, if your field is not functioning properly and people are knowingly doing stuff that is wrong, then peer review can't help you. If the state of the field is so broken that people who know stuff is wrong are still pushing for it to be published, then you aren't in a place where peer review can save you.

My opinion is you are incorrectly putting the blame on the (highly imperfect) process of peer review here where it doesn't really belong. Peer review as we perform it today sucks for many reasons, but it isn't to blame for the problems you saw.


It's not to blame for these problems but it fails to counter them.

In medicine see

https://www.cochranelibrary.com/

where there is a systematic process to search the literature and select papers with usable results. I've never seen more than 50% of papers get accepted and the other day I saw one where they found 80 invalid papers and 2 valid papers. It seems to me that peer review isn't accomplishing very much if it is passing through papers that can't be built on. For that matter, why is this work even being funded?


Yeah I wouldn't expect it to counter such problems. Peer review is by its nature review by other people in the same field. If you have a field where "bogus" methods are being used in thousands of papers then you aren't going to be able to get things reviewed properly.

This isn't an indictment of peer review as it is practised in every field.


I’ll say a good field is an island in an ocean of bad fields.

Look at hep-th. If in 100 years people conclude there is no evidence for supersymmetry and extra dimensions, something like 90% of that field will be "not even wrong". So much of it is the drunk looking for his lost keys under the lamppost. I mean, there is a huge industry in 2-d gravity, where it is possible to calculate things, but gravity doesn't attract in 2-d, so the line between that and Newton's apple is unclear. I've talked to astro-ph refugees who think that current theories about accretion disks are "not even wrong".

Then there are all the folks who tell me firewalls make no sense because there is (locally) no such thing as the event horizon; but from the viewpoint of people who fall in, getting killed at the apparent horizon instead is a tiny difference. Meanwhile the nonsensical "black hole information paradox" (sorry, no unitarity = no quantum mechanics) still gets breathless and insufferable articles written about it just because somebody famous for their disability made a bet. If young people were able to get established in the field, you wouldn't see people so fixated on bad ideas.


And the same was done for LIGO and the discovery of gravitational waves: 2 observatories, one in Hanford and one in Livingston.


Yeah, if one tried to find rot in academia, I think applied particle physics is one of the fields furthest from it. The scientists are so allergic to unfalsifiable theories that they start to doubt themselves when a prediction is off by nanometers.


> For that matter, who's going to replicate a condensed matter experiment for which a grad student took three years to build the apparatus at a cost of $250,000?

If the results are interesting to anyone and there is more science to do, people will eventually replicate (or fail to do so, like with Majoranas). If not, then why would we care about replication?


Did you write many papers in experimental particle physics? I did, and in my experience, peer review was not a joke. And results from different experiments really are used as validation. Citations are not just used as a metric; they help to leverage prior work.


Or computer science, at least when the code is also published.

(That said, performance measurements in most papers are complete junk. Most code written by academics is missing trivial, obvious optimisations which could completely change any comparative benchmarking results.)


I'm not sure that the average CS paper really tested the algorithm they thought they were testing. That impression comes from a lot of commercial projects where people weren't using the algorithm they thought they were using (even simple algorithms), and a few cases of working with code from CS researchers.

There was that time I got a C program from a famous ML researcher, tried to run it, and it segfaulted. I pointed gdb at it and realized it was crashing before it even got into main(). Understanding why was an educational experience (don't allocate an 8 GB array you never use on a 32-bit machine... It worked for the researcher because he had a 64-bit machine), but it did not increase my faith in the code quality of academic projects.

Some CS researchers are brilliant programmers but the fact is that ordinary devs make a living writing code and CS researchers make a living writing papers.


Allocating a big struct for everything and not using pointers is one of the possible techniques for dealing with data in C programs. Anyway, having a requirement of 8 GB of RAM is not a good example of something that "can not be replicated".


I know all about alternate ways to deal with memory; if I were programming purely for fun I'd write assembly and not waste time moving the stack pointer around and on other meaningless activities associated with calling conventions.

It's a sign of poor quality code.

He probably was using the array for something at one point, quit using it, and never bothered to delete it.

And of course it is C, so there are no unit tests, despite it being the kind of code that would be eminently unit testable if it weren't written in C (not that you couldn't write unit tests for C, but who does?).


> Most code written by academics is missing trivial, obvious optimisations

I agree with this, but I would pose it the opposite way. Academics shouldn't be prematurely optimizing their code: they don't have the resources for it, nor will their results be representative across all hardware. I routinely see (and publish) papers with novel cryptographic protocols where the reviewers will say things like "compare the running time of your algorithm to a state-of-the-art production system that Apple spent $50m developing and optimizing for their silicon, using hand-tuned assembly." We're not going to win in that benchmark, and we don't have the resources to even compete. Moreover, the resulting code wouldn't make for good reference code. But that doesn't mean we shouldn't publish our work, so that e.g., Apple (or other resources) can begin the process of optimizing it for production.


> Academics shouldn't be prematurely optimizing their code

While I agree with that in principle, whether or not a given algorithm is fast enough to be practically usable is very useful information for the reader of your paper. Do I need to read your paper, understand it, implement it myself, and only then figure out that it's 3 orders of magnitude slower than existing algorithms? I know it's a lot of work, but as the reader I don't want to do that work either.

I've spent the last few years working on a novel collaborative editing algorithm for text editing. If the algorithm was orders of magnitude slower than existing CRDTs, it would never get adopted. And I want my work to be used. I want to use it, but again, only if it's fast enough to be practically usable. I ended up writing a very highly optimised implementation of the algorithm, which took a massive amount of time, to answer that question. For the paper, I spent another couple of days writing a much slower reference implementation that people can read to understand how it works. (The optimised version is thousands of lines, and the reference code is a few hundred lines).

I understand how much work and expertise is needed to do things like this, but I still want someone to do this work. Even if it's "We wrote a straightforward implementation of <competing algorithm> and <our algorithm> in plain C code. They perform at roughly similar levels of performance <see chart>. We suspect highly optimised versions of both approaches would also perform at roughly similar levels of performance."
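
Not anyone's actual benchmark, just a sketch of the shape of that comparison (in Python rather than plain C, for brevity); merge_ours and merge_competing are placeholder functions standing in for straightforward implementations of the two algorithms.

    # Placeholder implementations: both just apply insert-at-position edits,
    # standing in for plain reference versions of the two algorithms.
    import random
    import timeit

    random.seed(1)
    ops = [(random.randrange(5000), chr(random.randrange(97, 123)))
           for _ in range(10_000)]

    def merge_ours(ops):
        doc = []
        for pos, ch in ops:
            doc.insert(min(pos, len(doc)), ch)
        return "".join(doc)

    def merge_competing(ops):
        doc = ""
        for pos, ch in ops:
            p = min(pos, len(doc))
            doc = doc[:p] + ch + doc[p:]
        return doc

    for name, fn in [("ours", merge_ours), ("competing", merge_competing)]:
        secs = timeit.timeit(lambda: fn(ops), number=5) / 5
        print(f"{name}: {secs:.3f} s per run")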


Sorry, what?

It's extremely true in physics. I've published in (good) physics journals.

Computational papers: Authors leave out details because they don't want to give up the goose that lays the golden eggs. They broadly describe the technique ("solve this PDE computationally"), and anyone who wants to replicate has to craft his/her own solution. If they fail to replicate, the authors just say "your simulation was poor and probably numerically unstable".

This was the norm.

Experimental papers: Same thing. Experimentalists are inventors. They don't buy off-the-shelf equipment; they build their own. They don't provide details of how they built it in their papers.

At conferences, PIs would openly discuss whether they "believed" a highly cited paper. When I was a new grad student, this troubled me. Why should belief play a role? Just replicate it! A year later: "Oh, of course these studies can't be replicated."

It's one reason I left academia. I was doing theoretical/computational physics, but the whole game reminded me of English literature: The value of your work was always seen with subjective eyes. Your paper could be rejected simply because the reviewer doesn't "believe" it.


> Why should belief play a role? Just replicate it!

If you are talking about theory or numerical simulation papers, just replicating the calculations in the paper is not enough to believe the conclusions. There are many assumptions that go into any calculation, in terms of how much of a spherical cow you treat reality as. Changing those assumptions will change the conclusions. The assumptions are not always clearly stated, and even the authors might not realize all their assumptions. It can sometimes take many years or decades for people in the field to figure out how a paper was wrong.

So yes, experts do have to depend on their subject intuition to determine how much they believe some surprising result. Not to mention, if you go through historical papers in your field you will inevitably come across decades-long fights where one group comes to one conclusion in their papers and another comes to a different one, before the situation is resolved one way or the other.


Except it's the same problem for experimental papers. My thesis was based on a highly cited experimental study; my role was to "support" that paper with numerical simulations. And yet, at conferences, people would openly discuss whether the original, highly cited paper was fact or fiction.

The guy built/fabricated a device, and reported interesting measurements. Want to know if it's true? Build it yourself! Except when you do and get differing results, there will be quibbling over whether you built it properly. There are always details left out of the original paper that can be appealed to. The original author is king of the field, and people will simply ignore your contrarian findings.

For theoretical/numerical results, it is for sure valid to criticize assumptions. But in the longer time frame, the question shouldn't be "Are these assumptions valid?" but "Is there experimental evidence to support these calculations?" With a lot of the papers published (including mine), I can assure you that no one will ever be able to construct an experiment to (in)validate my results. And that was true for probably over half of the theoretical/numerical results in my field. We were all publishing things that most of us believed could never be connected meaningfully to the physical world. We're not talking string theory or high energy stuff - more like material properties.

The goal was to publish, and convince others. Not to understand reality. Hence, more like literature.


WezTerm rules…


The TeX version number asymptotically approaches pi... each new version adds another digit, which also makes versions comparable. Clearly this is the better way...
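
A toy illustration of the "comparable" part (version list quoted from memory, so treat it as an assumption): because each release is a longer decimal prefix of pi, plain numeric comparison already orders the releases.

    # Successive TeX versions as decimal prefixes of pi; numeric order
    # matches release order (assumed version list, quoted from memory).
    from decimal import Decimal

    tex_versions = ["3.14", "3.141", "3.1415", "3.14159",
                    "3.141592", "3.1415926", "3.14159265"]
    as_numbers = [Decimal(v) for v in tex_versions]

    assert as_numbers == sorted(as_numbers)   # already in release order
    print("latest:", max(as_numbers))         # 3.14159265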


I guess your Tesla will be the least of your problems in the event of an apocalypse.


A low-maintenance vehicle not dependent on oil?


How's this making MS any money?


More Edge usage tends to translate to more Bing queries and clicks, more data, more targeting, and more cross-selling of their services. Google as the default search in Chrome, Safari, and Firefox is a major problem for the #2 search engine.

