I didn’t know there was internet censorship in Hitler’s time, preventing people from talking about the dictatorship. Can you back that up with a source? I think you might have to use an LLM to find one.
In my opinion we need to elevate doctors and invest heavily in nurse practitioners and the like. I don't think you need to see a doctor all the time; for procedures or surgery, yes, or if you need someone to actually investigate something.
But there is no point in pretending doctor numbers are going to surge; the current administration is trying to make becoming a doctor harder, not easier. Doctors also avoid certain fields because they don't pay enough. The supply is also artificially restricted. It's a system so fucked that it's better to ignore it and pump out NPs.
It definitely tripped my "self-important bullshit author" detector, which unfortunately isn't a very useful heuristic, since it requires me to read the whole article first.
But anyway, the examples used are not in any way like one another. The "futurist" in particular fundamentally lacks the "they do things well and then get sloppy" pattern, which leads me to believe the author has a particular axe to grind about tech progress.
For a lot of the other examples, the entire point is that the person knows they are being sloppy; they know they're letting issues slip through the cracks. If you know you're doing a bad thing, what use is the analogy to a heuristic? It's not a heuristic to be sloppy, it's just being sloppy.
Once again, the article is lampshading the naive fallacy that if a classifier gets a large percentage of the input distribution right, the classifier is good. The basic literature on designing such classifiers recognizes these fallacies and knows how to correct for them.
And part of that correction lies in the fact that different domains place different costs on false positives and false negatives.
He describes different scenarios, and the apparent contradiction is resolved by weighing the consequences of getting things wrong. The futurist can say whatever he likes, since nothing happens whether he gets things right or wrong. In the cases with real stakes, where getting things wrong has an immense cost, you just have to accept that you will cry wolf a lot of the time.
In the doctor scenario, suppose the doctor is highly skilled and can tell with 99% accuracy whether a patient with a certain set of symptoms has cancer (99% of sick patients get flagged, 99% of healthy ones get cleared). Thing is, only 1 person in 1,000 actually has it, which means even this amazing doctor will tell roughly ten healthy people they have cancer for every genuinely sick person. Is the doctor bad at their job? No, they're excellent: the inconvenience caused to those people is dwarfed by the cost of letting a sick person go undiagnosed.
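A quick back-of-the-envelope check in Python (a sketch only; I'm assuming "99% accuracy" means both 99% sensitivity and 99% specificity, and all the numbers below follow from that):

    # Base-rate arithmetic for the doctor scenario above.
    sensitivity = 0.99     # P(flagged | sick)
    specificity = 0.99     # P(cleared | healthy)
    prevalence = 1 / 1000  # P(sick)

    true_pos = prevalence * sensitivity                # ~0.00099
    false_pos = (1 - prevalence) * (1 - specificity)   # ~0.00999

    ppv = true_pos / (true_pos + false_pos)
    print(f"P(sick | flagged) = {ppv:.1%}")            # ~9.0%
    print(f"healthy flagged per sick person = {false_pos / true_pos:.1f}")  # ~10.1

So even a 99%-accurate doctor is wrong roughly nine times out of ten when they say "cancer", purely because of the base rate.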
If a factory is making Happy Meal toys, and they determine that 1 out of every 1,000 toys they produce is faulty, should they invest in a similar screening process? No; the cost of the process, plus the cost of handling false positives, would far outweigh the minor problem of a child occasionally getting a broken toy.
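To make the cost asymmetry concrete, here's a toy comparison (every dollar figure is invented for illustration; only the relative magnitudes matter):

    # Per-item expected cost of adding a screening step, for made-up costs.
    def expected_cost(prevalence, sens, spec, cost_miss, cost_false_alarm, cost_screen):
        fn = prevalence * (1 - sens)        # real problem slips through
        fp = (1 - prevalence) * (1 - spec)  # false alarm
        return cost_screen + fn * cost_miss + fp * cost_false_alarm

    # Cancer: a miss is catastrophic, so screening pays for itself.
    screened = expected_cost(1/1000, 0.99, 0.99,
                             cost_miss=1_000_000, cost_false_alarm=500, cost_screen=50)
    unscreened = (1/1000) * 1_000_000
    print(screened, unscreened)  # ~65 vs 1000

    # Happy Meal toy: a miss costs ~$2, so the same screening is a waste.
    screened = expected_cost(1/1000, 0.99, 0.99,
                             cost_miss=2, cost_false_alarm=1, cost_screen=0.50)
    unscreened = (1/1000) * 2
    print(screened, unscreened)  # ~0.51 vs 0.002

Identical error rates flip from "obviously worth it" to "obviously not" just by changing what a miss costs.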
I am sorry, I did not mean to be rude, I apologize.
I just wanted to say that I still think the article doesn't really show anything surprising to people who took an undergrad stats/data science course, and the apparent 'conundrums' are well understood.
There are plenty of blogs, plenty of obvious low-quality spam to block, plenty of features that could enable allowlists and blocklists. To think for a second that Google couldn't make the search experience significantly better at the snap of a finger is to live in a fantasy world.
Sure, sure, except for this minor issue that the argument I was responding to didn't mention revenue; they talked about the state of the internet. So why, again, are you responding to my counter with a straw man?
I have a very simple counter argument: I've tried it and it's not useful. Maybe it is useful for you. Maybe even the things you're using it for are not trivial or better served by a different tool. That's fine, I don't mind you using a tool far away from my codebase and dependency tree. It has not been useful for me, and it's very unlikely it's ever going to be.
Except that's not the argument people are making. They are arguing it will replace humans. They are arguing it will do research-level mathematics. They are arguing this is the start of AGI. So if you want to put your head in the sand and ignore the greater message plastered everywhere, then perhaps some self-reflection is warranted.
You have to learn to filter out the people who say "it's going to replace human experts" and listen to the people who say "I'm a human expert and this stuff is useful to me in these ways".
> I have a very simple counter argument: I've tried it and it's not useful. Maybe it is useful for you.
Indeed, but the tedious naysaying this is arguing against is the claim that AI isn't good, full stop. Those people aren't saying "I tried it and it's not for me, but I can see why other people would like it".
It might be natural to immediately assume the majority was cheating, but as you rightly point out, if they were cheating Magnus would have lost. Human players cannot compete with even a couple of hours of compute on Stockfish, let alone 24 hours.
Have you ever applied for a grant? Do you understand how science in the USA is funded? How it is unique?
Will the rest of the world absorb the scientists in the US? Probably not; that is honestly their mistake and a missed opportunity, but all the same, probably not. Fine, so you're right? Scientists will just stick it out? No.
The best scientists will definitely leave. Those are the very ones we always wanted to attract. They could have always left, but didn't, because the US was the best place for them. Now they will leave. Everyone else will try to leave, or leave for industry. Even if you only lose 20% of your scientists to other countries and to sectors where they are no longer doing productive scientific work, that is a massive blow to progress.
Worst case scenario is China wakes up from its xenophobia and uses this opportunity to replay the US science strategy. Suddenly 20-50% of US scientists can leave for China.
This heavily trips the crackpot index:
1. Zero understanding of the philosophy of mathematics
2. Asking the public and AI models for peer review, instead of the scientific community via publication or a physics conference
3. Association with blockchain NFTs, a known "technology" whose only application is grifting
4. The very first comment is yet another appeal to an LLM to try to understand the gibberish
The world is doomed; I've never hated people more than I do now. If you use LLMs for anything beyond stochastic autocomplete, I don't consider you a professional or competent in any way.
This is not a knee-jerk mention of Nazis; it is a well-known fact that after World War II the US changed strategy to invest in research and pull the best talent from around the world. That was in part motivated by German scientists fleeing Nazi Germany.