scarmig's comments

The level of taxation is not the most relevant thing, even though it attracts the most attention. There are countless socially valuable things that tax revenue can be spent on.

The core issue is that Americans don't get back much for their level of taxation. The USA is a fundamentally dog-eat-dog place. Given that, working Americans pay an absurd amount in taxes.

I pay over 40% of my income in local, state, and federal taxes. Despite that, it's not hard to imagine being thrown onto the streets to die after a single serious setback. Where is it all going? Not to me, and not to any services I'll be able to access in times of difficulty.


Scale is all you need.

For anyone who doesn't know, CTE = common table expression: a named, temporary result set defined within a query that can then be referenced from other parts of that query.
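
A minimal sketch of what that looks like in practice, run here through Python's sqlite3 (the table and column names are invented for illustration):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER, total REAL, placed TEXT)")
    conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                     [(1, 9.50, "2024-01-02"), (2, 20.00, "2024-03-05")])

    query = """
    WITH recent AS (                          -- the CTE: a named temporary result set
        SELECT id, total FROM orders WHERE placed >= '2024-02-01'
    )
    SELECT COUNT(*), AVG(total) FROM recent   -- referenced like an ordinary table
    """
    print(conn.execute(query).fetchone())     # (1, 20.0)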

In this analogy, you're also asking permission from the IRB to ride a bike or skateboard.

In the context of universities, the equivalent of riding a bike or a skateboard here is having people fill out surveys after events, or piloting new services offered by a student health clinic.

(I guess the point of analogies like these is to force us to sweat the details and examples.)


Cosma has a PowerPoint that may be more accessible to a general audience:

https://www.stat.cmu.edu/~cshalizi/2010-10-18-Meetup.pdf


Exactly: the thing that is "universal" about "universal power laws" is that the evidence for them is weak, not because people did bad experiments, but because they pulled the data analysis method out of their ass.

My one regret from my scientific career was this paper:

https://arxiv.org/abs/cond-mat/9512055

where my coauthor asked me, "You want to be a theoretician; do you have an explanation?" and I didn't. (I did have some ideas, but they would have required knowing more about the structure of the crumpled balls, and more experiments, to give any real answer.)

Actually, I had reservations about the analysis method (I got different answers trying two reasonable things I'd seen other people do; a sketch of that kind of disagreement is at the end of this comment). It would have been a better paper had I actually addressed that tension, but my coauthor didn't want to go there.

Another interesting thing about my paper is that segmentation was a problem, as it was in that Science paper. I didn't do anywhere near their level of sensitivity analysis, but I could have gotten different answers if I'd defined the integration windows around the pops differently. The very loud pops probably also had echoes; that problem was awful when I tried to do the crumpling in an ordinary room, worse still when I made my own small 'soundproof' box, and much better in the anechoic chamber where I took the real measurements.

The mystery of the universality is greatly diminished once you can throw out a large number of the papers, though...
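
Here's a minimal sketch (data, cutoff, and bin counts all invented) of the kind of disagreement I mean: fit the same heavy-tailed sample two "reasonable" ways, a least-squares line on a log-log histogram versus the maximum-likelihood estimator, and you get different exponents. The sample is log-normal, which is not a power law at all but often looks straight enough on a log-log plot:

    import numpy as np

    rng = np.random.default_rng(0)

    # Stand-in data: log-normal samples, not a power law, but often
    # straight enough on a log-log plot to fool the eye.
    x = rng.lognormal(mean=0.0, sigma=1.0, size=5000)
    x_min = np.quantile(x, 0.5)   # arbitrary tail cutoff, itself a free choice
    tail = x[x >= x_min]

    # Method 1: least squares on a log-log histogram (the common shortcut).
    counts, edges = np.histogram(tail, bins=np.logspace(
        np.log10(tail.min()), np.log10(tail.max()), 30))
    density = counts / np.diff(edges)          # normalize by bin width
    centers = np.sqrt(edges[:-1] * edges[1:])  # geometric bin centers
    keep = counts > 0
    slope, _ = np.polyfit(np.log(centers[keep]), np.log(density[keep]), 1)
    print(f"least-squares exponent:      {-slope:.2f}")

    # Method 2: maximum likelihood (the Hill estimator for a continuous
    # power-law tail, as in Clauset, Shalizi & Newman).
    alpha_ml = 1 + len(tail) / np.sum(np.log(tail / x_min))
    print(f"maximum-likelihood exponent: {alpha_ml:.2f}")

Neither number means much, because the underlying distribution isn't a power law; that's exactly the tension Cosma's slides upthread are about.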


> But I think it's inappropriate to claim that models like R1 are "good at deductive or inductive reasoning" when that is demonstrably not true, they are incapable of even the simplest "out-of-distribution" deductive reasoning: https://xcancel.com/JJitsev/status/1883158738661691878

Your link says that R1, not all models like R1, fails at generalization.

Of particular note:

> We expose DeepSeek R1 to the variations of AIW Friends problem and compare model behavior to o1-preview, o1-mini and Claude 3.5 Sonnet. o1-preview handles the problem robustly, DeepSeek R1 shows strong fluctuations across variations with distribution very similar to o1-mini.


I'd expect that OpenAI's stronger reasoning models also don't generalize too far outside of the areas they are trained for. At the end of the day these are still just LLMs, trying to predict continuations, and how well they do is going to depend on how well the problem at hand matches their training data.

Perhaps the type of RL used to train them also has an effect on generalization, but choice of training data has to play a large part.


Nobody generalizes very far outside the areas they're trained for. That distance, "far", is probably shorter for today's state of the art than for humans, but the mere presence of failure modes doesn't mean anything.

The way the authors talk about LLMs really rubs me the wrong way. They spend more of the paper talking up the 'claims' about LLMs that they are going to debunk than actually doing any interesting study.

They came into this with the assumption that LLMs are just a cheap trick. As a result, they deliberately searched for an example of failure, rather than trying to do an honest assessment of generalization capabilities.


What the hype crowd doesn't get is that for most people, "a tool that randomly breaks" is not useful.

That a tool can break, or that the company manufacturing it lies about its abilities, is annoying but does not imply that the tool is useless.

I run into LLM "reasoning" failures several times a day, yet I still find them useful.


>They came into this with the assumption that LLMs are just a cheap trick. As a result, they deliberately searched for an example of failure, rather than trying to do an honest assessment of generalization capabilities.

And lo and behold, they still found a glaring failure. You can't fault them for not buying into the hype.


But it is still dishonest to declare reasoning LLMs a scam simply because you searched for a failure mode.

If given a few hundred tries, I bet I could find an example where you reason poorly too. Wikipedia has a whole list of common failure modes of human reasoning: https://en.wikipedia.org/wiki/List_of_fallacies


Well, given that the success rate is no more than 90% even in the best cases, you could probably find a failure in about 10 tries: with a 10% failure rate, the expected number of independent tries to hit the first failure is 1/0.1 = 10. The only exception is o1-preview. And this is just a simple substitution of parameters.

On the one hand, you have DEI policies like Harvard's college admissions or the air traffic controller hiring scandal. And of course DEI advocates always claim that these are obvious perversions of True DEI, which is only about expanding opportunities and never about discriminating against disfavored groups of people.

On the other hand, the tricky bit is that it's only in retrospect that everyone agrees those were terrible perversions of DEI. While they're actually in place, anyone who criticizes them is considered a racist neo-Nazi.


Has there been a big effort to call the FAA whistleblowers Nazis?

I learned about it, went "oh, that sucks", but never felt like they were being racist. They have a strong evidentiary basis. It's not like some red-hat guy screeching about losing his job without being able to show cause.


If 90% of nurses are women, is nursing culture sexist against men?

Yes.

It's somewhat odd to represent a community as being right wing when the worst thing to come from it was a trans vegan murder cult. Most "rationalists" vote Democrat, and if the franchise were limited to them, Harris would have won in a 50-state landslide.

The complaint here seems to be that rationalists don't take progressive pieties as axiomatic.


"trans vegan murder cult" is the best band name ever


> It's somewhat odd to represent a community as being right wing when the worst thing to come from it was a trans vegan murder cult

I was referring to "The Motte", which emerged after the SlateStarCodex subreddit finally banned "culture war" topics. Scott announced it in this post: https://slatestarcodex.com/2019/02/22/rip-culture-war-thread...

The Ziz cult did not emerge from The Motte. I don't know why you came to that conclusion.

> Most "rationalists" vote Democrat,

Scott Alexander (of SlateStarCodex) did surveys of his audience. Interestingly, the culture war thread participants were split almost 50:50 between those identifying as left-wing and those identifying as right-wing.

Following the ban on discussion of culture war topics, many of the right-wing participants left for The Motte, which encouraged these conversations.

That's how there came to be a right-wing offshoot of the rationalist community.

The history is all out there. I'm surprised how many people are doubting me about this. You can read the origin story right on Scott's blog, and the Reddit post where they discuss their problems with running afoul of Reddit's content policies (necessitating a move off-platform) is still accessible: https://old.reddit.com/r/TheMotte/comments/uaoyng/meta_like_...

> The complaint here seems to be that rationalists don't take progressive pieties as axiomatic.

No, you're putting words in my mouth. I'm not complaining about a refusal to take progressive pieties as axiomatic; I'm relaying the history of rationalist communities. It's surprising to see all of the denial about the topic.


Being a trans vegan doesn't automatically make you left wing. Nor does voting Democrat. Progressivism is a complex set of ideals, just as conservatism is a lot more than whatever the Republican Party is doing today.


>The complaint here seems to be that rationalists don't take progressive pieties as axiomatic.

A trans vegan gang murdering police officers is what's come out of this milieu.

I don't see how anyone can say they aren't taking "progressive pieties as axiomatic".

The OP is just taking the "everything I don't like is fascist" trope to its natural conclusion. Up next: Stalin actually a Nazi.


> The OP is just taking the "everything I don't like is fascist" trope to its natural conclusion.

Historically, a good 90% of the time I have seen this said, the person or group in question turned out to actually be fascists later on; they had just packaged their fascism in nicer words at the time of the accusation. Those saying "everything I don't like is fascist" either a) assumed the claim could not be true without bothering to think about what they read, or b) actually liked the fascist arguments and didn't want them called what they are.

There is a long history of "no one is fascist until they actually do a Nazi salute and literally pay extremists," of "no one is sexist even after they have stated their opinions on female inferiority again and again," and of "no one is racist even as they literally just said that."


It says something very worrying about society that people only think Elon is a Nazi because he did the Nazi salute, when everyone was saying it well before then. What if someone is a Nazi and is smart enough to never do a salute? We might put them in charge of the country?

> The OP is just taking the "everything I don't like is fascist" trope to its natural conclusion.

The right-wing rationalist community (The Motte) arose when Slate Star Codex finally banned culture war topics. Scott Alexander wrote about it: https://slatestarcodex.com/2019/02/22/rip-culture-war-thread...

There's also a long history of neoreactionary factions of the rationalist community as well as a fascination with fascist ideals from the likes of Curtis Yarvin.

There's some major retconning going on in this thread where people try to write all of this out of the history of rationalist communities. Either that or people weren't aware, but are resistant to the notion that it could have happened.


Yarvin is authoritarian but very far from fascism, arguably farther than Stalin was.

> The OP is just taking the "everything I don't like is fascist" trope to its natural conclusion. Up next: Stalin actually a Nazi.

That's a terminologically wrong yet practically sensible conclusion. Some European countries do in fact ban both communist and Nazi ideologies and the public display of their symbols, as their authoritarian and genocidal tendencies are incompatible with the democratic principles in those countries' constitutions.


Countries like Hungary.

Not a place I think anyone should try and emulate.


This is what arguing in bad faith looks like.

The list of European countries that ban Nazi symbols includes Austria, the Czech Republic, France, Germany, Hungary, Poland, Romania, Italy, Latvia, Lithuania, Luxembourg, Slovakia, and Sweden.

When people talk about EU countries banning Nazi symbols, they are always referring primarily to Germany. It is the archetypal example of "countries that ban Nazi symbols".

If you want to focus on one country from that list, which is a valid thing to do, you either need to pick the archetype, or acknowledge it and then say why you're focusing on another example from the list instead.

If, instead, you immediately pick the one example from that list that suits your narrative, while not acknowledging that every single other example doesn't, that is a bad-faith argument.

In any case, recent politics aside, Hungary is an amazing country. I'm not sure about emigrating there, but I definitely recommend visiting.


The point was about banning both Soviet and Nazi symbols as equally evil.


Goulash, Tokay wine, vizsla dogs, Franz Liszt. A terrible people.

Edit: Ervin Laszlo, Zsa Zsa Gabor, Peter Lorre, Harry Houdini, a wide selection of pastries. :) Also Viktor Orban, but what can you do.


Probably it's something like "give feedback that's on average slightly more correct than incorrect," though you'd get more signal from perfect feedback.

That said, I suspect the signal is very weak even today and probably not too useful except for learning about human stylistic preferences.
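
As a toy illustration (all numbers invented) of why "slightly more correct than incorrect" still adds up to usable signal:

    import random

    random.seed(0)

    P_CORRECT = 0.55   # hypothetical rater: right only 55% of the time
    N_RATINGS = 10_000

    # Each rating is nearly a coin flip, but the aggregate preference
    # for the truly better answer is unmistakable at this volume.
    wins = sum(random.random() < P_CORRECT for _ in range(N_RATINGS))
    print(f"better answer preferred in {wins / N_RATINGS:.1%} of ratings")

The per-rating edge is tiny, so extracting it takes a lot of comparisons, which matches the intuition that the signal is weak but nonzero.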

