There is a co-evolution between compilers, programming languages, and CPUs (or, more generally, ASICs). I consider it quite plausible that one could develop a programming language that makes it sufficiently easy for a programmer to write performant code for an Itanium, but such a language would look different from C or C++.
> The Apache #’s pretty much give the game away: An Itanium clocked 50% higher was losing to a 2yo Alpha by about 20% on throughput at peak.
This is not just a benchmark of the CPUs, but also of the compilers involved. It is well known that it was very hard to write a compiler that generated code capable of harnessing the optimization potential of Itanium's instruction set.
In the second chart, "Software developer median salary", the bar for Germany is very likely wrong: I am quite sure that wages there are a lot lower (in the five-figure range rather than the low six figures).
> I've never heard a cogent, logically consistent argument from the "knowledge should be free" crowd about why creators of this knowledge shouldn't be compensated.
And I have never heard a cogent, logically consistent argument from the "intellectual rights" crowd about why this justifies the use of violence against the "knowledge should be free" crowd.
> you're also part of the "I'm going to call anything I don't like 'violence'" crowd.
Every law is a control program for violence; this is what laws are for. While I agree that "no laws" doesn't work, I am part of the "violence should be used with utter care" and "violence is the absolute last resort" crowds. :-)
Now guess which countries staged a coup against the democratically elected Prime Minister of Iran in 1953, which eventually brought the current oppressive regime to power:
> I notice that a lot of people seem to only focus on the things that AI can't do or the cases where it breaks, and seem unwilling or incapable of focusing on things it can do.
I might be one of these people, but in my opinion one should not concentrate on the things that AI can do, but rather ask, for the things where an AI might be of help to you, how many of them:
- it actually does well
- it "can" only do in a very broken way
- it can't do at all
At least for the things that I am interested in an AI doing for me, the record is rather bad.
Just because AI doesn't work for you doesn't mean it doesn't work for other people. Ozempic may have no effect on you, or may even be harmful, but it's a godsend for many others. Acknowledge that, instead of blindly insisting on your own use cases. It's fine to resist the hype, but it's foolish to be willfully ignorant.
> Many smart humans fail at critical thinking. I've seen people with masters fail at spotting hallucinations in elementary level word problems.
This is like lamenting that a person who has a doctoral degree in, say, mathematics or physics often doesn't have more than basic knowledge of, for example, medicine or pharmacy.
> This is like lamenting that a person who has a doctoral degree in, say, mathematics or physics often doesn't have more than basic knowledge of, for example, medicine or pharmacy.
It was word problems, not rocket science. That tells you a lot about human intelligence. We're much less smart than we imagine, and most of our intelligence is based on book learning, not original discovery. Causal reasoning is based on learning rules and checking their exceptions. Truly novel ideation is actually rare.
We spent years implementing transformers in a naive way until someone figured out you can do it with much less memory (FlashAttention). That was such a facepalm: a trivial idea that thousands of PhDs missed. And the code is just 3 for loops, with a multiplication, a sum, and an exponential. An algorithm that fits on a napkin in its abstract form.
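For the curious, here is a minimal NumPy sketch of that napkin version, i.e. the online-softmax trick at the heart of FlashAttention, for a single query row. Function and variable names are mine, and the real kernel tiles Q, K, and V into blocks that fit in SRAM, but the running max/sum/accumulator idea is the same:

```python
import numpy as np

def attention_row_streaming(q, K, V):
    """Computes softmax(q @ K.T) @ V for one query row without ever
    materializing the full score vector -- the FlashAttention idea."""
    m = -np.inf                  # running max of the scores (numerical stability)
    s = 0.0                      # running sum of exp(score - m)
    acc = np.zeros(V.shape[1])   # running weighted sum of value rows

    for k, v in zip(K, V):                             # the loop over keys/values
        score = float(q @ k)                           # the multiplication
        m_new = max(m, score)
        scale = np.exp(m - m_new) if s > 0.0 else 0.0  # rescale old partials to the new max
        p = np.exp(score - m_new)                      # the exponential
        s = s * scale + p                              # the sum
        acc = acc * scale + p * v
        m = m_new
    return acc / s

# Sanity check against the naive version.
rng = np.random.default_rng(0)
q, K, V = rng.normal(size=8), rng.normal(size=(16, 8)), rng.normal(size=(16, 8))
scores = q @ K.T
w = np.exp(scores - scores.max())
assert np.allclose(attention_row_streaming(q, K, V), (w / w.sum()) @ V)
```

Looping over query rows gives the second loop and the dot product hides the third; the point is that the full N x N attention matrix never exists in memory.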
Doesn't this lead you to, perhaps, question the category and measure of "intelligence" in general, especially how it is mobilized in this kind of context? This very angle does a lot to point out the contradictions in some speculative metaphysical category of "intelligence" or "being smart", but then you just seem to accept it with a particular kind of fatalism.
Why not take away from this that "intelligence" is a word that only obtains relative to a particular society, namely one which values some kinds of behavior and speech over others. "Intelligence" is something important to society; it's the individual who negotiates (or not) the way they think and learn with what this particular signifier connects to at a given place and time.
Like, I assume you don't agree, but just perhaps, if we used our "intelligence" here, we could come to some different conclusions! Everyone is just dying to be a mid-20th-century behaviorist now; I just don't understand it!
Yes, I think intelligence is social and we kind of write off the social part and prefer to think in heroic terms, like "Einstein was so smart!"
I prefer to use the concept of search instead: it is better defined, in terms of a search space and a goal space. It doesn't hide the environment (the external part of intelligence) or the learning process.
> And the code is just 3 for loops, with a multiplication, a sum and an exponential.
All invented/discovered and formalized by humans. That we found so much (unexpected) power in such simple abstractions is not a failure but a testament to the absolute ingenuity of human pursuit of knowledge.
The mistake is that we're overestimating isolated discoveries and underestimating their second-order effects.
> a testament to the absolute ingenuity of human pursuit of knowledge
I think it is more like searching and stumbling onto some great idea than pure-brain ingenuity. That is why searching and social collaboration are essential, and why I say we're not that smart individually; we search together. It's slow, it took us years to get to the Flash version of attention, but we get there: someone finds their way onto a major discovery eventually.
It took humanity 200k years to accumulate our current level of understanding, and if we lost it, it would take us another 200k years. Not even a whole human generation is that smart. It's also why I don't fault LLMs for mass-learning from human text. We do the same thing; 99% is inherited knowledge. The whole process of knowledge discovery moves slowly, and over large populations.
It's a failure in that, for decades, we thought we had to theorize in circles about all kinds of made-up prerequisites for consciousness to exist, rather than just leverage a bit of looping evolution like the universe did.
Well, the article began by talking about how these data-training companies previously would just hire generalists for $2/hr, but now they're hiring degree holders. And it mentions that smart people will be necessary. I'm just saying that degree holding != smart, and it's a trap that those data-training companies have to avoid.
> It depends on your definition of smart. I think that holding a degree != smart.
You wrote:
> I've seen people with masters fail at spotting hallucinations in elementary level word problems.
I wanted to express that having a master's in some (even complicated) subject does not make you a master at [pun intended] spotting hallucinations. To give evidence for this statement, I gave a different, more down-to-earth example of a similar situation.
Q: A farmer has 72 chickens. He sells 15 chickens at the market and buys 8 new chicks. Later that week, a fox sneaks into the coop and eats 6 chickens. How many chickens could the farmer sell at the market tomorrow?
AI Answer: The farmer started with 72 chickens. After selling 15, he had 57 chickens left. Then he bought 8 new chicks, bringing the total to 65. Finally, the fox ate 6 chickens, so we subtract 6 from 65. This gives us 59 chickens. Therefore, the farmer now has 59 chickens that he could sell at the market tomorrow.
--
You'd expect someone who can read/understand proofs to be able to spot the flaw in the logic: that it takes longer than one week for chicks to turn into chickens.
> You'd expect someone who can read/understand proofs to be able to spot the flaw in the logic: that it takes longer than one week for chicks to turn into chickens.
Rather, I'd assume that someone who is capable of spotting the flaw in the logic has a decent knowledge of the English language (in this case, of the difference in meaning between "chick" and "chicken").
Many people who are good mathematicians (i.e. capable of "reading/understanding proofs", as you put it) are not native English speakers and don't have a great L2 level of English either.
> But I was told that humans have this thing called "general intelligence", which means they should be capable of doing both math and English!
You confuse "intelligence" with "knowledge". To stick with your example: there exist quite a lot of highly intelligent people on earth who don't know English, or barely do.
Some native English speakers might still question that statement subconsciously, so let me make it clearer for them: there are many highly intelligent people in the world who don't speak the Rarámuri language.
As a layman, I have no clue at what point a chick turns into a chicken. I also think this isn't even answerable, because "new chick" doesn't really imply "newborn" but only means "new to the farmer", so the chicks could be at an age where they would be chickens a week later, no?
I still call my 12-year-old cat a "kitty". If someone marked my answer as incorrect because "chicks aren't chickens yet", I would think they're wasting their time with riddles instead of actual intelligence testing. Besides, if the chicks were sellable to the farmer, why the hell wouldn't the farmer be able to sell them?
The OP there also has a pretty bad riddle (due to a grammatical error that completely changes the meaning and makes the intended solution nonsensical, and a solution that many people wouldn’t even have heard of).
Exactly! I read that riddle and thought "a couple of islands over the international date line" solely because of the last line, but still had no idea what those islands thousands of miles away from me were named. Might as well make the riddle "who is their little brother?" and make the answer "Fairway Rock", if niche knowledge is your goal. Which, completely to GPT-o1's credit, it did solve in a single prompt when I asked!
> Besides, if the chicks were sellable to the farmer, why the hell wouldn't the farmer be able to sell them?
I think maybe the original poster is making some sort of additional assumption that the farmer must be selling chickens as meat at the market and a chick wouldn't be sold for that purpose until it's a mature chicken?
(Of course, depending on how you interpret the question, a chick is a chicken (the species), and there's nothing inherently preventing reselling the chicks, so I don't really understand why OP thinks the AI answer is clearly, objectively wrong. It seems more like a matter of interpretation.)
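For what it's worth, the two readings give different totals. A throwaway Python snippet (numbers from the riddle, variable names mine) makes the gap explicit:

```python
# The farmer's flock under the two readings debated above.
start, sold, bought_chicks, eaten = 72, 15, 8, 6

# Reading 1: a chick is a chicken (the species), so everything is sellable.
sellable_inclusive = start - sold + bought_chicks - eaten   # 59

# Reading 2: chicks aren't market-ready within a week (and, assuming the
# fox ate adult birds, the new chicks don't change the sellable count).
sellable_strict = start - sold - eaten                      # 51

print(sellable_inclusive, sellable_strict)  # 59 51
```

So the AI's 59 is right under one reading and 51 under the other, which is exactly why this reads as a matter of interpretation.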
After posting, I realized that the farmer bought some chicks, so it could be interpreted that way. I should have modified it to say that 6 chickens hatched.
Anyways, this thread is a perfect example of the chaotic datasets that are being used to train FMs. These arguments over whether it's reasonable to assume a chick could mature into a chicken within a week happen every day and have been taking place for years. Safe to say a billion dollars has been spent on datasets to train FMs where everybody has a different interpretation and the datasets are not aligned.
When an educated person misses this question, it's not because the temporal logic is out of their reach. It's because they scanned the problem and answered quickly. They're pattern matching to a type of problem that wouldn't include the "tomorrow/next week" trick, and then giving the correct answer to that.
Imo it's evidence that humans make assumptions and aren't always thorough, more than evidence of smart people being unable to perform elementary logic.
The humans were prompted to read the AI responses very carefully because their hallucinations are very good at convincing you with words. It takes a certain skillset to question every word that comes out of a language model because most people will go “hmm yeah that logic seems right”. So hiring “smart” people is insufficient, you need very paranoid people who question every assumption.
Implying there isn’t a market for chickens that are chicks? Clearly there is. The question literally states that the farmer bought chicks, so logically they could go back on the market. They don’t need to be older.
It did a better job of explaining that there is ambiguity in the question, but still went ahead with an arbitrary assumption in order to answer it. I think it is fair to say it is right, but so was the other attempt. Each interpretation is quite valid.
"Most right" would have been to ask questions about what is being asked instead of trying to answer an incomplete question. But rarely is the human even willing to do that as it is bizarrely seen as a show of weakness or something. An LLM is only as good as its training data, unfortunately.
I agree both got it right, in the sense that it wouldn't be a stupid thing for a human to do. If there had been a follow-up from someone, I'm sure the more basic LLM would have been able to adjust.
Regardless, I think it's a good showing that models are increasingly able to solve these "gotcha" questions, even though I think it's not hugely useful. Partly because I think it's a poor complaint and an easy shutdown.
Or the other one: a flock of 8 birds is sitting on a fence. The farmer shoots 1. How many are left? 8 - 1 is 7, but the answer is zero, because the gunshot scared the rest of them off. Fwiw, ChatGPT says zero.
At some point, we decided that compilers were good enough at converting code into assembly to just use them. Even if an absolute master could write better assembly than the compiler, we moved over to using compilers because of the advantages they offered.
The question is for what. For the level of interaction that many day-to-day tasks require, ChatGPT meets the bar. When you're going grocery shopping, how often do you get stopped at the door by a security guard who won't let you pass unless you answer their riddle? The A in AI stands for artificial, so it's going to look different from human intelligence, but we're at a point where I can throw some words at the computer and it will generate an essay relevant to the words I threw at it. It may not get every little detail right, but I'm amazed by that, because I've had meaningful interactions via text with humans who wouldn't have caught the chicks-versus-chickens gotcha.
Is ChatGPT an all-knowing and infallible oracle? Clearly not. But holding it to a higher standard than we hold other humans to is an unfair test of its abilities.
If you'd said "hens" you'd have a stronger point, but then you'd need to be talking about chicks and hens (and they could still cross whatever adulthood threshold you like within the week, as you didn't specify how young they are - "new" could just mean new to the farmer).
Do you believe that holding a degree is dumb, or just that holding a degree is an insufficient condition for smartness? Technically, what you wrote says the former.
Yeah it’s a virtue signal that one has written some language that doesn’t focus on first person anecdote. It’s a sign someone is a hard drive of prior knowledge.
We leaned on oral tradition to pass down knowledge, as written literacy, paper, and writing tools were hard to come by until the last century. It was never about the student but the future. Still the same today; one student isn't propping up reality.
People think learning a linguistic style means discovery of net new knowledge.
I think many people like to believe that solving puzzles will somehow make them better at combinatorics. Lateral skill transfer in non-motor skills (e.g. office work, academic work, etc.) may not be any better than in motor skills. It's easier to convince people that playing soccer every day wouldn't make them any better at cricket, or even hockey.
Wealth, network, and fame transfer incredibly well between fields, possibly better than anything else. That should be accounted for when reasoning about success in disparate fields. In addition to luck, of course.
Kobe Bryant played soccer, Michael Jordan played baseball, LeBron played football... it actually makes you even better, because you learn non-traditional strategies to apply to the other sport you're playing.
> It is a crazy concept, because taxes are coerced by governments under the threat of violence, whereas the freedoms of FOSS software are intended to be entirely non-coercive.
When in doubt, you will have to enforce the freedoms of FOSS by going to court (i.e. by using the governmental "violence enforcement system"). On the other hand, if you pay your taxes "voluntarily", you won't be coerced by the government.
In other words: in both cases threats of violence are involved.
I'd rather argue that every hyped topic is polarizing; your argument could be adapted to basically any hugely hyped topic.