Hacker News | bigbuppo's comments

Meanwhile, somebody put 8192 arm cores on a chip and ran a risc-v emulator on top of that which emulated a 6502 which then emulated a 288 core xeon and it used 0.01% of the power and outperformed the Intel chip in every other metric 10:1, probably.

Only slightly related, but six years ago I was able to run 400 ZX Spectrum (Z80) emulator instances simultaneously on an AWS graphics workstation.

https://youtu.be/BjeVzEQW4C8?si=0I7UGU0Xz5WUT4ek


I remember that. Neat stuff.

You know, a link would be great for this comment.

Well, Linux was booted on an Intel 4004 emulating a MIPS R3000. Looks like it took 4.76 days to boot. I don't believe this article was AI-fabricated.

https://arstechnica.com/gadgets/2024/09/hacker-boots-linux-o...


Somehow, that still doesn't sound real, but it looks like it is. Wow. Though that one was written by their recently fired hallucination writer.


Ah, nice to see a fellow lover of the finest news publication on the planet.

Too risky.

My inclination is to simply close the window as soon as there's a popup of any sort. If someone did that to you in public, you would be within your rights to punch them in the face as an act of self-defense.

If Amazon changes the API, they've angered their entire customer base that relies on it. Sure, some will stick around if they're fully entrenched in the ecosystem, but others will be able to leave, and they will, because hey, S3 is a standard-ish API.

What in the LinkedIn sudden-B2B-marketing-insight was that.

Maybe make that intelligence per token per relative unit of hardware per watt. If you're burning 30 tons of coal to be 0.0000000001% better than the 5 tons of coal option because you're throwing more hardware at it, well, it's not much of a real improvement.
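The tradeoff above can be sketched as a quick back-of-the-envelope calculation. All the numbers here (the coal figures, the benchmark score) are the hypotheticals from the comment, not real data:

```python
# Back-of-the-envelope "score per unit of energy" comparison,
# using the hypothetical numbers from the comment above.

def score_per_ton(benchmark_score, tons_of_coal):
    """Benchmark points delivered per ton of coal burned."""
    return benchmark_score / tons_of_coal

baseline = score_per_ton(100.0, 5)                    # the 5-ton option
bigger = score_per_ton(100.0 * 1.000000000001, 30)    # 0.0000000001% "better"

print(baseline)  # 20.0
print(bigger)    # ~3.33 -- six times less efficient despite the higher raw score
```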

I think the fast inference options have historically been only marginally more expensive than their slow cousins. There's a whole body of research about optimal efficiency, speed, and intelligence Pareto curves. If you can deliver even an outdated, low-intelligence model at high efficiency, everyone will be interested. If you can deliver a model very fast, everyone will be interested. (If you can deliver a very smart model, everyone is obviously the most interested, but that's the free space.)

But to be clear, 1000 tokens/second is WAY better. Anthropic's Haiku serves at ~50 tokens per second.
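To put those two throughput figures in wall-clock terms, here's a trivial sketch; the 50 and 1000 tokens/second numbers come from the comment, and the 2,000-token response length is an assumption for illustration:

```python
# Wall-clock time to stream a response at the two serving speeds
# mentioned above. Response length is a hypothetical.

response_tokens = 2000

for label, tok_per_sec in [("~50 tok/s (Haiku-class)", 50),
                           ("1000 tok/s (fast inference)", 1000)]:
    seconds = response_tokens / tok_per_sec
    print(f"{label}: {seconds:.0f} s")
# ~50 tok/s (Haiku-class): 40 s
# 1000 tok/s (fast inference): 2 s
```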


The best thing about manufacturing in China is that they will make exactly what you specify. The worst thing about manufacturing in China is that they will make exactly what you specify.

No, they will often cut corners when they can get away with it. Companies like Apple just don't let them get away with it.

No, but it rarely shits on the carpet.

> No, but it rarely shits on the carpet.

What do you mean "rarely"? It still happens sometimes?


Roulette is a game of chance.

The table had a rough life before it found its forever home. Sometimes it gets scared for seemingly no reason.

Well, that means the AI is garbage. They'll eventually train it to answer this specific question, and then it will perform worse in some other aspect. Wash, rinse, repeat, and eventually they'll claim the new frontier model is the best yet on carwash tests.

> They'll eventually train it to answer this specific question, and then it will perform worse in some other aspect.

Not necessarily. Simply asking models to "check your assumptions" -- note, without specifying what assumptions! -- overcomes a lot of these gotcha questions. The reason it's not in their system prompts by default is I think just a cost optimization: https://news.ycombinator.com/item?id=47040530


Crazy how five years ago this level of AI would be seen as scifi, and now there are people out there who think it's trash because we can trick it if we ask questions in weird ways.

I think the level of AI we have is amazing.

> there are people out there who think it's trash because we can trick it if we ask questions in weird ways.

Some of this sentiment comes from wanting AI to be predictable, and for me, stumbling into questions that the current models interpret oddly is not uncommon. There are a bunch of rules of thumb that can help when you run into cases like this, but no guarantee that they will work, or that the problem will stay solved after a model update, or across models.


There are a lot of rules of thumb you can follow to avoid getting bitten by a rattlesnake, but the easiest way is to just not pick up random snakes. I don't know where I'm going with this, but I am going for a walk.

When did Microsoft release that chatbot that went full Nazi in a couple of hours?

2016, for those keeping score

An issue with the chat format is that all these models seem bad at recognizing when they have extraneous information from the user that can be ignored, or insufficient information from the user to answer the question fully.

This issue is compounded by the lack of probabilities in the answers, despite the machines ultimately being probabilistic.

Notice a human in a real conversation will politely ignore extra info (the distance to car wash) or ask clarifying questions (where is the car?).

Even non-STEM people answer using probabilistic terms casually (almost certainly / most likely / probably / possibly / unlikely).

I suspect some of this is to minimize token usage in the fixed-price monthly chat plans, because back-and-forth would cost more tokens... but maybe I'm too cynical.
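The models really do compute probabilities internally (a softmax over logits for each next token), so the gap described above is in the presentation, not the math. A minimal sketch of mapping a probability onto the casual hedging vocabulary mentioned earlier; the logits and thresholds here are hypothetical:

```python
import math

def softmax(logits):
    """Convert raw model logits into a probability distribution."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def hedge(p):
    """Map a probability onto the casual terms people actually use."""
    if p > 0.95:
        return "almost certainly"
    if p > 0.75:
        return "most likely"
    if p > 0.50:
        return "probably"
    if p > 0.25:
        return "possibly"
    return "unlikely"

# Hypothetical logits for candidate answers "yes" / "no" / "maybe".
probs = softmax([2.0, 0.1, -1.0])
print(hedge(probs[0]))  # "most likely" -- the top answer, hedged
```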


The systems recognized the pattern that it looks like a generic article on the internet asking whether someone should walk or drive and answered it exactly as expected based on their training data. None of this should be surprising.

We are the ones fooling ourselves into believing there's more intelligence in these systems than they really have. At the end of the day, it's just an impressive parlor trick.


In that sense, the Google AI summary search results are a better UX for this type of experience.

The better UX is that the Google AI search summary is easy to ignore.

TFA explains that's not actually the case; it's not just an assertion, there are actual studies backing it up.

Also... they don't make money by promoting things that are good ideas that make sense. That's why every lucky billionaire tech bro that gets into VC ultimately invests in smart toilets. Ultimately, they just keep putting money into each slot machine they can find until one of them pays out a jackpot. Eventually one of them will make up for all the other losses.
