Hacker News | qoez's comments

I think they just did that because of the energy around open-source models. Their heart probably wasn't in it, and the number of people fine-tuning at those prices was probably too low to keep putting attention there.

There is none. It's just a way for coders to feel, or be able to say, they "work with AI" imo. Same with writing light wrappers to do agents stuff. The real AI work is the actual math and ML on internet-scale data, but only four big companies do that, and this is the closest regular coders can get.

Could be a psyop by Anthropic to make people waste Claude tokens and rack up a massive bill.

Tokens generated is a nice metric that they really care about.

This is classic sama policy. In your words, act with grace and counter to what observers expect of you. But in actions, and behind the scenes, take every step to undermine the competition.

It is most definitely against the rules of the sites, and illegal, especially on Kalshi.

Never believe anything sam says

Never believe anything <insert tech ceo here> says

Sam in particular here. This guy will say whatever for status and "power".

Oh look! Altman just made a deal with the DoW

Chatgpt, now with 10% more war crimes

Lmao, indeed, not even 24 hours later

> tonight, we reached an agreement with the Department of War to deploy our models in their classified network.


Makes me think that mass analysis of archive.org websites (on a much larger scale than 2,000 sites) for color distribution from screenshots, or other stuff like this, is a cool project ripe for the picking.
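The color-distribution part of that idea is simple to sketch. Below is a minimal, stdlib-only sketch of the aggregation step, assuming the screenshots have already been decoded into (r, g, b) pixel tuples (actual PNG decoding would need something like Pillow; the function names and the 4-levels-per-channel bucketing are my own choices, not from any existing project):

```python
from collections import Counter

def quantize(rgb, levels=4):
    """Bucket each channel into `levels` bins, e.g. 4 -> 64 coarse colors."""
    step = 256 // levels
    return tuple(min(c // step, levels - 1) for c in rgb)

def color_distribution(pixels, levels=4):
    """Return {coarse_color: fraction_of_pixels} for an iterable of (r, g, b)."""
    counts = Counter(quantize(p, levels) for p in pixels)
    total = sum(counts.values())
    return {color: n / total for color, n in counts.items()}

# Toy "screenshot": 75% white pixels, 25% pure blue.
pixels = [(255, 255, 255)] * 3 + [(0, 0, 255)]
dist = color_distribution(pixels)
# White quantizes to (3, 3, 3), blue to (0, 0, 3).
```

Quantizing before counting keeps the per-site histograms small enough to compare across thousands of archived snapshots.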

> It took them two months to develop a chip for Llama 3.1 8B. In the AI world, where one week is a year, that's super slow. But in the world of custom chips, this is supposed to be insanely fast.

Llama 3.1 is like 2 years old at this point. Taking two months to convert a model that only updates every 2 years is very fast.


2 months of design work is fast, but how much time do fabrication, packaging, and testing add? And that just gets you chips; whatever products incorporate them also need to be built and tested.

It only looks that way because Llama failed. Good models like Qwen are shipping every 6 months.

OpenRouter is highly subsidized. This might be cheaper in the long run once these companies shift to taking profits.

But why not cross that bridge then? By that time you might have much more optimized local infrastructure. Although I do see that someone suffering through the local slowness now is what drives the development of these local options.

I'm predicting a wave of articles in a few months about why clawd is over and was overhyped all along, and that the position of not having delved into it in the first place will have been the superior use of your limited time alive.

do you remember “moltbook”?

Is it gone?

I can remember at least since the 90s people were saying "Soon I won't even have to work anymore!"

Of course, if the proponents are right, this approach may extend to skipping coding :-)

you're right, i should draft one now

Use a clawd; it'll have a GitHub repo and a Show HN to go with it in minutes. It's what the cool kids are doing anyhow.

What a new and interesting viewpoint, which has the ability to change as the evidence does!

Openclaw the actual tool will be gone in 6 months, but the idea will continue to be iterated on. It does make a lot of sense to remotely control an AI assistant that is connected to your calendar, contacts, email, whatever.

Having said that, this thing is on the hype train, and its usefulness will eventually be placed in the "nice tool once configured" camp.


What surprises me is that this obvious inefficiency isn't competed out of the market. I.e., this is clearly a suboptimal use of time, and yet lots of companies do it and don't get competed out by the ones that don't.


I think the issue is everyone's stuck in the same boat: the alternative to using AI and spending time reviewing is writing it yourself, which takes even longer. So even if it's not a net win, it's still better than nothing. Plus, a lot of companies aren't actually measuring the review overhead properly; they see "AI wrote 500 lines in 2 minutes" and call it a productivity win without tracking the 3 hours spent debugging it later. The inefficiency doesn't get competed out because everyone has the same constraints and most aren't measuring it honestly.


Short term gets faster, more competitive results than long term.

