It could also lead to a massive crash of capitalism and reevaluation of how our society functions.
It could lead to significant progress in every single research area.
I'm at least very impressed by the number of open models, and the claim that the gap between public and private models is diverging massively doesn't hold up. Public is probably about a year behind.
I read one of his posts last week and didn't like it that much. I read this one despite that, because it's quite high on HN for whatever reason.
I don't think everything is lies, and I don't like how he treats an LLM as just some bullshit machine.
It's also way too early to even understand where this is going. We as humans have never had this much compute, or used it in this particular way. It could literally be the road to a utopia or a dystopia. But it's crazy to experience it.
His article series feels so negative and dismissive that I'm not taking anything from it.
There is so much more research, money, and compute behind AI right now that something relevantly better or new comes out every week or two: 2D and 3D models, new LLM versions, smaller LLMs, faster inference (Nvidia's Nemotron). We don't know how this will continue.
And the weird thing is that he clearly knows plenty about LLMs, yet it still feels so negative and dismissive; it's hard to put my finger on it.
I wouldn’t necessarily read a lengthy blog post either just because some friend recommended it to me, and conversely I wouldn’t expect a friend to necessarily read it if I was recommending it without being prompted for recommendations. There needs to be some additional incentive and/or interest.
Also, I’m reading this comment thread instead of TFA because I didn’t find the previous part I read that great. And I’m not an AI proponent, more of an AI skeptic.
I didn't provide much context, but: 1) I've had deep conversations with these friends for years based on long articles or videos, and 2) I recommend maybe one or two long-form items per year, and they used to always review them without asking "TL;DR?"
So my main concern here is that my experience may be a microcosm of the shallowing of discussions correlated with some people's increased use of AI. That worries me.
It's more of a meta point to me. I get that this series isn't landing for some people, yourself included, but the meta-observation is that, given something of roughly equal substance as before, these friends' motivation for long-form content and discussion seems to have atrophied, perhaps largely due to the addition of AI summaries to their lives.
Of course, correlation isn't causation. Maybe they both just got older and lazier, but given their reliance on AI summaries in other recent debates, I'm worried.
LLM when it came out, was perfect as an interface between a system and a normal human.
So many people call customer support for issues they could, in theory, fix themselves. If an LLM system can understand me well enough, it's an okay interface.
In the worst case you have to escalate anyway. My mum actually told me that she talked to some AI.
And yes, normal systems aren't correct often enough either. With AI/LLMs, software will get cheaper, which should increase quality overall.
I don't think AI/LLMs will change anything in this case.
Relevant change will happen because humans can be replaced by AI/LLMs. A few years back it wasn't even imaginable what a good AI system would look like. Translators lost their jobs, entry-level artists lost their jobs. Small contracts for basic things are gone. The restaurant poster no one cares about? AI. The website translation for some small business? No one cares.
Do you want to add any argument so we can discuss this?
I mean, didn't you write with ChatGPT and weren't you surprised how well it responds?
I'm shocked by how well I can talk to an AI through an app like Gemini or ChatGPT. A few years ago I couldn't have imagined building such a generic system with such a high quality of understanding.
I was playing around with Dragon NaturallySpeaking and similar dictation tools 10 years ago, and it was horrible. And that software is expensive.
If you look at how normal people use a computer, they are slow just because they don't understand basic drag and drop. Or they're unable to write a quick Java or PHP script to convert or clean up some data. I would just write a PHP script that reads a CSV file and converts things around, and be faster than everyone around me.
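The kind of throwaway conversion script I mean can be sketched in Python (the column names, delimiter, and EU-style price format here are made-up examples):

```python
import csv
import io

# Made-up input: "last,first" names plus EU-formatted prices, semicolon-delimited.
raw = "name;price\nDoe,John;1.234,50\nRoe,Jane;99,90\n"

reader = csv.DictReader(io.StringIO(raw), delimiter=";")
rows = []
for row in reader:
    last, first = row["name"].split(",")
    # "1.234,50" -> drop the thousands dot, turn the decimal comma into a dot.
    price = float(row["price"].replace(".", "").replace(",", "."))
    rows.append({"name": f"{first.strip()} {last.strip()}", "price": price})

print(rows)  # cleaned records, ready to write back out with csv.DictWriter
```

Nothing an LLM can't also do, but for bulk data a ten-line deterministic script like this is still the faster option.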
Tool calling is bonkers.
And I tried to break GPT-3: I could literally write an English sentence and just drop in German words, and it was already that good.
It's often enough bad at doing exactly what I want, but the quality jump over everything we had before is massive. Massive.
Not the OP, but you wrote “LLM when it came out, was perfect as an interface between a system and a normal human”. That’s a specific and very encompassing claim. I can only think of very simplistic systems (like a microwave oven maybe) where a current LLM could function perfectly as the sole command interface, much less when LLMs first became available. For systems of any significant complexity, it tends to turn into an exercise in frustration and failure modes when the LLM is your only interface (and frequently even when it isn’t).
An LLM can enhance the interface of a system and can be really useful in that despite its imperfections. But that’s a very different claim.
It was a significant jump from whatever we had before to a quality unseen before.
As I mentioned, I threw English and German at it.
How many people can change the time on their microwave?
How many people can ask an LLM through voice or text to change the time of the microwave?
An LLM is an interface to a service if you add an MCP server. Now I can ask Jira things like "hey, what's my current task? And what do I need to do?"
It's also an interface to documentation. I asked it to help me build a Hugo-templating-based website, because just reading the Hugo docs did not help me as much as the LLM did (and that was two years ago).
In the best case, as long as an LLM is not AGI or ASI, we have good tools with validation behind the LLM before the LLM becomes the system itself.
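The tool-calling loop behind an MCP setup can be sketched without any real SDK: the model emits a tool name plus JSON arguments, and a dispatcher validates and runs it. Everything here (the tool name, the Jira data) is invented for illustration:

```python
import json

# Made-up stand-in for a Jira-backed MCP tool.
def current_task(user):
    return {"key": "PROJ-42", "summary": "Fix login redirect"}

TOOLS = {"current_task": current_task}

# The model's side of the exchange: a structured tool call, not free text.
model_output = '{"tool": "current_task", "arguments": {"user": "me"}}'

call = json.loads(model_output)
if call["tool"] not in TOOLS:  # validate before executing anything
    raise ValueError(f"unknown tool: {call['tool']}")
result = TOOLS[call["tool"]](**call["arguments"])
print(result["key"], "-", result["summary"])
```

The "good tools with validation behind the LLM" point is exactly the dispatcher check: the model only ever selects from a fixed, validated tool list, it never becomes the system itself.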
> An LLM is an interface to a service if you add an MCP server. Now I can ask Jira things like "hey, what's my current task? And what do I need to do?"
What about configuring your Jira views and then bookmarking the resulting URLs with nice names like "Jira: Tasks in Progress" or "Jira: Important Tickets"? That would be way faster than any LLM prompting.
> It's also an interface to documentation. I asked it to help me build a Hugo-templating-based website, because just reading the Hugo docs did not help me as much as the LLM did (and that was two years ago).
Those kinds of claims would be more convincing if the person had written down their goals before the activity and then scored the end result against those goals. A lot of the time there's post-rationalization (like "I spent time on it, so the result must be good"), especially from non-experts.
My Hugo example is real. I've been a software engineer for 15 years and have used other templating engines, but I struggled with the Hugo docs for setting up the initial templating structure.
Nonetheless, I also keep seeing the "with continued progress, this will become extremely good fast" attitude, and my estimate is 5-15 years for significant progress with meaningful impact.
You're on a forum with a disproportionate number of people who are trying to profit from AI and have an interest in promoting that it's a worthwhile time and resource investment. It is a different universe than other places outside this bubble.
As mentioned in my other comment, I just spend too much time on HN, so that's why it's a new account.
I do not profit from AI, but I think the cat is out of the bag. We have companies like Google with so much money that AI R&D is just something they can afford.
Then we have other companies like Microsoft, who have to do AI because Google is doing it.
And then we have whole countries fighting the AI race: the USA vs. China (and in theory the EU, but Mistral is not making waves, eh?).
So for now, the progress is staggeringly fast, and I believe that whatever criticism people have, you need to spend real time following and keeping up with AI to take the right action in time: decisions about long-term investment, using AI tools properly instead of getting fired, or even founding your own small company to fill a niche.
From a pure nerd POV: it's crazy! Seriously, I can generate images and videos, I can talk to a computer, I can generate songs, and... I mean, I wish I had been alive when Linus asked on a mailing list whether people were interested in Linux, but this is something I am alive for.
And it solves plenty of problems for me that I never had good solutions for, especially parsing random texts into semantic JSON with high quality.
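Parsing free text into semantic JSON usually means handing the model a target schema and then strictly validating whatever comes back. A minimal sketch, with the LLM call stubbed out and the schema invented for the example:

```python
import json

# Invented target schema for the example.
REQUIRED = {"name": str, "date": str, "amount": float}

def llm_extract(text):
    # Stand-in for a real LLM call asked to return only JSON matching REQUIRED.
    return '{"name": "ACME Corp", "date": "2024-03-01", "amount": 99.5}'

def parse_semantic_json(text):
    data = json.loads(llm_extract(text))
    # Treat model output as untrusted until every field matches the schema.
    for field, typ in REQUIRED.items():
        if not isinstance(data.get(field), typ):
            raise ValueError(f"bad or missing field: {field}")
    return data

record = parse_semantic_json("Invoice from ACME Corp, March 1st 2024, total 99.50")
print(record)
```

The validation step is what makes this usable: the model does the fuzzy reading, deterministic code decides whether the result is accepted.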
I added talking points, like the one where I state that people call support just to fix issues they could fix themselves.
My point about my mum was meant to imply that it was successful, but at least you're pointing something out and now we can talk about it: my mum talked to an AI and it helped her.
It's still true and shows one of many issues with Bitcoin.
According to Bitcoin cryptobros, you need a certain number of independent miners for the 'quality' of Bitcoin. A miner that is a state can operate at a loss far longer, if not indefinitely, than the decentralized normal people can (who don't really exist anyway).
It also creates a lot of pressure on miners: if you don't run your GPUs, you're also at a loss. That can break mining for everyone if too many go offline in parallel and then come back online because the difficulty dropped too much.
And if it becomes too volatile, no one wants to risk it anymore.
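For context, the feedback loop described here is Bitcoin's difficulty retarget: every 2016 blocks the network compares how long those blocks actually took against the two-week target and rescales difficulty, clamped to a factor of four in either direction. Roughly:

```python
EXPECTED_SECONDS = 2016 * 600  # 2016 blocks at one block per ~10 minutes

def retarget(difficulty, actual_seconds):
    # Blocks came too slowly (hashrate left) -> difficulty drops; too fast -> it rises.
    # Bitcoin clamps the adjustment to 4x either way per retarget period.
    ratio = max(0.25, min(4.0, EXPECTED_SECONDS / actual_seconds))
    return difficulty * ratio

# Half the hashrate goes offline -> blocks take twice as long -> difficulty halves.
print(retarget(100.0, 2 * EXPECTED_SECONDS))  # -> 50.0
```

The oscillation worry above is this loop overshooting: miners leave, blocks slow down for up to two weeks, difficulty drops, which pulls miners back in, and so on.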
Bitcoin hasn't been viably mineable on GPUs for over ten years. It requires specialized hardware.
As such, mining is typically restricted to those with massive capital investment in single-purpose hardware, so you really won't see random offloading and onloading of that capacity. As long as it's marginally profitable (with the capital investment being a sunk cost, that's the price where revenue exceeds ongoing costs), those miners will keep their machines running.
The original idea was for every single person out there to mine bitcoins on their own computers. Bitcoin screwed that up by allowing big corporations to push out the smaller players. Their big purpose built hardware increased mining difficulty to the point mere mortals need not even apply. Mining on GPUs? Nope, you need purpose built ASICs for this.
Monero is the only cryptocurrency today that's at least trying to implement the original "one CPU, one vote" vision but nobody really cares about it since number doesn't go up.
Man, come on. Not only is this the wrong analysis, because Tesla effectively has one single model, they literally built 50k cars that are not sold.
50,000.
My guess: they made them to push the SpaceX IPO. The same reason he did that weird keynote last week with the mega-fantastic Dyson-sphere-and-whatnot vision.
And I honestly find it very weird that it still even sells. Whenever you see a Tesla, it's always the same car.
The industry has a lot more money and easier use cases.
A robot like Optimus will not be a household robot for years to come. Why? If it falls, it will crash into some kind of glass, from doors to windows. If it falls, it might crush a human or animal underneath it. It might trip on a toy or on stairs and crash into a wall.
I would love to have a robot, but 50k? Who buys something for 50k? A normal person has to save up for a car, and they actually need the car; for a household robot you need a lot of income to justify 50k. You will buy a car, a flat, a kitchen, etc. before you buy a 50k robot.
10k is perhaps more realistic, but then it has to be good. If you live alone, I don't think you'll see normal housework as such a burden that you'll buy a robot for a small flat.
For families, the robot has to be very good and really safe.
If you have a partner who isn't working, you might not be able to afford a robot, and that person has the time to do all of that anyway.
I can imagine a robot for elderly people, and some remote service using these robots to do things for them, but 50k is costly.
I'm not bullish on household robots for the next 10 years at all. And then you have another problem: if they become really good in an industrial setting, guess who will lose their jobs? Yeah, exactly the people who should be able to buy these.
>A robot like Optimus will not be a household robot for years to come. Why? If it falls, it will crash into some kind of glass, from doors to windows. If it falls, it might crush a human or animal underneath it. It might trip on a toy or on stairs and crash into a wall.
Strong disagree here. We have plenty of machinery that could be very dangerous if it fails, but they just slap a disclaimer on it and that's usually enough. The same is going to be done here.
We literally pipe flammable gases into kitchens and then burn them there. These fail all the time: clogged heaters kill people through carbon monoxide poisoning, and electric coil heaters burn down apartments. Pressure cookers are basically controlled bombs. People keep hundred-pound pitbulls in their houses, and even though we trust them, they're technically inherently unpredictable. There are a lot of dangerous things in our homes that we accept and build guardrails around; a robot doesn't seem that much crazier.
The stove is in one place, it produces heat, and we train everyone that it's dangerous, including our kids (firefighters showed me in first grade what an oil fire looks like).
We add a stinky smell to gas.
Pressure cookers are only used by one or two people in the kitchen, and not that regularly. But a friend of mine actually did burn himself on the steam when he opened one up.
The pitbull thing is dangerous. But pitbulls don't run through windows, and if they fall down the stairs, they won't kill someone underneath them.
Look, we'll see how long it takes, but I don't think we'll see household robots soon. I'm confident we'll see them in industry en masse before we see them in real households in relevant numbers.
There are plenty of positions where normal humans do only a handful of tasks.
Check out YouTube videos of Chinese factories building rice cookers and the like. They have something like 10-50 stations where one person does only 1-5 things: putting tape on, screwing something together, etc.
I can see that as the last niche where the really big, specialized, purpose-built robots are just not economical.
But why do you think it would make things easier? Power steering doesn't mean there are sensors built in, or more precise ones.
Your Reddit comment references autonomous vehicles, and in that case it shouldn't matter whether the car also moves a steering wheel no one is using while it moves the wheels, which are a lot heavier.
And in the case of your nanny example, the main argument of the paper you referenced is issues with the driver's hands. In that case it could make things better for the driver, I might agree, but I would then question how the driver reacts if the steering wheel is suddenly or temporarily not aligned with the wheels. I might also argue that in a case where my thumb gets in the way, it's probably an emergency and I wouldn't worry about it then.
Planes are probably the most controlled machines we have. Everything gets checked twice or more, everything gets tracked, and there is a clear requirement to do it like this because, as you said, it's not possible for humans to control a fighter jet or a big plane otherwise.
Cars are none of that, and we have billions of them on the streets.
Cars also became a lot more expensive due to their complexity, which definitely creates problems for a lot of people who can't afford all of that. I'm really torn on this: I think it's very good that my side mirror shows me whether there's a car next to me, but in our capitalist economy we're excluding a lot of people from affordable cars. Drive-by-wire needs to be cheaper and easier to fix and repair.
Btw, Waymos are slowly learning to drive on highways, so I might agree that they drive more safely than humans in certain controlled environments. Certainly not in every environment.
But that is the "tradeoff" people are going for. What irritates me about Waymo is that it isn't really cheaper than taxis or Uber. If we want people to become more mobile ... Waymo does not appear to be the answer.
And that was always the trade that was proposed: sure, Waymos (and Uber) will displace a LOT of taxi jobs, but they'll be way cheaper than taxis. Well ... they're not. And at that point, from an economic perspective, this is just taking things away for not much in return.
Once again people get a lot of possible choices, and once again they choose the more expensive one, putting more people out of business, out of a job, and, as you say, out of society. Now they're saying "yeah, but this is good for autistic people and women, who can now travel by taxi without ever seeing anyone". How, exactly, does anyone think that's a good thing for society? Seriously?
Plus I'm a bit of the opinion, if Waymo is already breaking their own proposed social contract now ... imagine what they'll do in 10 years.