RivieraKid's comments | Hacker News

I'm surprised to see a huge disconnect between how I perceive things and the vast majority of comments here.

AI is obviously not good enough to replace programmers today. But I'm worried that it will get much better at real-world programming tasks within a few years, or even months. If you follow AI closely, how can you be dismissive of this threat? OpenAI will probably release a reasoning-based software engineering agent this year.

We have a system that performs at a level similar to top humans at competitive programming. This wasn't true 1 year ago. Who knows what will happen in 1 year.


Nobody can tell you whether progress will continue at current, faster, or slower rates - humans have a pretty terrible track record of extrapolating current events into the future. It's like how movies in the '80s made predictions about where we'd be in 30 years' time. Back to the Future promised me hoverboards in 2015 - I'm still waiting!

Compute power increases and algorithmic efficiency improvements have been rapid and regular. I'm not sure why you thought that Back to the Future was a documentary film.

Unless you have a crystal ball, there is nothing that can give you certainty that it will continue at the same or a better rate. I'm not sure why you took the second half of the comment more seriously than the first.

Nobody has certainty about the future. We can only look at what seems most likely given the data.

When I see stuff like https://news.ycombinator.com/item?id=42994610 (continued in https://news.ycombinator.com/item?id=42996895), I think the field still has fundamental hurdles to overcome.

Why do you think this is a fundamental hurdle, rather than just one more problem that can be solved? I don't have strong evidence either way, but I've seen a lot of 'fundamental, insurmountable problems' fall by the wayside over the past few years. So I'm not sure we can be that confident that a problem like this, for which we have very good classic algorithms, is a fundamental issue.

This kind of error doesn't really matter in programming where the output can be verified with a feedback loop.

This is not about the numerical result, but about the way it reasons. Testing is a sanity check, not a substitute for reasoning about program correctness.

It's the opposite. I don't think it'll replace programmers legitimately within a decade. I DO think that companies will try it a lot in the coming months and years anyway, and that programmers will be the only ones suffering the consequences of such actions.

People somehow have expectations that are both too high and too low at the same time. They expect (demand) that current language models completely replace a human engineer in any field without making mistakes (this is obviously way too optimistic), while at the same time they ignore how rapid the progress has been and how much these models can now do that seemed impossible just 2 years ago, delivering huge value when used well, and they assume no further progress (this seems too pessimistic, even if progress is not guaranteed to continue at the same rate).

ChatGPT 4 was released 2 years ago. Personally I don't think things have moved on significantly since then.

Really now. I think that deserves a bit more explanation, given that the cost per token has dropped by several orders of magnitude, we have seen large changes on all benchmarks (including entirely new capabilities), multimodality has been a fact since 4o, and test-time compute with reasoning models has been making big strides since o1... It seems on the surface a lot is happening. In fact, I wanted to share one of the benchmark overviews, but none include ChatGPT 4 anymore since it is totally not competitive anymore.

Benchmarks are meaningless in and of themselves; they are supposed to be a proxy for usefulness. I have used Sonnet 3.5, ChatGPT-3, ChatGPT-3.5, ChatGPT-4, ChatGPT-4o, o1, o3-mini, o3-mini-high nearly daily for software development. I am not saying AI isn't cool or useful, but I am experiencing diminishing returns in model quality (I do appreciate the cost reductions). The sorts of things I can have AI do really haven't changed that much since I got access to my first model. The delta between having no LLM and an LLM feels at least an order of magnitude bigger than the delta between the first LLM and now.

it's bigger, shinier, faster, but still doesn't fly

Exactly. I have been waiting for gpt5 to see the delta, but after gpt4 things seemed to have stalled.

This seems like a bizarre claim on the surface, see also my other message above.

https://epoch.ai/data/ai-benchmarking-dashboard


Depends on what you work on in the software field. Many of these LLMs have pretty small context windows. In the real world, when my company wants to develop a new feature or change the business logic, that is a cross-cutting change (many repos/services). I work at a large org, for background. No LLM will be automating this for a long time to come. Especially if you're in a specific domain that is niche.

If your project is very small, and it’s possible to feed your entire code base into an LLM in the near future, then you’re in trouble.

Also, the problem is that the LLM output is only as good as the prompt. 99% of the time the LLM won't be thinking of how to make your API change backwards compatible for existing clients, how to help you do a zero-downtime migration, how to follow security best practices, or how to handle a high volume of API traffic. Etc.

Not to mention, what the product team _thinks_ they want (business logic) is usually not what they really want. Happens ALL THE TIME friend. :) It’s like the offshoring challenge all over again. Communication with humans is hard. Communication with an LLM is even harder. Writing the code is the easiest part of my job!

I think some software development jobs will definitely be at risk in the next 10-15 years. Thinking this will happen in 1 year's time is myopic in my opinion.


> If you follow AI closely, how can you be dismissive of this threat?

Just use a state of the art LLM to write actual code. Not just a PoC or an MVP, actual production ready code on an actual code base.

It’s nowhere close to being useful, let alone replacing developers. I agree with another comment that LLMs don’t cut it, another breakthrough is necessary.


https://tinyurl.com/mrymfwwp

We will see, maybe models do get good enough but I think we are underestimating these last few percent of improvement.


It's a bit paradoxical. A smart enough AI, and there is no point in worrying, because almost everyone will be out of a job.

The problem case is the somewhat odd scenario where there is an AI that's excellent at software dev, but not most other work, and we all have to go off and learn some other trade.


No.

> Instead of blaming all your problems on an external entity like Google

But it's not his fault. I can easily empathize with how he's feeling.

Power bank probably doesn't work well because the update limits charging speed.

What if he doesn't have money for buying a new phone? Also, transferring files via cloud storage can be a non-trivial process, especially for a non-technical person.


This person is on HackerNews, implying at least enough technical expertise to use a website. Google Drive is a website. Other people have offered this person alternative devices free of charge. Yet they are so fixated on a specific outcome (Google just fixing this for them) that they can't see another solution when it's offered up on a plate. They are also neglecting to think beyond the immediate problem: say they get the firmware rolled back somehow - what happens when the device stops charging/is stolen/is dropped and all their files are still on it? Have you never known anyone like this before? Someone who needs all their problems solved for them because they refuse to take responsibility for the situations they have created for themselves through action or inaction?


Charging about as much as their competitors is probably the profit-maximizing strategy.


It's almost certainly not true.


When was the last time this happened to you?


Last Thursday. I called support after 10 minutes to cancel the ride but the car started moving while I was on the call.

That particular one was at a green traffic light with a "do not queue over intersection" area specifically designed to allow cars in my lane to turn onto the main road. The Waymo couldn't see that there was a gap in the lane it wanted to turn into, and was too conservative about queuing over the intersection at an angle when the light was green, like any human would have done.


They're already covering 0.25% of the total US paved road network, per my napkin calculation.


They can handle almost all weather except snow. Freeways are coming soon. From what I heard, San Francisco is way above average in terms of driving difficulty.


The economic theory answer is that people simply switch to jobs that are not yet replaceable by AI. Doctors, nurses, electricians, construction workers, police officers, etc. People in aggregate will produce more, consume more and work less.


> Doctors

Capped per year by law.

> Trades people

They only work when there is something to do. If you don't have sufficient demand for builders, electricians, plumbers, etc., no one can afford to become one. Never mind the fact that not everyone should be any of those things. Economics fails when the loop fails to close.


> Doctors

Many replaceable

> Police officers

Many replaceable (desk officers)


It sucks that I would love to be excited about this... but I mostly feel anxiety and sadness.


Same, it's sad, but I honestly hoped they would never achieve these results, and that it would turn out to be impossible or to require an insurmountable amount of resources. But here we are, on the verge of making most humans useless when it comes to productivity.

While there are those that are excited, the world is not prepared for the level of distress this could put on the average person without critical changes at a monumental level.


If you don't feel like the world needed grand scale changes at a societal level with all the global problems we're unable to solve, you haven't been paying attention. Income inequality, corporate greed, political apathy, global warming.


AI will fix none of that


And you think the bullshit generators backed by the largest corporate entities in human history, which are, as we speak, causing all the issues you mention, are somehow gonna solve any of this?


If you still think this technology is a "bullshit generator," then it's safe to say you're also wrong about a great many other things in life.

That would bug me, if I were you.


They’re not wrong though. The frequency with which these things still just make shit up is astonishingly bad. Very dismissive of a legitimate criticism.


It's getting better, faster than you and I and the GP are. What else matters?

You can't bullshit your way through this particular benchmark. Try it.

And yes, they're wrong. The latest/greatest models "make shit up" perhaps 5-10% as frequently as we were seeing just a couple of years ago. Only someone who has deliberately decided to stop paying attention could possibly argue otherwise.


And yet I still can't trust Claude or o1 not to consistently get the simplest of things wrong, such as test cases (not even full-on test suites, just the test cases). No amount of handholding from me, or prompting, or feeding it examples, etc. helps in the slightest; it is just consistently wrong for anything but the simplest possible examples, which takes more effort to manually verify than if I had just written it myself. I'm not even using an obscure stack or language, but especially with things that aren't Python or JS it shits the bed even worse.

I have noticed it's great in the hands of marketers and scammers, however. Real good at those "jobs", so I see why the cryptobros have now moved on to hailing LLMs as the next coming of Jesus.


I still find that 'trusting' the models is a waste of time, we agree there. But I haven't had that much more luck with blindly telling a low-level programmer to go write something. The process of creating something new was, and still is, an interactive endeavor.

I do find, however, that the newer the model the fewer elementary mistakes it makes, and the better it is at figuring out what I really want. The process of getting the right answer or the working function continues to become less frustrating over time, although not always monotonically so.

o1-pro is expensive and slow, for instance, but its performance on tasks that require step-by-step reasoning is just astonishing. As long as things keep moving in that direction I'm not going to complain (much).


Well said! There's no way big tech and institutional investors are pouring billions of dollars into AI because of corporate greed. It's definitely so that they can redistribute wealth equally once AGI is achieved.

/s


Anxiety and sadness are actually mild emotional responses to the dissolution of human culture. Nick Land in 1992:

"It is ceasing to be a matter of how we think about technics, if only because technics is increasingly thinking about itself. It might still be a few decades before artificial intelligences surpass the horizon of biological ones, but it is utterly superstitious to imagine that the human dominion of terrestrial culture is still marked out in centuries, let alone in some metaphysical perpetuity. The high road to thinking no longer passes through a deepening of human cognition, but rather through a becoming inhuman of cognition, a migration of cognition out into the emerging planetary technosentience reservoir, into 'dehumanized landscapes ... emptied spaces' where human culture will be dissolved. Just as the capitalist urbanization of labour abstracted it in a parallel escalation with technical machines, so will intelligence be transplanted into the purring data zones of new software worlds in order to be abstracted from an increasingly obsolescent anthropoid particularity, and thus to venture beyond modernity. Human brains are to thinking what mediaeval villages were to engineering: antechambers to experimentation, cramped and parochial places to be.

[...]

Life is being phased-out into something new, and if we think this can be stopped we are even more stupid than we seem." [0]

Land is being ostracized for some of his provocations, but it seems pretty clear by now that we are in the Landian Accelerationism timeline. Engaging with his thought is crucial to understanding what is happening with AI, and what is still largely unseen, such as the autonomization of capital.

[0] https://retrochronic.com/#circuitries


It's obvious that there are lines of flight (to take a Deleuzian tack, a la Land) away from the current political-economic assemblage. For example, a strategic nuclear exchange starting tomorrow (which can always happen -- technical errors, a rogue submarine, etc.) would almost certainly set back technological development enough that we'd have no shot at AI for the next few decades. I don't know whether you agree with him, but I think that ignoring this possibility is quite unserious, especially given the likely destabilizing effects sub-AGI AI will have on international politics.


Humanity is about to enter an even steeper hockey stick growth curve. Progressing along the Kardashev scale feels all but inevitable. We will live to see Longevity Escape Velocity. I'm fucking pumped and feel thrilled and excited and proud of our species.

Sure, there will be growing pains, friction, etc. Who cares? There always is with world-changing tech. Always.


> Sure, there will be growing pains, friction, etc. Who cares?

That's right. "Who cares about the pain of others, and why should they even care?" are absolutely words to live by.


Yeah, with this mentality, we wouldn't have electricity today. You will never make transition to new technology painless, no matter what you do. (See: https://pessimistsarchive.org)

What you are likely doing, though, is making many more future humans pay a cost in suffering. Every day we delay longevity escape velocity is another 150k people dead.


There was a time when, in the name of progress, people were killed for whatever resources they possessed, others were enslaved, etc., and I was under the impression that the measure of our civilization is that we actually DID come to care, and just how much. It seems to me that you are very eager to put up altars of sacrifice without even considering that the problems you probably have in mind are perfectly solvable without them.


By far the greatest issue facing humanity today is wealth inequality.


Nah, it's death. People objectively are doing better than ever despite wealth inequality. By all metrics - poverty, quality of life, homelessness, wealth, purchasing power.

I'd rather just... not die. Not unless I want to. Same for my loved ones. That's far more important than "wealth inequality."


You don't mind living in a country with a population of billions [sic], piled on top of one another? You don't mind living in a country ruled by gerontocracy and probably autocracy, because that's what you'll eventually get without death to flush them out.

Senescence is an adaptation.


"You/your loved ones should die because Elon would die too" is a terrible argument. It's not great, but it's not worth dying over. New rich bad people would take his place anyways.

"You should die because cities will get crowded" is a less terrible argument but still a bad one. We have room for at least double our population on this planet, couples choosing longevity can be required to have <=1 children until there is room for more, we will eventually colonize other planets, etc.

All this is implying that consciousness will continue to take up a meaningful amount of physical space. Not dying in the long term implies gradual replacement and transfer to a virtual medium at some point.


> People objectively are doing better than ever despite wealth inequality. By all metrics - poverty, quality of life, homelessness, wealth, purchasing power.

If you take this as an axiom, it will always be true ;).


Longevity Escape Velocity? Even if you had orders of magnitude more people working on medical research, it's not a given that prolonging life indefinitely is even possible.


Of course it's a given. Unless you want to invoke supernatural causes, the human brain is a collection of cells with electro-chemical connections that, if fully reconstructed either physically or virtually, would necessarily represent the original person's brain. Therefore, with sufficient intelligence, it would be possible to engineer technology able to do that reconstruction without even having to go to the atomic level, which we also already have a near-full understanding of.


My job should be secure for a while, but why would an average person give a damn about humanity when they might lose their jobs and comfort levels? If I had kids, I would absolutely hate this uncertainty as well.

“Oh well, I guess I can’t give the opportunities to my kid that I wanted, but at least humanity is growing rapidly!”


> when they might lose their jobs and comfort levels?

Everyone has always worried about this for every major technology throughout history

IMO AGI will dramatically increase comfort levels and lower your chances of death, disease, etc.


Again, sure, but it doesn’t matter to an average person. That’s too much focus on the hypothetical future. People care about the current times. In the short term it will suck for a good chunk of people, and whether the sacrifice is worth it will depend on who you are.

People aren't really in uproar yet, because implementations haven't affected the job market of the masses. Afterwards? Time will tell.


Yes, people tend to focus on current times. It's an incredibly shortsighted mentality that selfishly puts oneself over tens of billions of future lives being improved. https://pessimistsarchive.org


Do you have any dependents, like parents or kids, by any chance? Imagine not being able to provide for them. Think how you'd feel in such circumstances.

Like in general I totally agree with you, but I also understand why a person would care about their loved ones and themselves first.


Yes, I have dependents, and them not dying is far more important to me than me being the one providing for them.


Eventually you draw the black ball, it is inevitable.


We almost wiped ourselves out in a nuclear war in the '70s. If that had happened, would it have been worth it? Probably not.

Beyond immediate increase in inequality, which I agree could be worth it in the long run if this was the only problem, we're playing a dangerous game.

The smartest and most capable species on the planet that dominates it for exactly this reason, is creating something even smarter and more capable than itself in the hope it'd help make its life easier.

Hmm.


https://www.transformernews.ai/p/richard-ngo-openai-resign-s...

>But while the “making AGI” part of the mission seems well on track, it feels like I (and others) have gradually realized how much harder it is to contribute in a robustly positive way to the “succeeding” part of the mission, especially when it comes to preventing existential risks to humanity.

Almost every single one of the people OpenAI had hired to work on AI safety has left the firm with similar messages. Perhaps you should at least consider the thinking of experts?

You and I will likely not live to see much of anything past AGI.


I would rather follow in the steps of Uncle Ted than let AI turn me into a homeless person. It's no consolation that my tent will have a nice view of a lunar colony.


longevity for the AIs


> Sure, there will be growing pains, friction, etc. Who cares?

The people experiencing the growing pains, friction, etc.


You sound like a rich person.


I have been diving deep into LLM coding over the last 3 years and regularly encountered that feeling along the way. I still at times have a "wtf" moment where I need to take a break. However, I have been able to quell most of my anxieties around my job / the software profession in general (I've been at this professionally for 25+ years, and software has been my dream job since I was 6).

For one, I found AI coding to work best in a small team, where there is an understanding of what to build and how to build it, usually in a close feedback loop with the designers / users. Throw the usual managerial corporate nonsense on top and it doesn't really matter if you can instacreate a piece of software, if nobody cares for that piece of software and it's just there to put a checkmark on the Q3 OKR reports.

Furthermore, there is a lot of software to be built out there, for people who can't afford it yet. A custom POS system for the local baker so that they don't have to interact with a computer. A game where squids eat algae for my nephews at Christmas. A custom photo layout software for my dad who despairs at InDesign. A plant watering system for my friend. A local government information website for older citizens. Not only can these be built at a fraction of the cost they were before, but they can be built in a manner where the people using the software are directly involved in creating it. Maybe they can get an 80% hacked-together version done if they are technically inclined. I can add the proper database backend and deployment infrastructure. Or I can sit with them and iterate on the app as we are talking. It is also almost free to create great documentation; in fact, LLM development is most productive when you turn software engineering best practices up to 11.

Furthermore, I found these tools incredible for actively furthering my own fundamental understanding of computer science and programming. I can now skip the stuff I don't care to learn (is it foobarBla(func, id) or foobar_bla(id, func)) and put the effort where I actually get a long-lived return. I have become really ambitious with the things I can tackle now, learning about all kinds of algorithms and operating system patterns and chemistry and physics etc... I can also create documents to help me with my learning.

Local models are now entering the phase where they are getting to be really useful, definitely > gpt3.5 which I was able to use very productively already at the time.

Writing (creating? manifesting? I don't really have a good word for what I do these days) software that makes me and real humans around me happy is extremely fulfilling, and has alleviated most of my angst around the technology.


We’re enabling a huge swath of humanity being put out of work so a handful of billionaires can become trillionaires.


This is the same boring alarmist argument we've heard since the Industrial Revolution. Humans have always turned the extra output provided by technological advancement into greater overall productivity.


You’re right, who needs jobs when productivity is high.


It would happen in China regardless what is done here. Removing billionaires does not fix this. The ship has sailed.


And also the solving of hundreds of diseases that ail us.


You need to solve diseases and make the cure available. Millions die of curable diseases every year, simply because they are not deemed useful enough. What happens when your labor becomes worthless?


One of the biggest factors in risk of death right now is poverty. Also, what is being chased right now is "human level on most economically valuable tasks", because automated research for solving physics etc. even now seems far-fetched.


Why do you think you’ll be able to afford healthcare? The new medicine is for the AI owners


It doesn't matter. Statists would rather be poor, sick, and dead than risk trillionaires.


You should read about workers' rights in the Gilded Age, and see how good laissez-faire capitalism was. What do you think will happen when the only thing you can trade with the trillionaires, your labor, becomes worthless?

