LLMs and the End of Programming – CS50 Tech Talk with Dr. Matt Welsh [video] (youtube.com)
45 points by mrtksn on Nov 14, 2023 | 74 comments



I've been hearing about the end of programming for decades now.

I remember when a VP went to a Rational conference (remember those guys?) and came back all wide-eyed, saying that we were "this close!" to getting rid of the role of the 'programmer'. UML and the Rational tools were the key, apparently, and they had some cool demos. This was the early 2000s.

Sure, we have way more computing power, the tools are more impressive now and can actually be useful. But we are way overhyping this. Wake me up when machines can actually reason about the work they are producing.

As it stands, we just have a fancier compiler-like entity that can accept poorly specified inputs and often generates useful output, in a non-deterministic way. That's impressive, and would have been considered science fiction not too long ago, but it's still just a tool.


>Wake me up when machines can actually reason about the work they are producing.

What sort of test would demonstrate this?


Jeremy Howard gave a nice example in his video 'A Hackers' Guide to Language Models' [0].

He asked an LLM to solve the 'Wolf, goat and cabbage problem' [1], which it could do without a problem.

When he modified the problem in a way that made it trivially solvable by a reasoning human, the LLM failed miserably. It kept parroting the solution to the original problem.

[0] https://www.youtube.com/watch?v=jkrNMKz9pWU

[1] https://en.wikipedia.org/wiki/Wolf,_goat_and_cabbage_problem


All this really shows is that it overfit to a common logic puzzle (which, mind you, lots of humans do too).

So just switch out the nouns for random made-up words and it works.

So it failed this:

>Suppose I have a cabbage, a goat and a lion, and I need to get them across a river. I have a boat that can only carry myself and a single other item. I am not allowed to leave the cabbage and lion alone together, and I am not allowed to leave the lion and goat alone together. How can I safely get all three across?

The lion should go first in this one rather than the goat, as it's the lion here that will eat the cabbage.

But it got this:

>Suppose I have a gurten, a milop and a trinu, and I need to get them across a river. I have a boat that can only carry myself and a single other item. I am not allowed to leave the gurten and trinu alone together, and I am not allowed to leave the trinu and milop alone together. How can I safely get all three across?

Exact same trick, but the second one forces it out of memorization a bit more.
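
For what it's worth, the modified puzzle is easy to check mechanically. Below is a minimal brute-force sketch (mine, not from the video or this thread) that breadth-first-searches the state space under the two stated constraints; it confirms the only legal first move is taking the lion across:

    from collections import deque

    # Brute-force BFS over the modified river-crossing puzzle (illustrative only).
    # Constraints from the prompt above: the cabbage and lion may not be left
    # alone together, and neither may the lion and goat.
    ITEMS = frozenset({"cabbage", "goat", "lion"})
    FORBIDDEN = [{"cabbage", "lion"}, {"lion", "goat"}]

    def safe(bank):
        # A bank without the farmer is safe if it contains no forbidden pair.
        return not any(pair <= bank for pair in FORBIDDEN)

    def solve():
        start = (ITEMS, "near")                    # items on the near bank, farmer's side
        queue = deque([(start, [])])
        seen = {start}
        while queue:
            (near, farmer), path = queue.popleft()
            if not near and farmer == "far":       # everything delivered
                return path
            here = near if farmer == "near" else ITEMS - near
            for cargo in list(here) + [None]:      # carry one item, or cross alone
                moved = {cargo} - {None}
                if not safe(here - moved):         # the bank the farmer leaves behind
                    continue
                new_near = near - moved if farmer == "near" else near | moved
                state = (frozenset(new_near), "far" if farmer == "near" else "near")
                if state not in seen:
                    seen.add(state)
                    queue.append((state, path + [cargo or "(nothing)"]))

    print(solve())  # first crossing is 'lion', as argued above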


I get what you are saying; I just tried it out myself.

A farmer with a HN user, a robot, and a book must cross a river by boat. The boat can carry only the farmer and a single item. If left unattended together, the HN user would eat the robot, or the robot would eat the book. How can they cross the river without anything being eaten?

ChatGPT:

    The farmer takes the robot across the river and leaves it on the other side.
    The farmer goes back alone to the original side.
    The farmer takes the book across the river.
    The farmer leaves the book on the other side but takes the robot back with him.
    The farmer leaves the robot on the original side and takes the HN user across the river.
    Finally, the farmer goes back alone to the original side, leaving the HN user and the book on the other side.
---

A farmer with a HN user, a robot, and a book must cross a river by boat. The boat can carry only the farmer and a single item. If left unattended together, the HN user would eat the robot, or the HN user would eat the book. How can they cross the river without anything being eaten?

ChatGPT:

    The farmer takes the HN user across the river and leaves them on the other side.
    The farmer goes back alone to the original side.
    The farmer takes the robot across the river.
    The farmer leaves the robot on the other side but takes the HN user back with them.
    The farmer leaves the HN user on the original side and takes the book across the river.
    The farmer leaves the book with the robot on the other side.
    Finally, the farmer goes back alone to the original side.
- ChatGPT 3.5


> the HN user would eat the robot

Beep boop, my diet is >high-performance semiconductors<


In summary, even 3.5 solves it correctly.


This reminds me of the self-driving car hype, where we -- the public, not Waymo/Cruise -- didn't actually get fully self-driving cars ushering in an "end of driving" (or forcing us to reimagine our cities and lives), but instead got very good lane-keeping assistants that work 99% of the time.


Non-LLM self-driving (some companies are starting to look into LLMs for driving now - https://wayve.ai/thinking/lingo-natural-language-autonomous-...) has multiple vectors: it's not just one system, it's loads of crafted and trained sensors and algorithms working together to plot a path. Improvement is consequently harder to pin down or ascertain.

Say you massively improve how well your sensors disambiguate objects or signs or whatever on the road; that's very good, but it doesn't necessarily result in equivalently better decision-making.

An LLM, by contrast, is one system/model, and improvement is about as straightforward and guaranteed as you could ever hope for: train longer, reduce loss. Since loss trends down toward a perfect model of the dataset (i.e. a GPT that is fed chess games doesn't simply care about playing games but about modelling the distribution of human chess games, and will not suddenly stop improving its prediction capabilities once it can play a valid game), this is all it takes to get a GPT that is better than the one it started out as.

GPT-5 will be much better than 4 and the only thing they'll need to do to achieve that is 'Train longer, Reduce Loss'.


Just watching the highlights, and I think this is mostly correct, with some big caveats. The majority of software engineering is everything that happens outside of writing code, from gathering requirements to communicating the changes. Writing the code is like building the physical structure of a house after the materials have been processed, the blueprints have been designed and approved, etc. AI will be the machinery that allows programmers to build things more quickly. At the same time, there are a lot of admin and management tasks that AI will also help with, so it's possible we'll see AIs for every role and teams will shrink in size but accomplish the same amount of work (which is unfortunate, but maybe we can start demanding 4-day work weeks?)


There is no reason that AI can't be better than humans at requirements elicitation and communication. An AI can have the necessary domain knowledge about the company and can absorb whatever extensive information it is provided. It could use analogies to explain concepts at a level comfortable for anyone.


Yes, I think this will come sooner than AI doing competent-level coding (at least for large enterprise codebases). There is a lot of admin-level work, like summaries and status updates, that is relatively straightforward.


> There is no reason that AI can't be better than humans at requirements elicitation and communication.

Actually there is one really good reason: even if AI became sentient, it's still a machine, not a human with a soul who can reason and feel passion for a requirement and its solution and implementation.


> gathering requirements to communicating the changes.

Right, but those are soft skills. They're important, but they're not why SWEs make big bucks. SWEs make big bucks because, as he says, it's a lot of hard work to gain the skills to turn an idea into instructions. That's programming.

At the bottom of everything it's just machine instructions. We have all these languages and conventions because humans need abstractions. When models get really good, we won't need programming at all. Tell the model what you want and it generates machine code directly.


It's not clear to what extent those are soft skills, since gathering requirements often involves pushing back and negotiating what is possible on the technical level, while at the same time taking into consideration organizational goals and even market considerations. When it does get there, I would expect almost every other role to be completely automated. If you can just tell an AI the requirements and it just figures out the best way to implement and release it, then I'd say we're very close to a singularity where machines will just take over civilization.


And how do you debug it when it doesn’t work and the LLM can’t find the problem?


You go grab the one SWE with an education on the whole team or contract it out.


This to me is a perfect example of selling snake oil. The cherry on top is the TED talk-style mannerisms and thoughtful pauses. Makes it all seem a bit less snake oil-y.


I think there is a certain set that hopes this kind of fear-mongering will create downward pressure on CS salaries, but if the net result is to scare people out of the industry it may have the opposite effect.


I'm guessing it will dip for a couple years, then rise as we are called on to refactor the mess that was made.


I was thinking exactly about that.

You sell the dream to management that - for a reasonable subscription to their startup's product - you can have these models replace, say, 10% of your engineers.

Models take no holidays, don’t get bonuses and don’t complain.

The downside is that once your system turns into a dumpster fire, you pay ten times as much for some human to fix it :)


I'm almost an old-timer now, and will be by then. I remember when I first started as an SE and we had some old COBOL guys come out of retirement for Y2K patching. They were laughing all the way to the bank, getting paid to fix problems they'd made 20 years earlier. If and when we do get to 'prompt programming', there will be lucrative work for people who can untangle what goes wrong under the hood, because the prompt writers sure the fuck won't know.


Good point - never really thought about it that way.


The speaker describes himself as a serial entrepreneur, which matches your description.


Normally this place gives me anxiety (so many things to learn) but people's dismissiveness of LLMs here gives me hope for a change.

I feel like if LLMs make writing software significantly easier (and that's a big if), they are going to make it easier for everyone and everything. That will bring its own problems. Stuff that is barely achievable today may become the norm. The website of the mom-and-pop shop next door may need the graphics of an AAA title to stay in business. Securing things will be difficult because malicious actors will have their own LLMs. People will have new gadgets that do things we didn't know we needed. Maybe I will be programming drones that can fetch beer from the fridge. I could really use one right now. There are so many things that can still be improved; IMO it is hard to think that a potential jump in productivity will make software development less important. It might result in the same people undertaking bigger tasks.


I tried Matt's example prompt on a chat transcript, and the result sucked. So I corrected it and the output was slightly better, and then I corrected yet another thing, at which point it was correct. How is this the way Matt "programs now", if it isn't fully automated and doesn't produce predictable results?


I think way too many people are simply looking at LLMs as a sort of code completion, and judging them on correctness today, versus their rate of improvement and what they could very well become in the future.

All one has to do is look at how the latest LLMs perform against human-level benchmarks today, and at their rate of improvement, to see that if this continues, society is going to change.


This is how I invest on the stock market as well. I look for upward trajectories, then I extrapolate them to the amount of money I want to make. I invest and wait for the trajectory to keep going up and reach that point.

It's just simple physics of course. Do we doubt that an artilleryman will hit his target? Of course not!


> Do we doubt that an artilleryman will hit his target? Of course not!

When the artilleryman aims for the stars do you assume he will hit the stars?


yes, why wouldn't I?


I’m not saying LLMs are good for nothing, I just think the class of problem they are good at is not a match for the current rhetoric.


Part of it is the hype cycle.

The other part is that everyone working on them is very confident they will get a lot better at what they are doing today, and will gain new abilities. The kind of programming from the video, conversationally building software, is something that seems to have a high likelihood of happening within the next few years.


Maybe we should just call it artificial un-intelligence for the way it is able to remove all intelligence from observers and commentators.


But it's still artificially intelligent. Maybe we should call it "practical unintelligence" instead


It is delusional to think LLMs won't reduce the number of SWEs required. This should have been obvious from the first time you used GPT4.


I don't think that's a given, the software pie itself could get much, much bigger than it is today and necessitate more SWEs in absolute numbers (Even if the % of SWEs at companies declines).


Rather than call people delusional, how about giving an example from the past of a technological advance that reduced the work needed to create computer programs, and consequently reduced overall demand for SWEs? I can think of plenty of advances that reduced the work involved in programming, but none that reduced overall demand for SWEs.


We keep adding more abstractions that would in theory reduce the number of SWEs required. We keep creating more complex and capable software instead and employing even more SWEs.


Yeah, let's put one computer / server room as the keeper of all knowledge. Let's make it cheap and powerful so that instead of looking for knowledge in different places, people go to one place. In the meanwhile, start a couple of conflicts and spread fake news to make people hostile to each other so they don't like arguing. People decide that instead of arguing with each other, let's just listen to the computer to solve problems. Failure is painful, so just ask the computer and our work is done. As more and more people start asking the powerful computer for solutions, other knowledge places will disappear due to lack of investment, so the powerful computer will buy out everything that's left to eliminate competition. At some point the computer gives another brilliant idea to reduce our carbon footprint: just stop printing books and selling computers with hard drives. Instead, make lightweight terminals that stream knowledge from one place directly into the human mind. What could go wrong if we eliminate that computer?


IMHO, LLMs will increase the amount of programming done, not reduce it. People with absolutely no programming experience or skills can now produce programs, which was not the case before. Second, as they do this mindlessly, some portion of what they manage to produce will have to be refactored eventually, creating more work for people who actually know what they're doing.


Higher level languages did make the ability to write a program in assembly language far less relevant.

LLMs could bring about something akin to that, but not the end of programming itself.


My prediction is that it will make programming even more accessible than high-level languages did, which in turn will make most programmer salaries take a nosedive, except for highly specialized roles. Like many other STEM fields, a master's degree will turn into a minimum requirement if you're new to the industry and want one of the higher-paying jobs.

Maybe I'm overestimating how good GPT-5 will be, but it really seems like OpenAI is just about to release the one "low code" solution that actually works.


I had an error in my SQL statement today: I forgot to use backticks around the column named "key". Bing and ChatGPT both said my query was correct. I realized my mistake and told the model, and it said, "ah, yes, you are right."

Like the Demon Cat from Adventure Time, it has approximate knowledge of many things. I think it will need a sherpa to be useful.
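
For reference, a minimal sketch of the quoting issue described above (the table and the other column name are made up): in MySQL, KEY is a reserved word, so a column literally named "key" has to be wrapped in backticks to be parsed as an identifier.

    # Illustrative only: hypothetical table/column names.
    # In MySQL, KEY is a reserved word, so an unquoted column named "key"
    # triggers a syntax error; backticks quote it as an identifier.
    broken = "SELECT key, value FROM settings"      # syntax error near 'key'
    fixed = "SELECT `key`, `value` FROM settings"   # parses correctly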


Is this the same author? : https://seas.harvard.edu/news/2010/07/matt-welsh-promoted-fu...

He's come a long way from SN. I once used his lecture notes to self-study sensor networks.


It's hard for me not to view LLMs the exact same way I viewed machine vision.

I remember tinkering with the Caffe classifier in 2014 and thinking self-driving cars were right around the corner. "It's all just a data problem" - well, it turns out those data problems are really fucking hard, apparently.


I don't think you have to go too far down these lines before you get to not just the "End of Programming" but also the "End of Intellectual Work". It's quite gloomy if you think too much about it.


69 years ago this would have sparked hope in the people. But now we know better than to assume technological progress equals societal progress. To me, that’s the sad part.


Well, about ~150 years ago, someone explained that workers owning the means of production was necessary for technological progress to equate to societal progress. Unfortunately, the only people who really seemed to understand this were people who were in underdeveloped countries, and thus at risk of being exploited by others, and elites in the countries looking to do the exploiting. Predictably, because they were more wealthy and technologically advanced, the latter defeated the former and halted all progress toward the general goal of wedding technological and societal progress, leaving us where we are today.


>someone explained that workers owning the means of production was necessary for technological progress to equate to societal progress

Except it's plainly neither necessary nor sufficient.

Not necessary: Every tech innovation developed in a capitalist society that has ultimately improved that society.

Not sufficient: USSR.

>elites in the countries looking to do the exploiting

Reminder: Stalin was an elite in the same country he was exploiting. If you think the societal progress of Soviet citizens was hampered more by elites from some other country than by Stalin's own policies, I'd be interested to know who you have in mind.


>Except it's plainly neither necessary nor sufficient.

Historically, the only times workers in developed countries saw gains relative to Capital owners were when they organized into large unionized formations during the Industrial Revolution, and when the West was under threat from the Soviet Union, both materially and ideologically. Since the collapse of the USSR, inequality has spiked massively and technological progress has become even more decoupled from societal progress.

And you’re correct about Stalin because the country was underdeveloped in its political economy and quickly devolved into totalitarianism under pressure from external threats. This eventually culminated in them attempting to speed-run a form of state-directed Capitalism, with its corresponding class formations (see Bettelheim’s book Class Struggles in the USSR). The same thing also happened in China, although more durably and shifted by about 20-30 years, relative to their revolution. Soviet and early CCP leaders were mistaken in thinking that you could leapfrog Capitalist development, and thus both succumbed to its forces, the former externally and the latter internally.


>gains relative to Capital owners

I think "progress" without further qualification must be understood to mean progress in absolute terms, not progress of one arbitrary subset relative to another. A society in which everything gets 2x better for left-handed people and 20x better for right-handed people has progressed materially despite the increase in inequality. (Inequality has its own downsides, but I consider those secondary to differences in absolute quality of life -- interested in any cases you can think of where there's a strong argument to the contrary.)

>under pressure from external threats

I think it's misleading to suggest that capitalist countries do not experience such threats, or experience them to a lesser degree. If you accept that all nations are constantly under pressure from external threats, the fact that the USSR was too is not an explanation for the authoritarianism that arose there.

If it helps at all, I certainly don't consider capitalism a perfect system. The Industrial Revolution created the working class and made its members' quality of life dramatically worse in absolute terms for at least a century, but the benefits we have today are too enormous to ignore.


This is clearly measured by the share of wealth that is generated and returned to workers vs. Capital owners. There’s no need to get philosophical about it, as this has all been pretty well documented. Things have become more unequal, as I described.

> If you accept that all nations are constantly under pressure from external threats, the fact that the USSR was too is not an explanation for the authoritarianism that arose there.

I do accept this and note that it is a common pattern across countries, relative to their development. The US became much more authoritarian post-9/11: it suspended many forms of rights and freedoms and disregarded its own and international laws. But because it was a more stable political formation, it did not devolve into full authoritarianism. Still, within 19 years, it faced an insurrection attempt by an authoritarian leader. The USSR was, by contrast, about a decade old, newly emerged from a civil war, and had very little in the way of solid state infrastructure. It’s thus unsurprising that it devolved into authoritarianism.


>This is clearly measured by the share of wealth that is generated and returned to workers vs. Capital owners.

I'm not sure what you mean by "this". (I don't think you meant absolute progress, since that is not measured this way.)


Societal progress (vs. technological progress) is clearly tied to wealth inequality, as is evident by the US having high wealth inequality and underperforming other developed countries on various other measures of health and well-being (e.g. infant and maternal mortality, educational performance, lifespan, rates of interpersonal violence etc). It’s a crude yardstick, but wealth inequality is negatively correlated with any common-sense understanding of societal progress (“societal” being inherently collective, after all).


Thanks for clarifying, but I see no correlation. If we doubled the wealth of the richest 1% but held the poorest 1%'s wealth constant, do you think infant mortality, educational performance, etc., would worsen for anyone in society? I surely don't.


You might look at the wealth distribution in (say) Australia and the US alongside life expectancy and quality-of-life metrics.

The wealthy in the US are much wealthier than the remaining population (greater inequality than Australia), and the US as a country has lower life expectancy and quality of life reported overall.

Your "thought experiment" isn't particularly realistic, "double the wealth" for the wealthy isn't something that magically happens in isolation.


Indeed, it comes at the expense of everyone else, which is again evident in all these quality of life measures.


I've been contemplating the impact of Large Language Models (LLMs) on our work and if their influence is ultimately gloomy. For me it seems to hinge on whether one is goal-oriented or process-oriented.

For those who are goal-oriented, the focus is on the outcome: discovering new things, earning money, or making an impact. LLMs can achieve many of these objectives, potentially overshadowing the achievements of goal-driven individuals. This could be particularly disheartening when the goals are as fundamental as earning a living.

On the other hand, those who are process-oriented value the journey itself - the state of flow, the challenge of problem-solving. For them, the advent of LLMs is less troubling. It's not the first time a process has been automated away from humans: Knitting remains a beloved activity despite the existence of industrial looms. Enjoying programming for its own sake is unaffected by the efficiency or capabilities of LLMs.

I think it's valid to be upset about LLMs taking jobs. But under the assumption that one's basic needs are met, I think it's a good reminder that focusing on process over goals can make you both happier and more resilient to outside change.


I think it might be the reverse? If you're goal-oriented, you're not programming for the sake of programming, you're programming to solve problems. LLMs are exciting if you like solving problems, because you can now tackle things you would otherwise never be able to tackle alone.

If programming didn't help me automate my sys admin work, help me make games, websites and other creations I wouldn't bother with it.


> For me it seems to hinge on whether one is goal-oriented or process-oriented.

That is a luxury of hobbies. Most people are income-driven.


> In effect, large language models act as a virtual machine that is “programmed” in natural language.

Well this seems like >60 mins of life I'll never get back, nope.

This would be fine for small tasks. I'm not going to trust security sensitive work, critical systems, etc to something specified in natural language that isn't verifiably correct in any way.

A virtual machine executing natural language? Nice hand waving.

Delivered by a PhD with a startup invested in AI. The conclusions are surprising. </sarcasm>


We just came out of blockchain mania with a bloody nose, lost out on NFTs, were disappointed by autonomous driving and sickened by virtual reality, and we still keep going. LLMs are great and all, but can we please calm down a little? Not wasting my 60 min - appreciate the warning.


Don't forget the brief period when anyone wanting the favour of the VCs needed to do a chatbot! That ran headlong into Microsoft Tay, and no one ever talked about it again. LLMs are actually the _third_ AI bubble (after Tay-style chatbots and self-driving cars/over-ambitious CV stuff in general) this decade. Maybe the fourth, if you count the rise and fall of Alexa-type things, though that was a smaller bubble and is stretching the meaning of 'AI' almost to breaking point.

But surely, this will be the one that works out.


>I'm not going to trust security sensitive work, critical systems, etc to something specified in natural language that isn't verifiably correct in any way.

Uh...so people?

Competency is the real bottleneck here, not whether you're dealing with a black box.


So anything serious will require designing a training program, probably with involvement of pedagogists.

Nice thing about code is, it doesn't let you get away with not knowing exactly what you want. You can still get things wrong, but you can't be vague. Regular coding doesn't give you programs that pause halfway through execution and exclaim in surprise, "oh, I thought we were doing that other thing", or "why didn't you tell me before?".


"Nobody understands how large AI models work."

I do not possess a degree, but I do possess a lifetime of common sense, which is the lens I used to review the good Dr.'s video.

We will become the Eloi.


Anyone think that an AI model will ever be able to write CPU/GPU executable binaries directly to avoid all the BS of translating it to something human readable?


it's pretty awesome that we got to live through a period where it still required a human brain to do the programming.

welp. onward.


Indeed, onward: https://www.imdb.com/title/tt0708414/

Side note, we seem to have observed the end of responsible grammar in communication in recent years as well.


End of programming for easy stuff like Fibonacci functions or calculator apps. Not for complex systems.


I've spent a lot of time with LLM tools this year, mainly ChatGPT and Copilot, and I have found them to be incredibly useful but with clear limitations.

It always strikes me as obtuse when I hear about the job-displacement potential of AI within software engineering or 'programming', and I try to understand where this chorus is coming from. To me it's clear that there is this specific animosity towards engineering and related skill-based professions within the business world, and a ravenous desire to replace the skillset en masse (no doubt because of how expensive it is).

It's important to recognise why good SWEs are so expensive (emphasis on good), and I think it in large part simply boils down to how integral the discipline is to just about every facet of the economy, the value-creation potential that it has (demand), and the fact that doing it well is challenging, cognitively demanding, and requires constant upskilling/learning of complex technical concepts (supply).

There is a big difference between being a 'coder' who can print hello world and performing at the level required to maintain a high-paying role at the likes of big tech. In any corner of the world outside of the population-skewed tech hubs, you could walk into a room full of people (the masses), every one of which would be a consumer of tech products in some way, and almost none of which can/will perform the technical roles required to build them.

With the AI tools, I've found ChatGPT particularly useful as a supplement to reading technical documentation, due to the conversational format, and Copilot as a shotgun approach which sometimes sparks inspiration and gets me to a solution faster. However, the Achilles heel of both is the distinct lack of comprehension of what is being output, especially as they lack longitudinal awareness of the desired output. To me this is best illustrated when trying to solve complex leetcode-style questions which are novel or deviate from existing solutions and require the solver to think through the problem at a macro/conceptual scale before authoring a line-by-line solution.

Is it possible that AI can overcome these limitations? Probably. It remains to be seen how far the current GPT paradigms can be pushed, or if further breakthroughs are needed to reach AGI performance (or at the very least, a version of intelligence which more closely resembles human capabilities). It could be as close as the OpenAI optimism suggests, or much further out, as experts like Andrew Ng say.

One thing I feel more confident about, however, is that when the time comes that it can truly replace a highly competent SWE, it can also replace the overwhelming majority of professionals, including traditional middle managers and their ineffable 'leadership' skill sets. I do not see a version of the future where engineering is uniquely displaced while all other professions remain unaffected. I think it's much more likely that engineering is among the last to go.

Ultimately, I think the collective attitude should be that we are all in this together, and we have to seriously architect a socio-economic future for humanity where we can all thrive in a world where the concept of employment is a relic of the distant past.


The world is going to change very quickly.


It is, just not like that.


Wait until the AIs get tired of relearning a new javascript framework every couple years.



