At the bottom of the article is the blurb about the author:
> Matt Welsh (mdw@mdw.la) is the CEO and co-founder of Fixie.ai, a recently founded startup developing AI capabilities to support software development teams. He was previously a professor of computer science at Harvard University, a director of engineering at Google, an engineering lead at Apple, and the SVP of Engineering at OctoML. He received his Ph.D. from UC Berkeley back in the days when AI was still not playing chess very well.
It's still a definite conflict of interest; the author is making an argument that he has a financial interest in advancing, regardless of how true it is.
If this article were some sort of “Don’t worry, the AI-enhanced world is gonna be just fine!” puff piece, then the author’s credentials here would be quite discrediting. I think it takes balls to start an AI startup and then post an article urging caution about AI and saying that recent developments should “scare the living daylights out of people like Nick Bostrom… who are (rightfully) concerned…”
Like sure, it’s in his interest to portray AI as being powerful. But this article felt pretty candid about what the effects of that power could be.
It's the exact opposite of taking balls - it promotes a stance that is in direct alignment with his startup's business model, a pretty strident conflict of interest for an ACM article. It's a fire-and-brimstone treatise, and he's the prophet who is here to shepherd you to the promised land.
Seriously, this is the text from the home page of Fixie.ai
"We're setting out to change the way the world builds software, using AI as a foundation.
We're founded by a team from Google and Apple with expertise in AI, systems, and the web. We're funded by Zetta Venture Partners, SignalFire, Bloomberg Beta, and others.
We're hiring for multiple roles."
Yes, he is a highly competent and experienced individual, but so were legions leading up to the AI winter. I can't place much faith in someone who has something to sell and who speaks with as much confidence as a ChatGPT response - an academic, no less - about the near-term future of this space.
I’ve always felt that the actual code writing is the least interesting part of coding. I find it fun, but I feel that the real engineering is largely the stuff that comes before (and some after) the code is written.
I’m not yet sold on the idea that it will replace coding. It’s like autonomous cars. 99% is far far too low. But it’s also not to be dismissed. A brilliant tool with many use cases.
I think we have invented the Enterprise “ship’s computer.” It won’t answer “how do I solve this problem?” But it probably will answer things like, “if X and Y and Z, what might W be?” We get to be the big brains standing around a terminal! I can’t wait to have a holodeck where I can construct a world just by asking for things.
Yes, but I think writing code is more important than people make it sound. Because writing something that executes correctly is easy. But writing it so that others can understand it later and change it easily and safely, with a compiler to make sure nothing goes wrong - while still having the computer execute it correctly - THAT is the real challenge.
Which is also the problem with any AI. If the AI can program, then not only does it have to write code; it will also have to read existing code and make adjustments to it, based on request. This is the real challenge. "Hey, I have this 1MM line software, please change feature X to do A on top of B but keep C working". If the AI can do THAT, well THEN that's the end of programming by humans and then I'm gonna escape to a small island ASAP because the singularity will be soon after.
I think, if possible, the faster way is to skip lexing and parsing altogether for the AI's input and output; it's a waste of resources. Probably start fresh with a new kind of assembly and AST, so the AI could have better training data and a better approach.
True. I would imagine training the AI to understand assembly first, then having it write the simplest higher-level building blocks, like `foldl`. And make the language pipe-only. No closures or anything like that - just pipes all the way down. The machine doesn't need "naming" or name-based identifiers, so refactoring is less likely to break things, etc.
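For what it's worth, a left fold really is a surprisingly complete building block: `map`, `filter`, and sums all fall out of it. A minimal sketch in Python (the derivations here are just illustrative, not anything an AI produced):

```python
# foldl(f, init, xs): left fold, the "simplest building block" from the
# comment above. Other list operations can be expressed in terms of it.
def foldl(f, init, xs):
    acc = init
    for x in xs:
        acc = f(acc, x)
    return acc

# Sum and map, both derived from the same primitive:
total = foldl(lambda acc, x: acc + x, 0, [1, 2, 3, 4])        # 10
doubled = foldl(lambda acc, x: acc + [2 * x], [], [1, 2, 3])  # [2, 4, 6]
print(total, doubled)
```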
I generally agree. I think that today it’s a brilliant tool for throwaway code. Like when you just want the output. Especially if the output can be verifiable or is okay to be fuzzy.
I have done some experiments with text-davinci-003 where, given a program update request and a directory listing, it can select the relevant files and then make the requested update. It's not a million lines, though.
As someone who tried to feed a 50-line code base into ChatGPT only to watch it fail with very hard errors (no, I don't mean it answered incorrectly - it crashed), I am not very impressed.
The author seems to forget that while an AI can create text that looks like code, it can never prove the code is working correctly (see halting problem).
As with many things, AI will excel at the simple tasks that just require pattern matching and generation to work.
Nothing more but also nothing less.
People get carried away by how great AIs are at spitting out complex sentences from a huge training set that look like a human would have written them.
Winter v2 is coming.
Besides, there's no training set for the specific domain problem I'm currently solving. It is - by definition - not generic.
The problem domain is not bubble sort or fizzbuzz.
> it can never prove the code is working correctly (see halting problem).
The halting problem doesn't say anything about whether a program could prove things about another program, just that it can't do it for all programs.
Lots of programs obviously do or don't halt, and you can prove it in a couple lines ("there's an infinite loop here and every execution will enter it" or "all the loops are bounded by constants" etc). There's nothing stopping an AI from noticing things like that just as much as humans can. For more complex programs it obviously gets much harder, but it's very hard for humans to prove things about programs too!
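To make that concrete, here is a rough Python sketch of the kind of cheap check a tool (or an AI) could run: it declares a snippet "obviously halting" only when every loop is a `for` over a literal `range()`, and gives up otherwise. The function name and the exact rules are illustrative, not any real tool's API:

```python
import ast

def obviously_halts(src):
    """Return True only when every loop in the source is a `for` over a
    range() with constant arguments, i.e. bounded by a literal. Anything
    else returns None ("can't tell") - mirroring the point above that
    deciding halting is easy for *some* programs even though it is
    undecidable for all of them."""
    tree = ast.parse(src)
    for node in ast.walk(tree):
        if isinstance(node, ast.While):
            return None  # this naive check doesn't reason about while loops
        if isinstance(node, ast.For):
            it = node.iter
            bounded = (isinstance(it, ast.Call)
                       and isinstance(it.func, ast.Name)
                       and it.func.id == "range"
                       and all(isinstance(a, ast.Constant) for a in it.args))
            if not bounded:
                return None  # loop bound isn't a literal constant
    return True

print(obviously_halts("for i in range(10):\n    print(i)"))  # True
print(obviously_halts("while x:\n    pass"))                 # None
```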
I don't think the halting problem's got anything to do with it, because I certainly also can't prove, in general, whether some code I have written halts or not either, but that hasn't held me back from churning out vast quantities of text that, at first glance, might also resemble good code.
But I agree with your sentiment: to get AI to write the code you want, you will need an engineer to work with it.
I do expect that future AI code generation models will learn more about a specific code base over time, partially overcoming domain specific knowledge gaps.
You can reason about why you believe your code is correct or incorrect, and ultimately come to an understanding of whether your reasoning holds, but AI is just working off of patterns from its data sets.
I am interested in the convergence of AI-generated code and programs-as-proofs: can I work together with an AI, using a language with a highly expressive type system, such that we co-develop a specification in types and properties, then co-develop the implementations that fit the specification?
It's not practical to do that sort of thing alone, as there's a lot of tedious nonsense involved. Maybe it will be more practical with an AI assistant.
Whether current AIs are more or less somewhat close to the basics of how humans might be approaching code writing is more of a philosophical point, but a "translation room" view of the human mind might on occasion make me feel like I'm just working off patterns from my data sets. :-)
LLMs aren't just "matching patterns"; they demonstrate emergent behavior, allowing them to do much more than what they were explicitly trained on. At the moment AI models might not be able to utilize "train of thought" reasoning like we can, but the distance between these emergent behaviors and "reasoning" is much shorter than we realize (and will continue to shrink in the future).
So, do you really think these computational AI systems can reason and solve problems equivalently to, or better than, humans? If they aren’t matching patterns - which I think the literature makes clear they are - why does ChatGPT fail at solving logic problems so badly? It is already performing impressively enough that people think it can replace all knowledge work, yet it fails miserably at logic problems, and even those who claim it replaces Google don't seem to realize it fails at providing answers to basic facts. I really don’t see any evidence that this type of AI, which is quite advanced for what it is, is the type of AI that dissolves technical education and jobs. Another question: if this is on the road to being human intelligence, how is it that someone like Ramanujan received no formal schooling and yet could create solutions that seem very far beyond the capabilities of these systems which we are claiming will replace humans? Further, if its intelligence is so superior, why does it require so many watts of power and produce so much thermal by-product, when beings that actually do perform in a superior way require little more than a bowl of rice every few days to continue solving problems? Is there not some profound, “emergent,” difference in sophistication?
I think what you’re missing in a lot of these comments is that AI doesn’t need to operate at the peak of human intelligence like Ramanujan in order to absolutely gut a lot of job markets.
Like, most developers aren’t writing proofs that a series converges to pi or building Apollo modules where everything has to work perfectly. I’m not gonna act like what we do is easy, because it’s not. However, I’ll never forget the day I wrote a WebGL shader in GLSL with Copilot’s help, and then clicked over from a GLSL file to a JS file, and Copilot immediately wrote the code to create a THREE.ShaderMaterial, import my shader (with the right file path), and fill out the uniform names and types perfectly on the first try. It took under 2 seconds, and at the datacenter maybe a fistful of watts, and it left me fucking speechless.
I’ll admit it’s pretty weird that these models can write working code in a multi-language codebase but can’t add two four digit numbers. They’re almost savant-like the way they fail at basic tasks while excelling at more complex ones. But given that they are in fact excelling at complex tasks, I’m worried about what could happen if, say, every enterprise dev shop laid off 60% of their headcount and kept the remaining 40% just to do code review. That would be very bad for everyone in the industry.
I’m not missing that. The point about Ramanujan is to show there is something fundamentally different - and at a massive scale of difference - between human intelligence and these AI systems. It’s not that average developers are math prodigies. It is generating code based on predictions from a large corpus of training data. How well does it do at debugging that code (assuming the bugs / code it finds are not the exact kinds posted to websites thousands of times)?
Agreed. The article also speaks about engineers using these LLMs still as a tool - not necessarily replacing humans, but humans wielding them as tools. Copilot is a great example.
Dinkum raised a great point about the models spitting out falsities or having other issues. We still will have and need human-in-the-loop engineering so that we can discern which outputs are true (or not)!
I do not share your optimism. Right now we are seeing what AI can do with one hand tied behind its back. In my experience, when Github Copilot makes mistakes they are frequently ones that it could’ve avoided if it ran static analysis on its suggestion and realized it was mentioning variable names that aren’t in my code. If the next version of Github Copilot integrates with basic VS Code tooling like Intellisense, it will be insanely powerful.
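As a toy illustration of the kind of static check described above - flagging a suggestion that reads variables not in scope - here is a rough Python sketch. The function and its `names_in_scope` argument are hypothetical stand-ins for what real editor tooling (symbol tables, Intellisense) would provide:

```python
import ast
import builtins

def undefined_names(snippet, names_in_scope):
    """Parse a suggested snippet and report any variable reads that are
    neither in the surrounding scope, nor builtins, nor assigned within
    the snippet itself. A deliberately naive sketch: it ignores ordering
    (a read before its assignment is not caught)."""
    tree = ast.parse(snippet)
    known = set(names_in_scope) | set(dir(builtins))
    assigned = {n.id for n in ast.walk(tree)
                if isinstance(n, ast.Name) and isinstance(n.ctx, ast.Store)}
    reads = {n.id for n in ast.walk(tree)
             if isinstance(n, ast.Name) and isinstance(n.ctx, ast.Load)}
    return sorted(reads - known - assigned)

# A suggestion referencing `user_cofig` (a typo for a name that actually
# exists) would be flagged before ever reaching the user:
print(undefined_names("total = user_cofig.limit + offset",
                      {"user_config", "offset"}))  # ['user_cofig']
```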
Like, I see a lot of people saying things like “It’s just pattern-matching, it still needs an engineer to guide it”. That’s true of this version of Github Copilot. If it improves at the same clip as some of these other large models, I suspect it will become much much less true in a short time horizon.
It’s bad at solving logic puzzles today. We’ve just spent a decade-ish watching AI leapfrog over hurdle after hurdle that people said were going to be “impossible” or “would require reasoning/creativity” or “are at best 10 years away”. It feels foolish to assume we won’t see another unexpected leap.
I would be unsurprised if GPT-4 or 5 suddenly becomes really good at math and logic problems. And long before that I expect AI to be hard to distinguish from a human in terms of day-to-day executive function stuff.
Essentially, no one knows. We are all just extrapolating something we don't understand to either future failure or success depending on a gut feeling. Some, like you, are saying to expect the unexpected, while others are saying to expect this to stall out. I've even heard the OpenAI CEO couch his predictions behind "assuming we don't stall out." In this article and elsewhere it's been pointed out that the core abilities of large language models are not well understood. Without this understanding, everyone is just guessing at the future.
Why would a language model like this even be good at solving logic puzzles - even GPT-20? I'm not sure what, specifically, knowledgeable people said would be impossible ten years ago that has been proven wrong now. Right now it's not even good at answering facts it has been trained on, let alone actual reasoning. Why does Skynet seem more likely than this specific technology hitting a plateau?
> The author seems to forget that while an AI can create text that looks like code, it can never prove the code is working correctly (see halting problem).
Just equip it with a paper tape, give it the current cell and state as input and use the output to decide whether to move the tape or write to it. I'm sure you could encode a universal TM in a handcrafted neural net this way.
Maybe it's cheating, but after all, this is the only way humans can do universal computation- we can't hold an infinite tape in our head either, and neither can a CPU, we have to give it sufficient scratch space to act as the tape.
Alternatively, perhaps there's a (very large) neural net that can prove things about Turing machines that aren't too large. It only has finite input and finite output (it's not Turing complete) but it can prove stuff about smallish Turing machines, providing the proofs aren't too long. That seems reasonable, because that's what humans do when we prove stuff about Turing machines! Perhaps neural nets could never actually do this, either they're fundamentally not capable or we never work out how to actually find one that does, but it seems possible?
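To illustrate the "finite controller plus external tape" idea, here is a minimal Turing machine driver in Python. The transition table `delta` plays the role the fixed-size network would play; this is just a sketch of the argument, not a claim about how such a net would be trained:

```python
def run_tm(delta, tape, state="start", head=0, max_steps=1000):
    """Minimal Turing machine driver: the controller (delta) is a finite
    lookup table - the role a fixed-size neural net would play - while
    the tape supplies the unbounded scratch space discussed above."""
    tape = dict(enumerate(tape))  # sparse tape; blank cells read ' '
    for _ in range(max_steps):
        if state == "halt":
            break
        sym = tape.get(head, " ")
        state, write, move = delta[(state, sym)]
        tape[head] = write
        head += {"L": -1, "R": 1}[move]
    return "".join(tape[i] for i in sorted(tape)).strip()

# A toy machine that flips every bit, then halts at the first blank:
delta = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", " "): ("halt", " ", "L"),
}
print(run_tm(delta, "1011"))  # 0100
```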
They aren't a subset of reasoning; they are data points that could be reasoned about (like whether something could be recognised as an instance of statistical correlation, and whether it makes sense from a semantic point of view). Data points could be statistical or non-statistical, and it's the reasoning mind that distinguishes between the two alternatives, based on other notions of the world's phenomena that make statistics distinguishable from non-statistics.
A method of producing statistical correlations is a product of a reasoning mind and could be thought of as a subset of reasoning (if by "subset" we mean "everything produced by a reasoning mind via an act of reasoning"), but in order to recognise this "subset", another reasoning mind would first have to internalise the notions of statistics, which are phenomena external to the act of reasoning itself. And "the act of reasoning" hasn't been proven to be "just something that produces correlations"; it's more than that, and nobody knows what exactly it is. Otherwise AI would have been solved a long time ago.
Is that really how you reason, though? Do you think you really reason by having some kind of subconscious statistical computer in your brain somewhere?
My understanding was that it does a limited form of reasoning via "train of thought." Maybe it can only transform the ideas in the ways it was trained with but when you break it down there are only so many primitives here.
[UI -> trained AI models -> product] seems like the wrong pipeline to me.
I'm old-fashioned about AI: over a decade ago I was fascinated by a minimax-based bot in a game. It looked really smart - I couldn't beat it at the deepest tree depth the machine could support (with less pruning). My point is that [UI -> "heuristic" + [business] logic -> product] is the way they should utilize these new neural-network-based AIs. Don't spit the product out directly, because the product currently looks like a "heuristic", not a usable thing in business. Try putting logic in front of and behind it - especially far away from the final product - so that the result will look like legit magic.
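For reference, plain minimax - the old-fashioned heuristic mentioned above - fits in a few lines. This is a generic sketch with hypothetical `children`/`score` callbacks, not any particular game bot:

```python
def minimax(state, depth, maximizing, children, score):
    """Plain minimax: children(state) yields legal successor states,
    score(state) is the leaf heuristic. No alpha-beta pruning here -
    this is the 'less pruning, deeper tree' version."""
    kids = children(state)
    if depth == 0 or not kids:
        return score(state)
    values = (minimax(k, depth - 1, not maximizing, children, score)
              for k in kids)
    return max(values) if maximizing else min(values)

# Toy game: the state is a number, each move adds 1 or 2, and the score
# is the value itself. Max and min alternate over three plies.
best = minimax(0, 3, True,
               children=lambda s: [s + 1, s + 2],
               score=lambda s: s)
print(best)  # 5
```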
If anything it's rather the start of programming in my opinion, or rather the start of a new era.
We build endless higher-level abstractions on top of each other in programming; this is just another one.
I'm not bullish on AI actually understanding something in the near future; it'll rather continue to be something more akin to mimicry, albeit amazingly expressive and accurate.
I think this is rather going to become an amazing tool to help reduce repeating already solved problems. But humans would still be needed to plumb it together and adjust it to meet some final need.
If the AI can do the whole thing then whatever you're trying to create probably already exists and there's unlikely to be a need for it in my opinion.
Can someone that understands ML better than I tell me if there is a point where the AI can indefinitely train on data generated by other AI? If AI is trained on human development work product and then it eliminates human developers, will the capabilities of the AI be stuck indefinitely at the level of the software from which the models were trained? Not sure if I'm making sense, but the crux of my question is: can AI effectively generate their own training data sets? If not, then I don't see how it could replace an industry.
> [Ben Weber] set about organizing a tournament for StarCraft AI agents to compete against each other, hoping to kick-start progress and raise interest.
> The announcement for the tournament was made in November of 2009, and the word soon went out on gaming websites and blogs: the 2010 Artificial Intelligence and Interactive Digital Entertainment (AIIDE) Conference, to be held in October 2010 at Stanford University, would host the first ever StarCraft AI competition.
[...]
> the only way to really test and improve the agent would be to play against skilled human players. Flush with pride that the agent could defeat the built-in AI, we played a game during the class against John Blitzer, a post-doc in Dan’s group who played ranked ladder matches on International Cyber Cup (iCCup).
> It was a disaster.
[...]
> Manually iterating through parameters and making adjustments would take far too long, however.
> Instead, we let the Overmind learn to fight on its own.
> In Norse mythology, Valhalla is a paradise where warriors’ souls engage in eternal battle. Using StarCraft’s map editor, we built Valhalla for the Overmind, where it could repeatedly and automatically run through different combat scenarios. By running repeated trials in Valhalla and varying the potential field strengths, the agent learned the best combination of parameters for each kind of engagement.
[...]
> Recruiting Oriol as our “coach” helped us apply the final touches. Oriol had played StarCraft at the pro level before retiring and turning to a life of science, and he joined the team as our coach, designated opponent, and in-house StarCraft expert.
> With a high-level human expert to test against and all of the algorithms in place, the agent progressed rapidly in the last few weeks, culminating in that first victory against Oriol mere days before the final submission.
> Like OpenAI, DeepMind trains its AI agents against versions of themselves and at an accelerated pace, so that the agents can clock hundreds of years of play time in the span of a few months. That has allowed this type of software to stand on equal footing with some of the most talented human players of Go and, now, much more sophisticated games like Starcraft [2] and Dota [2].
Note that they are lucky to have a controlled environment there, meaning that experimentation is cheap and the goals are clear - something that is not always the case in "real" life!
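The "Valhalla" training loop described in the quoted story is essentially a parameter sweep: try each combination of potential-field strengths in simulated engagements and keep the best. A hedged sketch (the simulator here is a fake stand-in, not anything from Overmind):

```python
from itertools import product

def sweep(param_grid, run_trial):
    """Exhaustive grid search: try every combination of parameter values,
    score each with run_trial (a hypothetical scenario simulator that
    returns e.g. a win rate), and keep the best combination."""
    best_params, best_score = None, float("-inf")
    names = sorted(param_grid)
    for combo in product(*(param_grid[n] for n in names)):
        params = dict(zip(names, combo))
        score = run_trial(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Toy stand-in for the simulator: peak win rate at attract=2, repel=1.
grid = {"attract": [1, 2, 3], "repel": [0, 1, 2]}
fake_trial = lambda p: -((p["attract"] - 2) ** 2 + (p["repel"] - 1) ** 2)
print(sweep(grid, fake_trial))  # ({'attract': 2, 'repel': 1}, 0)
```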
> Fast-forward to today, and I am willing to bet good money that 99% of people who are writing software have almost no clue how a CPU actually works, let alone the physics underlying transistor design.
I don't fully agree with this. A lot of folks in systems land have mechanical sympathy and think deeply about memory, IO, and processors. Things are mostly built upon underlying abstractions. With AI becoming mainstream, some of the abstractions will be pushed down and some might evolve further.
Welsh was a tenured Harvard CS professor until 2010. While there may be some hyperbole, it seems like he's speaking from some experience of understanding a top-tier undergraduate curriculum (and if you don't consider Harvard top tier, he did his PhD at Berkeley before that). My guess is that his bar for "understanding" is higher than yours and the other commenter's, who suggests 50% of programmers understand CPU design. Even if you took an upper-division HW design class 20 years ago, a fair bit has changed, and there's a good chance you've forgotten a bit since then.
He can be tenured all over the place, but it doesn't make his statements true. I think this is too charitable. He is clearly trying to make a very sloppy point that "99%" of programmers know nothing about hardware and are just making web apps, or something to that effect. It's too charitable to suggest his statement sets some ultra-expertise bar. He's not talking about quantum physics or anything like that; I am sure he is trying to convey that 99% of programmers don't know surface-level things like branch prediction or what a superscalar pipeline is. He's not talking about the intricacies of Sandy Bridge or anything like that. And he's wrong. He's just trying to make a hand-wavy point and relies on his pedigree to be taken seriously.
He may be right in saying that 99% of devs don't know what's underneath, but the 1% remaining is still a larger crowd than the total number of devs in the '80s... and they are still essential to the industry.
I understand how stuff works all the way down, but modern CPUs with register renaming are mind-breaking; then add JIT to things, and there are a lot of tricks being used to make things faster that are beyond my stack depth.
I think it depends on how high our threshold for "understands how a CPU actually works" is. I mean, Sandy Bridge had 19 pipeline stages; I certainly couldn't list them all off the top of my head and describe what they did!
Hm. I mean at the level of "you sandwich some doped semiconductors so that charge at one point controls the current through two other points" I was sort of thinking yes... but I do have a tendency to overestimate what people know and upon further reflection you're right, 50% is way too high. Can haz 30%?
This % goes waaay down if you're talking actual physics: just how quantum physics results in a semiconductor band gap, and what equations you need to write down to explain the electric behaviour of that sandwich.
I can do that, at least for the basic layouts, but my own lack of (clear) understanding would be in the "middle" of the stack: going up from logic gates, and down from scripting languages.
Nah, these tools will empower programming. Almost no one writes assembly any more. We've got high-powered tools that already abstract away a ton of software engineering complexity so people can do what they want to do quickly and inexpensively. But I have a very hard time believing that "programming" — the act of writing out precise textual instructions in a file for a computer to read and execute — is going anywhere. It's a very elegant and powerful means of interacting with a machine. Similar to how the written word remains one of the most powerful mechanisms for interpersonal communication despite the massive and powerful media tools at our disposal.
These requirements need to be precise, unambiguous, and complete.
The AI could help the requirements writer to “fill in the gaps,” but the main onus is still on the author of the requirements.
As mentioned, we don’t program in machine code, anymore. Maybe the result of an AI-assisted construction would be machine code, but it would take an AI to test, debug, and maintain it.
I know that every C-Suite denizen has been dreaming of getting rid of “annoying engineers,” for my entire career, but that won’t happen, as those requirements will look a lot like … code … and I guarantee that the C-Suiters will have zero patience for writing it.
AI will write the requirements. In fact, I wouldn’t be surprised if AI comes for high cost centers like programming and high information decision-making (the C-Suite) at roughly the same time.
We could spend years in a lopsided state where groups of investors fund an AI that operates on an investment thesis, delivers commands to humans who manage physical labor in areas that have been tough to automate (like the remaining Amazon warehouse jobs), and handles on its own all of the work that would normally be done by office employees.
With any project, the first step is the customer describing their business needs. Analysts turn those requirements into a complete set of specs, then programmers write code that performs the specs.
There's no way AI is going to take the place of the customer, since the AI doesn't know what the customer needs, nor will it take the place of the analyst since even the smartest AI can't deal with a customer who is unable to clearly articulate their requirements. Hence the need for human analysts.
The AI might be able to help the programmer turn the specs into code (see Copilot) but it will always be hamstrung by not fully understanding (as a human would) the actual requirements.
With a sufficiently advanced and independent AI, the business need would be "make money", or perhaps hopefully "make money while obeying the laws, acting ethically, and doing something publicly beneficial".
I think people are making pretty wild assumptions. But OK, if the AIs are this advanced, then why would there be investors funding AI? The AI systems in the world would just be capable of solving any problem, with no investment. Why would there be investment? Presumably anyone with access to an AI (and presumably "open" AI will exist) could easily have any conceivable software written in seconds (if that long). Furthermore, what would be the need for software, as the AI would have taken all the jobs that needed software in the first place? Would it just be to run the farm equipment for our "post-scarcity" whatever?
I would love this to happen because finally I can have my own one man paramilitary industry like Tony Stark.
Then I just need to procure the right assault weapons. Then I will be (at least my bunker will be) unstoppable. No need to hire mercenaries anymore.
Maybe one day it will get even better, that I can have my own attack units like robot dogs/personal tanks equipped with insane amounts of assault weapons and javelins. Then I can mount an attack against anything. A person, an organization, a city, a small government. A true one man army, with AI controlling everything.
I’m guessing this is a troll - or are you suggesting you would be the only one with this AI? If it’s a troll, then great, because I think people are leaving their brains out of these conversations about AI / ChatGPT.
As it stands, I think this entirely won't work. To this day we still can't define what we want a program to do. So much of successful software development happens in the journey of making it: the constant shaping, the adding of features, and product and engineering discovering users doing unexpected things that we then try to capitalize on.
Waiting for Monty PAIthon, when AI will develop a true sense of humor and we all will laugh about the programming bloopers which it will invent.
But there is also the great danger of the Killer Joke, which results in the instant death of anyone who hears it. A malicious AI could re-invent the Killer Joke and exterminate humanity.
>Yet I think it is time to accept that this is a very likely future, and evolve our thinking accordingly, rather than just sit here waiting for the meteor to hit.
You know, I would love for all business app development to die in a wretched fire of scum and villainy. Not that it’s bad, but it’s probably some of the most mundane work that programmers could do. The people giving out busy work or bullshit jobs won’t be able to affect actual people. They’ll just have AI do it, which is great!
Sure, that's why those systems take natural language and produce code. But the language is used to tell the system what the program does, not how to do it. AI systems tend to be good at filling in the blanks with sane default behavior.
New technology taking my job is the least of my concerns here. In my opinion there is simply a bifurcation of software underway: traditional programming on one side, and training/learning-based technologies on the other.
Traditional software will continue to be chosen when we want predictable, unbiased, mechanical execution of instructions. There are many areas where this is preferred, and I don't see that changing. Mechanical and later silicon calculation devices are invaluable for their speed, but the greatest benefit is that they are predictable and consistent: they do not make errors unless the design is in error.
AI, machine learning, and other training/learning-based technologies also have many useful and tantalizing applications. For applications such as those that enhance productivity, provide entertainment (e.g. art and music), or autonomously perform tasks where mistakes can be tolerated, these training/learning-based technologies will reap great things.
However, for many applications we don't want a complex device, whose behavior, while it can be ostensibly tested, cannot be completely understood and examined to be provably correct. Or, whose faulty action cannot be definitively reproduced and root-caused after a mishap. Or, whose 'black-box' can be infected or influenced by bad actors in a manner that is undetectable.
I don't ever want to see a radiation dosing machine that is clever, an industrial control process that is expected to be trained to infer its own decisions where injury or life is at stake, nor do I wish to argue with a machine to open my pod bay door.
Alternatively, perhaps legal precedent will just establish the degree to which machines are allowed to make mistakes, and if they make fewer than a human, we will just accept the cost/benefit of injury, loss of life, or evil as 'practical', and move on. 'Actuary Shrugged'?
The most ominous prospect is if humanity fails to evolve past war and conflict faster than this technology's destructive capability. Maybe Fermi will get his answer.
FWIW, I'm not convinced that this piece will be right for more than a few years into the future. I think it's an interesting discussion piece, though.
Context matters a ton and a lot of programming is understanding context and requirements and goals and needs and economics and those, while trainable, will suffer from the slowness and lack of richness of the I/O between the real world and the model (this interface is not improving nearly as fast as the models themselves).
They make the (common) mistake of equating “programming” with implementing some algorithms - which, sure, AI will probably have down sooner rather than later. That is a small part of what “programming” has come to be, though.
This article makes me think of two other articles which discuss the matter. One by Bartosz Milewski[1] and one by Eli Bendersky[2]. The main idea of both is that programming can change significantly from what we do today but will never be obsolete. It will just move up a step on the meta ladder.
When I started programming as an intern 7 years ago, I asked simple questions around the office and they told me to just google it. My approach was to understand every single bit that goes on, but it seemed like nobody truly knew what they were doing.
Fast forward to today, the new programmers I meet don't truly understand what the code does. If a problem pops up, the first action is googling it for hours. Very few people have this high level structured thinking ability to filter out the noise to distill a problem.
I believe AI will accelerate this, where programmers will know even less about what goes on in the program and struggle more when stuff doesn't go as expected. Over the years, Google became less useful as all the SEO spam took over, and I noticed how people couldn't come up with solutions on their own when a Google search yielded no results.
Now I read these articles every other month about another tool, framework or article announcing the end of programming as we know it. Nothing ever happened and truly experienced developers became even more valuable.
If we cross this fine line between aiding and replacing developers, we set ourselves and the next generation up for a bad time in my opinion...
"We are no longer particularly in the business of writing software to perform specific tasks. We now teach the software how to learn, and in the primary bonding process it molds itself around the task to be performed. The feedback loop never really ends, so a tenth year polysentience can be a priceless jewel or a psychotic wreck, but it is the primary bonding process--the childhood, if you will--that has the most far-reaching repercussions."
Bad'l Ron, Wakener, Morgan Polysoft
Accompanies the Digital Sentience technology
"'Abort, Retry, Fail?' was the phrase some wormdog scrawled next to the door of the Edit Universe project room. And when the new dataspinners started working, fabricating their worlds on the huge organic comp systems, we'd remind them: if you see this message, always choose 'Retry.'"
Bad'l Ron, Wakener, Morgan Polysoft
Accompanies the Matter Editation technology
I will wax a touch philosophical here, but I believe perfect systems do not exist unless they are purely within thought. Implementations will experience failure in both their physical and logical components, logical in the sense of unforeseen nth-degree effects. This is when expert knowledge is needed, unless you have already designed such a generalized model that it captures, taxonomizes, reacts to, and optimizes for all events until the end of time. From an educational standpoint, who cares if you know how to add a node to a binary tree using C++? It's not the technical details but the struggle to understand recursion until it finally clicks. It's the sculpting of the computational mind. Until the essence of controlling and directing computation by computational machines themselves is satisfactorily solved (for then you've become god), expert humans will always be needed, just not in abundance.
Throwback to Keynes forecasting that we'd all be working 15 hour work weeks.
But if you can't understand how to add a node to a binary tree, then what does having a "computational mind" even mean? It sounds like feel-good happy-talk nonsense.
The point I'm trying to make is that for certain technical concepts (mathematics, programming, etc.), it's not the technical detail that must be remembered. Sure, you forget the exact syntactical details of the implementation, but you do remember (assuming you understood it at the start):
1 - How do I traverse the tree?
2 - Is there any ordering required among the nodes so that my traversal is 'correct'?
3 - How do I maintain such an ordering and establish correctness after my new node is added?
4 - Can I make any improvements so that locating the position of the to-be-added node doesn't take forever?
You don't begin to touch on these higher-level "computationally minded" ideas until you can understand the primitive action of adding a node. And once you've grasped it, you forget the primitive details; only the essential concept remains.
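For the record, the four questions above map directly onto a few lines of code. A minimal sketch of the primitive in question (binary search tree insertion, in Python rather than the C++ mentioned elsewhere in the thread):

```python
# Minimal binary search tree insertion, illustrating the four questions:
# traversal, the ordering invariant (left < node < right), maintaining it
# on insert, and where balancing (the "doesn't take forever" part) would go.

class Node:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def insert(root, key):
    """Insert key, preserving the left < node < right ordering (duplicates ignored)."""
    if root is None:
        return Node(key)  # found the correct position: attach here
    if key < root.key:
        root.left = insert(root.left, key)
    elif key > root.key:
        root.right = insert(root.right, key)
    return root

def inorder(root):
    """In-order traversal yields the keys sorted iff the ordering invariant holds."""
    if root is None:
        return []
    return inorder(root.left) + [root.key] + inorder(root.right)

root = None
for k in [5, 2, 8, 1, 3]:
    root = insert(root, k)
# inorder(root) == [1, 2, 3, 5, 8]
```

Question 4 is the one this sketch deliberately leaves open: without rebalancing (AVL, red-black, etc.), a sorted insertion order degenerates the tree into a linked list, which is exactly the kind of essential concept that survives after the syntax is forgotten.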
The author seems to suggest that the next generation doesn't need to make an attempt to understand the fundamentals. I argue that unless you're a genius, you need to first add a node to a binary tree before labeling yourself a computational thinker.
In the past "programmers" used to write raw assembly. Now the compilers do this, and programmers write source code for the compiler. In the future AI may write the source code, but we will still need to write higher-level specifications, which will likely be much more detailed than "develop a mail client app" or "develop an FPS".
Even today, business managers who have programmers to do all of the actual coding, need to write detailed specifications (and when they write bad specifications, they get bad products); those specifications are in a sense, "code". But even those specifications are not detailed enough: when refinement is fast and cheap (with AI doing the coding) you really want to be able to customize the UI, add various features, properly handle various edge cases, etc.
There's always going to be a programming layer, at least until AIs can self-improve, and at that point it's basically AGI. In other words, programmers will be needed until the "singularity" and then we have no idea what happens.
I tell you what will happen. Masses of people will worship it, just like people worship large companies and Tesla. The company will create a nice, lovable mascot for their AI, and while it sucks the marrow out of people, they will not care at all, because it has an overwhelming smile, and therefore it cannot be evil. :D
or
A committee is formed whose only purpose is to hit any AI that has become too big with an oversized wrench and kick it back into a Mad Max-style desert, so it can evolve in a different way.
Unless we get a good brain-computer interface you won't be communicating precisely and clearly to AI what program it needs to write. It would be like ordering a SmartPhone online and receiving PhoneSmart Teaching Telephone.
If we do go to more AI-contributed code fragments, we still need to test them for correctness. Given that we already see compelling-looking but bug-ridden solutions from ChatGPT as well as from humans, I still see the test as crucial. If an AI writes the test, I'll be skeptical. I'm skeptical of tests written by humans too.
Maybe the key is that testing and coding can be seen as adversarial actions, and there is benefit in separating them. If the AI (or code generator, or whatever you want to call it) writes both the code and the test, I'm even less likely to trust it.
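One concrete way to keep the test adversarial to the implementation is to check properties rather than examples: the test then encodes what correct output means, not how any particular implementation produced it. A toy sketch, where `generated_sort` is a hypothetical stand-in for AI-generated code under test:

```python
import random
from collections import Counter

def generated_sort(xs):
    # Stand-in for AI-generated code under test.
    # (Here it is correct by construction; a real generated version might not be.)
    return sorted(xs)

def check_sort(fn, trials=200):
    """Adversarial property check: the test knows the properties of a correct
    sort (ordered output, same multiset of elements), not the implementation,
    so it cannot share the implementation's bugs."""
    for _ in range(trials):
        xs = [random.randint(-50, 50) for _ in range(random.randint(0, 30))]
        out = fn(list(xs))
        assert all(a <= b for a, b in zip(out, out[1:])), "output not ordered"
        assert Counter(out) == Counter(xs), "output not a permutation of input"
    return True

check_sort(generated_sort)
```

The separation the commenter wants falls out naturally: whoever (or whatever) wrote `generated_sort` never sees the inputs the checker will draw.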
> Programming will be obsolete. I believe the conventional idea of "writing a program" is headed for extinction, and indeed, for all but very specialized applications, most software, as we know it, will be replaced by AI systems that are trained rather than programmed. In situations where one needs a "simple" program (after all, not everything should require a model of hundreds of billions of parameters running on a cluster of GPUs), those programs will, themselves, be generated by an AI rather than coded by hand.
What is impossible to do then is something like convex optimization models for guidance of rockets. You need to be able to mathematically prove that the output of the program will always converge to the solution. You have to assume that the inputs to the algorithm could get scrambled by ionizing radiation on one cycle and that the next cycle it'll recover completely. You want hard mathematical proofs. You don't want a black box that has been trained a whole bunch and might have some sharp hidden edge condition that you'll never know exists until just the right input hits it.
I can definitely see this being useful in some contexts i.e. where the problem can be easily defined. A good example is a bit of glue or plumbing code where one system with a defined interface has to talk to another one.
I also see it failing a lot. There were lots of promises made about 4GL languages that didn't turn out to be true. They made for great demonstrations but as soon as you push past the boundaries a bit things get tough and you need an actual programmer to make things work.
The guys that came and demonstrated PowerBuilder[0] back in the nineties had an absolutely stellar demo, but the devs who were given PowerBuilder to work on at my workplace quickly got mired in details that brought them undone. I feel like the same thing will happen with anything non-trivial that AI generates. The re-work will be more effort than just building whatever it was from scratch.
This is actually why this whole topic is overblown. The AI can manipulate text, ergo it can replace programmers, because they can only manipulate text.
It won't replace a janitor because it has no hands; but then again, if the model's only claim to replacing programmers is that it produces text, then the article's argument is pretty weak.
Fair, I was less thinking a physical dial and more a metaphorical one. It wouldn't be able to do the computation that is required to do the right thing, even if it can produce a program that can do the computation.
You could hook the program up to a python interpreter and approximate that, but then you're still generating code, not doing the task directly. In order to have an AI that will do the task directly, we would need to train a different model, not just plug the one we have directly in to the task.
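That approximation is easy to make concrete: the model emits source text and the host interpreter does the actual computing. A minimal sketch, where `generated_src` is a hand-written stand-in for what an LLM would return:

```python
# Sketch of the "hook it up to an interpreter" approximation.
# The model only produces text; the host's exec() turns that text into
# behavior, so the task is still done by generated code, not by the model.
generated_src = "def add(a, b):\n    return a + b\n"

namespace = {}
exec(generated_src, namespace)   # host compiles and runs the emitted source
result = namespace["add"](2, 3)  # result == 5
```

This is exactly the gap the parent comments describe: the model never performed the addition; it wrote a program that did.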
After using Copilot for about 8 months, I still find it useful. I still think it is worth $100/year.
I have experimented a little with coding and ChatGPT, and it seems impressive at first look.
I had a funny social interaction while on a hike this morning with three friends who are all semi-retired videographers. They seemed enthusiastic enough to hear about automation for software development but when I brought up my continued joy at automatic photo/video/added-music mixups in Apple Photos (‘memories’) they didn’t like that. One friend categorically stated that AIs will never be able to adequately edit video, etc. in postproduction. I think he is wrong but I let it slide. I travel a lot and have a ton of digital media assets and I so very much appreciate the automatically created mixups. I watch every mixup and flag about half to keep forever.
The world in which a LLM has destroyed the job market for software development is one in which most other knowledge work is in a similar state. The only safe knowledge workers will be those with a defensible regulatory moat (law, medicine, etc), and even then their jobs will be profoundly changed.
I am currently building tools for programming and modifying programs with natural language using the OpenAI API and their relevant models (similar to ChatGPT).
Especially if you combine the text/instruct model with the coding model and then give specific instructions, it is able to complete many simple coding tasks without me opening an editor.
Right now I am focused on something like Codepen but with English specifications only.
I believe that I should immediately start leveraging this type of tool in my other projects. Similar to the way you would use a calculator or Google Translate.
I believe that over the next few years the models will continue to get better and will also start to incorporate visual understanding and better reasoning. The point at which I would consider it poor software engineering to write programs manually is rapidly approaching for many domains.
That article portrays a very bleak future. I can’t imagine programming will be generally useful if it needs a massive AI model that can only run on machines that cost an enormous amount of money, or cost you some cents for every word it generates using an API.
I use copilot pretty much religiously at this point and would be shocked if what the author is claiming happens in my professional lifetime. It just seems so far away. But I wouldn't mind if it did happen, I am much better at my job because of AI.
In the short term (2-3 years), I think the hybrid way of programming (programmers give high-level designs and instructions, AI models implement them) will become increasingly popular. Today's LLMs have shown some surprising abilities in reasoning, remembering, and imitating, but for now they are still not good enough to write competent code alone, especially in complex fields like system design. System design requires a far more sophisticated thinking process than implementing specific functions.
Daniel Kahneman's Thinking, Fast and Slow describes some cognitive biases. These AI tools churn out text (ChatGPT) or code that looks convincing, so we think highly of them, but in reality there's nothing there. No, AI tools won't take away our jobs any time soon, because we are paid to think; even if the end result happens to be code, that's not the point. Especially in debugging, AI is hopeless.
>A service that instantly generates vaguely plausible sounding yet totally fabricated and baseless lectures in an instant with unflagging confidence in its own correctness on any topic, without concern, regard or even awareness of the level of expertise of its audience.
I remember what my boss told me a few days after I started working, just out of school.
"If you want a career you should move away from programming as soon as possible, those days are over, we're already testing automatic programming systems, it's a matter of months before we use them in production."
It was 23 years ago.
I think that the current state of AI will be sufficient to build excellent assistant tools, and I can see some productivity increase in some areas.
But for anything more advanced, let's say I am not worried.
I'd agree that "the current state of AI" will be sufficient for assistants, albeit powerful ones.
With that said, we've seen incredible increases in the ability of AI, even in the last 5 years. We're approaching human-level ability in NLP quite fast, and will surpass it soon. I (personally) don't believe that the past 23 years will be a good indicator for the next 23 in terms of AI. Some of the driving forces of the past 10-15 years: hardware (we started using GPUs), Transformers, and massive (and I mean massive) increases in training data, fueling a nonlinear progression.
Why do you say we will surpass human-level abilities in NLP soon? Humans that literally still wear diapers can process quite complex language, consume far fewer watts of power than AI to date, and do not require an Internet's worth of data to do it. Human "training" consists largely of hearing a tiny number of other humans engage in a very limited set of conversations and nursery rhymes/songs. I'm not sure how this displays anything like human-level ability.
These computational advances in AI are good. However, I am still waiting for AI to "synthesize" knowledge that is as yet undiscovered through a combination of theorem proving, reasoning, AI search techniques and so on.
It is not just AI "explainability" but also techniques for exploration and fact-finding. For example: could an AI system prove or disprove P = NP?
It is a little premature to dismiss human endeavor while such large gaps in our epistemic knowledge exist.
So far I'm not impressed. If you ask these chatbots for something simple, they usually give you Wikipedia/Stack Overflow-level answers with lots of edge conditions, because they've been trained on so much garbage. And if you ask for something difficult, they don't know how to produce it. I've tried to make ChatGPT do physics, and it keeps failing at basic dimensional analysis.
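The dimensional analysis in question is mechanical enough to sketch in a few lines, which is part of what makes the failure notable. A toy check (not a claim about how any model works internally), representing units as exponent maps over SI base units:

```python
# Toy dimensional analysis: a unit is a dict of base-unit exponents,
# e.g. velocity m/s -> {"m": 1, "s": -1}. Multiplying quantities adds exponents.

def mul(a, b):
    """Multiply two unit maps by summing exponents; drop cancelled units."""
    out = dict(a)
    for unit, exp in b.items():
        out[unit] = out.get(unit, 0) + exp
    return {unit: exp for unit, exp in out.items() if exp != 0}

KG = {"kg": 1}
VELOCITY = {"m": 1, "s": -1}          # metres per second
JOULE = {"kg": 1, "m": 2, "s": -2}    # kg·m²/s²

# Kinetic energy ½·m·v²: the ½ is dimensionless, so units are kg times (m/s)².
kinetic = mul(KG, mul(VELOCITY, VELOCITY))
# kinetic == JOULE
```

Checks of this kind (do both sides of an equation carry the same units?) are exactly the "basic" consistency the commenter reports chatbots getting wrong.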
Programming is one of the more complex and exacting mental activities, so if this is true, it is not only the end of programming.
AI progress is astounding and the results impressive, but they still seem 'fuzzy', almost like an instantiated dream generated by the input. Sure, as an assistant it is powerful, but it still seems to be missing something needed to provide a complete abstraction between people and code.
The moment he clarifies his position in the AI market, he becomes just another seller trying to push his product. It's like those articles from the '90s and '00s speculating about the end of the C language, with titles like "C is dead" or "C is going to be replaced in the near future." It will be 2060 and C will still dominate the market.
Laws are very open to interpretation by a judge. Basically any adjective, unless defined elsewhere in the law text, is open to be interpreted. Judges use thick books of example cases from the past. I think it will take a long time before their skills of interpretation of the law and considering past example cases can be replaced.
I see my lawyer friends going through hell, thanklessly working 80+ hours a week doing those things. I just hope it doesn't replace good lawyers, but instead actually allows them to have sane lives rather than burning out less than five years after graduation.
First are jobs doing art, writing, and music. I never would have thought that a few years ago, but now it seems feasible, and they stand out as fields where correctness isn't important and anyone can judge whether something feels right.
Writing code for what? Instead of writing code for e-commerce or whatever sites, why not just have AI tickle our pleasure centers directly and do just enough to keep us alive. Seems easier, and so much more efficient than writing code to make things and then ship those things out to us for a few hours of mediocre enjoyment.
Is AI doomed to always be a version behind the target framework/language? If a framework API is updated or a new hardware feature is launched, it will take a little time for the AI to catch up, right?
I imagine it’s not just about having API examples, you would want real codebases and use cases for training.
Doesn’t google or someone have an ML model that lets you tweak something like stable diffusion with a small amount of specific pictures to train it on something not in its data set?
Presumably the same type of thing could work for code.
If this prediction is true (I’m skeptical), it could be a boost for personal computing, assuming everyone could run (or rent) such a model, because it then would become easy to design your own software or user interface by merely explaining what you want to the AI.
People have been trying to do this for CRUD apps for years. Now they want to use AI to do the same thing. Somehow, I am not convinced developing an AI to do this is as simple as it sounds.
Programming as we know it. I believe the future will be to write or design program specifications and the AI will write the code. We will be more like program architects.
> Matt Welsh (mdw@mdw.la) is the CEO and co-founder of Fixie.ai, a recently founded startup developing AI capabilities to support software development teams. He was previously a professor of computer science at Harvard University, a director of engineering at Google, an engineering lead at Apple, and the SVP of Engineering at OctoML. He received his Ph.D. from UC Berkeley back in the days when AI was still not playing chess very well.