[flagged] Will humans write code in 2040? (arxiv.org)
42 points by dbennett 5 days ago | 48 comments





I mostly get downvoted for questioning the AI hype, but I've never tried it here on HN.

The paper is... how do I put this mildly... not in the realm of making sense.

They mention simple question answering (which still stumbles on more complex questions, even structured ones), then code generators that have existed since the 1980s (at least), and then pretend it's all the same thing.

Today's AI is about approximating human judgment from datasets. The knowledge representation part, which is essential for tasks like coding, has not advanced much in the past decade.

This statement, however, takes the cake:

> Some early results from Facebook this year suggest that machines are capable of developing their own more efficient methods of communications

It's about the hype created by technically illiterate journos. (My TechCrunch piece on that: https://techcrunch.com/2017/09/06/the-secret-language-of-cha...) Couldn't the authors have done some minimal due diligence before making claims like this?

On the subject itself: if anything, coding today requires a much higher level of abstract thinking. Of course, there are tools for coding Oompa Loompas too, like WordPress, but there are a lot of human decisions to be made there as well.


Yes. I wonder how much time, if any, the people predicting our coming doom and the death of every job have spent working with the bleeding edge of AI technology... it's not much better than AI has been for the past several decades.

Data sets have gotten significantly larger, which makes the predictive statistics based on them better.

The professors I've talked to/studied under who work on AI certainly do not share the same concerns.


I agree with you. That said, I can see how AI could be leveraged to generate implementations of highly precise specs. Code would probably evolve to be more about defining desired behavior, very formally, and AI could possibly build the implementation of it. I suspect this would happen at the function level first, or the unit level, like maybe a set of data and a few functions over that data. You could describe the expected behavior clearly, in a formal language, and then AI could write a highly optimized implementation of it.

Now, would this be just as tedious? Would it make coding any faster? Dunno. Maybe coding the spec would be just as hard, if not harder, and a bad spec would still result in bugs in the code.
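For illustration, here's a minimal sketch of what "describing the expected behavior in a formal language" could look like, using the hypothesis property-testing library. The dedupe_sorted function and its two properties are invented for the example; the idea is that a synthesizer would be asked to fill in the body so the properties pass:

    # The spec is the pair of properties; a synthesizer would be asked to
    # produce a body for dedupe_sorted that makes both of them pass.
    # (dedupe_sorted and its properties are hypothetical examples.)
    from hypothesis import given, strategies as st

    def dedupe_sorted(xs):
        # Placeholder implementation a human (or a synthesizer) would supply.
        out = []
        for x in sorted(xs):
            if not out or out[-1] != x:
                out.append(x)
        return out

    @given(st.lists(st.integers()))
    def test_sorted_and_unique(xs):
        assert dedupe_sorted(xs) == sorted(set(xs))   # behavioral spec

    @given(st.lists(st.integers()))
    def test_idempotent(xs):
        assert dedupe_sorted(dedupe_sorted(xs)) == dedupe_sorted(xs)

Whether writing such properties ends up any easier than writing the implementation is exactly the open question.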


> I can see how AI could be leveraged to generate implementation of highly precise specs

Interesting idea - maybe as a draft.

But keep in mind that people have been chasing this target for decades. SQL was supposed to be a language so close to English that business people could use it. Wizards and templates were supposed to eliminate the need for hand-coding. 10 years ago I was advised to learn UML or prepare for extinction.

All they've achieved is an explosion in the number of programmers.


Agreed. I didn't finish the whole paper, but from what I read it sounded like the author didn't really understand what code generation was actually doing.

I think it is not about AI but about improving our work with new tools. There are a zillion developers doing the same thing every day, making the same mistakes, and a lot of this work can be encapsulated at different abstraction levels.

I think it's subtly different in a very important way: there are a zillion developers doing almost the same thing every day, making almost the same mistakes.

The devil is in the almost. That's where the inability to trivially design a framework that covers all those cases comes in. It's the little subtle differences in business logic that turn into different flavors of tech debt and keep people in a job for a long time.

Tools and abstractions will improve, but here's the flip side: have you ever worked on a project where you consistently delivered 100% of what the stakeholders asked for, on time, with no negotiation or compromise needed? I haven't, so I think the more we can deliver, the more will be asked of us, for quite a while.


I agree, the level of abstractions will just change. We might end up writing something more like formal logic in a declarative language and the AI will then implement and optimise it.

Kind of the way a compiler generates optimised machine code from our higher level languages now.


Also don't forget - we still have to write certain routines in assembly even after so many years. The number of abstraction layers doesn't mean complete automation.

Going from high-level to low-level code will not go away. But for the most part you would probably try to get something working first and then optimize it. http://wiki.c2.com/?PrematureOptimization

>I agree, the level of abstractions will just change. We might end up writing something more like formal logic in a declarative language and the AI will then implement and optimise it.

We've tried. The problem with layers of abstraction is that they're leaky, and don't always work.

For example, simple SQL is easy. Optimizing SQL requires intimate knowledge of your db engine.
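To make that concrete, here's a minimal sketch using Python's built-in sqlite3 module (the table and column names are invented): the same query goes from a full table scan to an index search only once you know enough about the engine to add the right index, and a different engine would need different knowledge.

    # The same query, before and after an index, on SQLite's engine.
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
    con.executemany("INSERT INTO orders (customer_id, total) VALUES (?, ?)",
                    [(i % 100, i * 1.5) for i in range(10000)])

    query = "SELECT SUM(total) FROM orders WHERE customer_id = ?"

    # Without an index: the plan shows a full table scan.
    print(con.execute("EXPLAIN QUERY PLAN " + query, (42,)).fetchall())

    # Knowing the engine, add a covering index; the plan becomes an index search.
    con.execute("CREATE INDEX idx_orders_customer ON orders (customer_id, total)")
    print(con.execute("EXPLAIN QUERY PLAN " + query, (42,)).fetchall())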


Yeah, I imagine the formal logic being quite strict, maybe requiring formal verification. It won't stop leaky business cases, but it might give us something deterministic that is better than what we have - we don't need perfection, an order of magnitude (or 5) jump in productivity would do...

Doubtful; debugging would be so hard at that point. I think we've already reached peak abstraction, and more indirect forms of programming will be increasingly non-viable.

Echoing the ideas of PLDI 2016's keynote speaker:

Consider a problem like finding potential solutions to an equation. The way that AI today would solve this problem is to give you an algorithm that tells you "yes there's a solution", or "no there isn't," maybe with a somewhat-bogus quantification of confidence. An SMT solver would instead tell you "yes, here's a satisfying assignment", "no, there's no satisfying assignment", or "I don't know." In other words, existing PL techniques can guarantee that they'll never give you the wrong answer, while all of the guarantees in AI are only probabilistic.
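A minimal sketch of the solver side of that comparison, using the z3-solver Python bindings (the constraint itself is an arbitrary made-up example):

    # An SMT solver hands back a concrete satisfying assignment, a definite
    # "no solution", or "unknown" -- never a probabilistic guess.
    from z3 import Ints, Solver, sat, unsat

    x, y = Ints("x y")
    s = Solver()
    s.add(3 * x + 5 * y == 17, x > 0, y > 0)   # arbitrary example constraint

    result = s.check()
    if result == sat:
        print("satisfying assignment:", s.model())   # e.g. [x = 4, y = 1]
    elif result == unsat:
        print("no satisfying assignment exists")
    else:
        print("solver gave up: unknown")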

At the same time, though, we've been able to make--and deploy on production codebases--significant improvements in compilers. Production software exists to automatically tune BLAS or FFT routines to your particular machine. We can automatically find bugs and generate evil test cases with concolic execution and whitebox fuzz testing. We're nearing the point where a compiler could superoptimize an innermost loop and produce faster--and provably correct--code than a human writing assembly by hand (it's already possible for small loops). Compilers and related tooling are already amazing in their power, and we've done things that are in many ways as impressive as AI or more so, yet our achievements are largely ignored outside our own small community.


In each case that you have mentioned, the algorithms have been coded by humans. The machine is only following orders it has been given.

There have been some examples of machine processes that have come up with new information, but you will find that all the base work is human.


It's hard to say how fast progress will continue, but I do think by that time we will need substantially fewer people than today, maybe 10% of the workforce we have now.

23 years ago it would have taken a team many years to do what a single person can put out in a weekend. I don't think AI will be the single biggest driver though; mostly it will be APIs getting more sophisticated. We (those who work in tech) have been lucky that the more efficiently we do something, the more demand there is out there for our skills, and there was a ton of pent-up demand for tech.

I don't know if it's just the situation I'm in, but I already feel like demand for programmers is starting to drop. Many engineering jobs are just moving data around - a new type of database administrator.


> 23 years ago it would have taken a team many years to do what a single person can put out in a weekend

That's an interesting aspect. I think the advances happened mostly in the areas of UI and intra-system communication. If it's about coding business logic for, say, a supply chain or payroll system, the effort is roughly the same, despite the bajillions invested in all the purple farts like BizTalk and fancy new functional language paradigms.


>23 years ago it would have taken a team many years to do what a single person can put out in a weekend

Are you sure? How big was the original Unix team? I suspect it was smaller than Microsoft's Windows team is these days.

If anything, I think you have larger software teams now than ever before.


Yes. You had to pick Unix, because there is a lot that people now simply do without building from scratch.

You can build an ecommerce site or something like Pinterest in hours. gem install devise - and your login system is already up and running. You can get a 3D game up fairly quickly with Unity. Try building an interactive mapping application in 1994, seriously. Try implementing a Facebook like button in 1994 - you'd be like: crap, no AJAX, so when I want to update the like count the entire page has to reload. Pages back then did do that; it's unacceptable today.

Projects back then accomplished way less, and were much less refined than the products we expect today. I also think a lot of teams are too large; there is a huge amount of bureaucracy at some companies.


I think they will, and the paper is too naive in assuming that the techniques can scale and that computers can do the translation from a vague list of requirements to a precise specification. This comic[1] explains the issue well, I think. There is a lot of hot new research going on in program synthesis; most of what I see is about one of two things:

- Programming by example, where we give some example inputs and outputs and let the machine synthesize a program generalizing the input-output function (a toy sketch follows at the end of this comment). It works well for small, pure mathematical functions without many subtle edge cases, but I don't think we can make it handle the edge cases as easily, at least not without human intervention that amounts to programming the synthesizer.

- Synthesis from specification, where we give a tight spec of the function we want plus all the available building blocks, and then ask the computer to synthesize the program. Some researchers have produced very cool examples of this, e.g. taking specifications encoded with dependent types and generating conforming programs. The catch here is that if the spec is not precise enough, you may end up with a non-conforming function - and who is going to write such a precise spec? The programmer!

[1]: http://www.commitstrip.com/en/2016/08/25/a-very-comprehensiv...?
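As a toy illustration of the programming-by-example bullet above (real synthesizers are far smarter; the primitives and examples here are invented), brute-force enumeration over a few string primitives already shows the flavor: keep the first pipeline consistent with all the given examples and hope it generalizes.

    # Toy programming-by-example: enumerate compositions of a few string
    # primitives and accept the first program consistent with all examples.
    from itertools import product

    PRIMITIVES = {
        "strip":      str.strip,
        "lower":      str.lower,
        "upper":      str.upper,
        "first_word": lambda s: s.split()[0] if s.split() else "",
    }

    def synthesize(examples, max_len=3):
        """examples: list of (input, expected_output) pairs."""
        for length in range(1, max_len + 1):
            for combo in product(PRIMITIVES, repeat=length):
                def program(s, combo=combo):
                    for name in combo:
                        s = PRIMITIVES[name](s)
                    return s
                if all(program(inp) == out for inp, out in examples):
                    return combo, program
        return None, None

    combo, program = synthesize([("  Hello World ", "hello"), ("FOO bar", "foo")])
    print(combo)                    # ('lower', 'first_word') with these primitives
    print(program("  Some Text"))   # hopefully generalizes: prints 'some'

The edge-case problem shows up immediately: any behavior not pinned down by the examples is left to whatever program the search happens to find first.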


Turns out, machines already do write most or really all of the “code”: we use higher-level, more human-like languages to tell magic programs what code to generate. Those magic programs are called compilers and interpreters, and they already do a lot to protect humans from the extreme heterogeneity of hardware.

There has been research going back to the very early stages of ML to apply it to compiler heuristics. For example, applying decision trees to loop unrolling.
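A toy sketch of what that could look like (the features, data, and labels here are invented; real work extracts them from the compiler's IR): train a small decision tree on hand-tuned examples and ask it for an unroll factor for a new loop.

    # Toy heuristic: predict a loop-unroll factor from simple loop features.
    from sklearn.tree import DecisionTreeClassifier

    # features: [trip_count, body_instruction_count, has_branch (0/1)]
    X = [
        [1000,  4, 0],
        [1000, 40, 1],
        [   8,  4, 0],
        [  16, 12, 1],
        [5000,  6, 0],
        [ 200, 60, 1],
    ]
    y = [8, 1, 4, 2, 8, 1]   # unroll factor a human tuner chose for each loop

    model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

    # Ask the heuristic about an unseen loop: long trip count, tiny branch-free body.
    print(model.predict([[2000, 5, 0]]))   # likely a high unroll factor, e.g. [8]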

The idea seems a bit outlandish, but I could easily see many coding jobs vanish. We wrote a performance-oriented library (a regex matcher) and invested huge numbers of person-hours tweaking and tuning and building new, faster subsystems. It's not a giant leap of faith to imagine a 'Gold' version (correct but not fast) being written by a human and most of the tuning/tweaking/test generation/etc being automated.

I doubt that there is a simple AI approach to generate significant programs out of thin air, but I suspect a lot of the mechanical work of making programs fast/robust/small/whatever could be automated.
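A minimal sketch of that split, on a trivial stand-in problem (the candidates below are hand-written stand-ins for machine-generated variants): a candidate only survives if it agrees with the gold version on random tests and beats it on time.

    # Keep only candidates that (a) agree with the gold version on random
    # inputs and (b) run faster.
    import random
    import timeit

    def gold_count_vowels(s):                  # slow but obviously correct
        return sum(1 for ch in s if ch in "aeiou")

    def candidate_a(s):                        # a "generated" variant
        return sum(s.count(v) for v in "aeiou")

    def candidate_b(s):                        # a buggy "generated" variant
        return len(s) - len(s.strip("aeiou"))  # wrong: strip only trims the ends

    def agrees_with_gold(candidate, trials=1000):
        for _ in range(trials):
            s = "".join(random.choice("abcdefgz ") for _ in range(random.randint(0, 50)))
            if candidate(s) != gold_count_vowels(s):
                return False
        return True

    data = "the quick brown fox jumps over the lazy dog " * 100
    for cand in (candidate_a, candidate_b):
        if not agrees_with_gold(cand):
            print(cand.__name__, "rejected: disagrees with the gold version")
            continue
        t_gold = timeit.timeit(lambda: gold_count_vowels(data), number=200)
        t_cand = timeit.timeit(lambda: cand(data), number=200)
        print(cand.__name__, "kept" if t_cand < t_gold else "correct but not faster")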

Sounds like a fun startup, honestly.


Sounds a lot like a compiler.

Fundamentally, I see programming as the task of describing what a computer should do. The way we create these instructions has definitely changed over the past 20 years, especially with the large open source movement and stuff like pip and npm. We have higher levels of abstraction now; the LEGO blocks we assemble are larger and more reusable.

In my experience it feels like most of the low-hanging fruit, when it comes to abstractions, has already been picked. The databases, ORMs, web servers, UI frameworks, parsers/encoders, crypto stuff etc. are all pre-built LEGO pieces I use and connect with each other. And it feels like the connecting part is my main job.

I would predict that we code mostly in the same way in 2040 as we do now, with approximately the same level of abstraction, but with much better tooling. I want an AI that can continuously read my code as I write it and tell me when I've made a stupid mistake and turned the LEGO block the wrong way.

Also, isn't a main problem of professional software development understanding what another human wants me to instruct the computer to do? If my project leader / customer can't manage to explain to another human how the program should work, can we expect an AI to do better? Of course, the AI can respond more quickly on Slack, ask questions based on knowledge from previous projects, and have sane defaults based on experience, but still. I'm sceptical.


> The databases, ORMs, web servers, UI-frameworks, parsers/encoders, crypto stuff etc. are all pre-built LEGO pieces I use and connect with each other.

Keep in mind there is far more programming than Web programming in the world. It's important to keep that perspective when ruminating on programming at large, and based on the pieces you mentioned, it sounds like you're limiting yourself to that mindset. (That's OK.)


Yeah, of course. It's what I mostly do these days. As mentioned in other threads, some people still work at a much lower abstraction level, and even hand-optimise stuff that is really critical and can't be left to a stupid compiler.

But isn't the trend toward larger (/ "more useful") abstractions spreading even outside the sphere of web stuff? I'm thinking about things like game engines, ML frameworks, or all the standard libraries for mobile platforms.

I would really like to know what the life of a network card firmware developer is like, or of someone writing drivers for graphics cards, or software for highly reliable systems like cars. I guess their work is quite different from mine.


There are already many layers of hardware/software that we don't have to code in order to use, all the way down the OSI stack. However, the topmost layer will always require programming and architecture, because that is where the innovation that matters to humans resides.

Even machine learning, big data, artificial intelligence, language processing and the rest all take some initial design and goals, i.e. you have to train the machine learning and architect what it is used for.

AI/automation/generation are currently great at reproduction, but not at the inception/prototyping/creative aspects of development. There are areas that constantly move into automated layers, though, where machine-generated code is better. For instance, optimizations like WebAssembly are emerging to solidify application-layer standards/platforms that may not need to be written by humans much longer, just as assembly wasn't used as much directly by programmers once C/C++/Objective-C came along in that wave in the 80s. When layers are standardized and automated, programmers just move up the stack to the new blue ocean of development built on top of everything else.

People have been saying for a long time that visual coding will take over actual coding, and the same goes for automation/generation/AI. The automation still takes architecture, choreography, creativity and innovation to come up with something. Coding is itself automation once it is developed and live. Coding is part of the takeover, and there will always be a topmost layer that AI can't do for some time - and possibly never, on the creativity side, for things that matter to a human or a market need.

Programmers are also the first to really take advantage of AI. I am hoping for a future where small teams or individuals can build AI armies that code the way they want on a massive scale. Here's to coding for your bot army that will do your bidding in the future and keep a small team competitive through the help of automation, machine learning and artificial intelligence.


Programmers have been trying to make programmers redundant for as long as our industry has existed, but have made very little progress in doing so - none, if you don't count productivity improvements.

I have yet to see a codeless platform that can handle anything more complicated than a sick leave form.


Feels like one of those particular/general cases. In particular, few people routinely code in assembler any more. I imagine you can do degree-level courses in "computing" which don't even go there. I know you can do courses with no VLSI or microelectronics, both of which were required subjects on many courses at one point.

So yes, in particular terms, we don't do things which machines (programming language compilers, VHDL->VLSI compilers) can do better.

But in the "general" case, no: we still "do" these things. They're just specialized.

So in the particular case, will everyone write code in imperative languages? No. But in the general case, yes, people will still write programs.

"Siri, I've lost my wallet: ask Roomba to sweep the floor one time, exhaustively, and see if its stuck under a piece of furniture" is, at one remove "code"


One thing I can see happening is assisted UI design for websites. A lot of manpower today is spent going from PSDs or wireframes to HTML/CSS.

It's quite feasible to give it a sketch and, with some style transfer, have an AI output decent HTML and even some scaffolding for your JS framework of choice.


In the future humans won't have to write code because machines will do it all.

So all that will be left for humans is setting up configs, like webpack. This will take a tremendous amount of time and effort, so there will be a huge demand for configers. And thus configer bootcamps, and Medium articles on how there is so much sexism in the configing industry, and others bemoaning the prevalence of config-bros.

Sarcasm aside, at the end of the day someone has to tell a machine what is wanted. Unless machines get much smarter - and they aren't about to get that smart any time soon, as near as I can tell. Those someones are called programmers.


> ... that machines, instead of humans, will write most of their own code by 2040

Regardless of whether it actually happens, a future where humans can no longer have meaningfully detailed control over what machines do (to them), sounds extremely dystopian and certainly much sci-fi has been based on that vision, so I hope it doesn't happen...


This already happens. Compilers and interpreters hide most of the details.

Unless you're writing embedded code you usually don't have a say in a lot of what's going on. Things like memory allocation, scheduling, interrupts, addressing.

Things like Kubernetes take this even further.


That AI would ask the client / designer "what the hell do you mean by this?" until an implementable design emerges. The client / designer will decide it's much more efficient to throw a half-baked idea over the fence and blame a human programmer afterwards.

If you’re on a phone, here’s an HTML version of the article: https://www.arxiv-vanity.com/papers/1712.00676/

The more I think about this, the more the only counter-argument seems to be that natural language isn't precise enough. But all coding tasks start from a natural language description at some point. Hmmm.

I think humans will be writing code 40 years from now, but not as they do today: natural language processing will get far enough that you could tell an AI what you require and it will write the code for you. When a technology advances, the number of its users increases, but the percentage who understand its internals drops. Programming is no special snowflake to be protected from this; the same will happen to it. We are already getting small hints of this with specialized algorithms that optimize code or find bugs.

Citation needed. Decades ago, human-level AI was coming "real soon now" and COBOL was going to let managers write code without needing programmers.

Are you specifically asserting that the need to write code will be obviated by human-level or greater AI? This seems a safe prediction in the long run, although I cannot imagine how we could accurately establish a timeline.


Yes. Technological progress is almost always slower than people imagine.

Nice counterexample to Betteridge's law.

Yes.

It's called Prolog.

It's closer. In the end I think it's too fussy. What do you think of the Datalog-inspired languages?

In the end, the only remaining jobs will be those that require interacting with other humans.

What current jobs don't require interaction with other humans?

Well, better phrased: those that MOSTLY require interaction with other humans.

Those who fix the PEBCAKs


