The paper is... how do I put it mildly... not in the realm of making sense.
They mention simple question answering (which still stumbles on more complex, even if structured, questions), then code generators that have existed since the 1980s (at least), and then pretend it's all the same thing.
Today's AI is about approximating human judgment from datasets. The knowledge representation part, which is essential for tasks like coding, has not advanced much in the past decade.
This statement, however, takes the cake:
> Some early results from Facebook this year suggest that machines are capable of developing their own more efficient methods of communications
It's about the hype created by technically illiterate journos. (My TechCrunch piece on that: https://techcrunch.com/2017/09/06/the-secret-language-of-cha...) Could the authors have done some minimal due diligence before making this kind of claim?
Re the subject: if anything, coding today requires a much higher level of abstract thinking. Of course, there are tools for coding oompah loompahs too, like WordPress, but there are a lot of human decisions to be made there, too.
Data sets have gotten significantly larger, which makes predictive statistics based on them better.
The professors I've talked to/studied under who work on AI certainly do not share the same concerns.
Now, would this be just as tedious? Would it make coding any faster? Dunno. Maybe coding the spec would be almost as hard, and a bad spec would still result in bugs in the code.
Interesting idea - maybe as a draft.
But keep in mind that people have been chasing this target for decades. SQL was supposed to be a language so close to English that business people could use it. Wizards and templates were supposed to eliminate the need for hand-coding. 10 years ago I was advised to learn UML or prepare for extinction.
All they've achieved is an explosion in the number of programmers.
The devil is in the "almost". That's where the inability to trivially design a framework covering all those cases comes in: the subtle little differences in business logic that turn into different flavors of tech debt and keep people in a job for a long time.
Tools and abstractions will improve, but here's the flip side: have you ever worked on a project where you consistently delivered 100% of what the stakeholders asked for, on time, with no negotiation or compromise needed? I haven't, so I think the more we can deliver, the more that will be asked, for quite a while.
Kind of the way a compiler generates optimised machine code from our higher level languages now.
We've tried. The problem with layers of abstraction is that they're leaky, and don't always work.
For example, simple SQL is easy. Optimizing SQL requires intimate knowledge of your db engine.
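To make that concrete, here's a minimal sketch using SQLite (chosen only because it ships with Python; the table and query are invented for illustration). The same SQL goes from a full table scan to an index search once the engine has an index to use, and knowing when a given planner will or won't do that is exactly the engine-specific knowledge in question:

```python
import sqlite3

# Hypothetical schema and query, purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer_id INTEGER, total REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(i, i % 100, i * 1.5) for i in range(1000)])

query = "SELECT total FROM orders WHERE customer_id = 42"

# Without an index, the planner has no choice but a full table scan.
before = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()
print(before)  # the detail column mentions SCAN

conn.execute("CREATE INDEX idx_orders_customer ON orders(customer_id)")

# Same SQL, but now the planner uses an index search instead.
after = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()
print(after)  # the detail column mentions USING INDEX
```

Other engines (Postgres, MySQL, etc.) expose the same idea through their own EXPLAIN output, each with a different plan format and different tuning knobs.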
Consider a problem like finding potential solutions to an equation. The way that AI today would solve this problem is to give you an algorithm that tells you "yes there's a solution", or "no there isn't," maybe with a somewhat-bogus quantification of confidence. An SMT solver would instead tell you "yes, here's a satisfying assignment", "no, there's no satisfying assignment", or "I don't know." In other words, existing PL techniques can guarantee that they'll never give you the wrong answer, while all of the guarantees in AI are only probabilistic.
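That contrast can be sketched in a few lines. This toy solver is purely illustrative (a real SMT solver like Z3 uses far smarter search than brute force), but it shares the key property described above: it returns a concrete witness or a definitive "no", never a probability.

```python
from itertools import product

def solve(variables, constraint):
    """Exhaustively search boolean assignments; return a satisfying
    assignment if one exists, otherwise None. Unlike a learned model,
    the answer is exact: a returned assignment is guaranteed to satisfy
    the constraint, and None guarantees that no assignment does."""
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if constraint(assignment):
            return assignment
    return None

# (a or b) and (not a or c) and (not b)
sat = solve(["a", "b", "c"],
            lambda m: (m["a"] or m["b"]) and (not m["a"] or m["c"]) and not m["b"])
print(sat)  # a concrete witness: {'a': True, 'b': False, 'c': True}

# a and not a: provably unsatisfiable
unsat = solve(["a"], lambda m: m["a"] and not m["a"])
print(unsat)  # None
```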
At the same time, though, we've been able to make--and deploy on production codebases--significant improvements in compilers. Production software exists to automatically tune BLAS or FFT routines to your particular machine. We can automatically find bugs and generate evil testcases with concolic execution and whitebox fuzz testing. We're nearing the point where a compiler could superoptimize an innermost loop and develop faster--and provably correct--code than a human writing assembly by hand (it's already possible for small loops). Compilers and related tooling are already amazing tools in their power, and we've done stuff that's in many ways as or more impressive than AI, and yet our achievements are largely ignored outside our own small community.
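The superoptimization idea can also be sketched with a toy: exhaustively enumerate programs over a made-up three-instruction ISA and return the shortest one that matches the target function on a set of test inputs. (Real superoptimizers such as STOKE search far more cleverly and then prove equivalence; matching on sample inputs, as here, is not a proof.)

```python
from itertools import product

# A made-up "ISA" for illustration: each instruction transforms the accumulator x.
ISA = {
    "inc": lambda x: x + 1,
    "dbl": lambda x: x * 2,
    "neg": lambda x: -x,
}

def run(program, x):
    for op in program:
        x = ISA[op](x)
    return x

def superoptimize(target, inputs, max_len=4):
    """Return the shortest instruction sequence that equals `target`
    on every input in `inputs`, or None if none exists up to max_len.
    Trying lengths in increasing order guarantees minimality."""
    for length in range(max_len + 1):
        for program in product(ISA, repeat=length):
            if all(run(program, x) == target(x) for x in inputs):
                return program
    return None

# Target: f(x) = 2x + 2. The search finds the two-instruction form (x+1)*2.
best = superoptimize(lambda x: 2 * x + 2, inputs=range(-5, 6))
print(best)  # ('inc', 'dbl')
```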
There have been some examples of machine processes that have come up with new information, but you will find that all the base work is human.
23 years ago it would have taken a team many years to do what a single person can put out in a weekend. I don't think AI will be the single biggest driver, though; mostly APIs getting more sophisticated. We (those that work in tech) have been lucky that the more efficiently we do something, the more demand there is for our skills, and there was a ton of pent-up demand for tech.
I don't know if it's just the situation I'm in, but I already feel like demand for programmers is starting to drop. Many engineering jobs are just moving data around: a new type of database administrator.
That's an interesting aspect. I think the advances happened mostly in the areas of UI and intra-system communication. If it's about coding business logic for, say, supply chain or payroll system, the effort is roughly the same, despite bajillions invested in all the purple farts like BizTalk and new fancy functional language paradigms.
Are you sure? How big was the original Unix team? I suspect it's smaller than Microsoft's Windows team is these days.
If anything, I think you have larger software teams now than ever before.
You can build an ecommerce site or something like Pinterest in hours. gem install devise, and your login system is already up and running. You can get a 3D game up fairly quickly with Unity. Try building an interactive mapping application in 1994, seriously. Try implementing a Facebook like button in 1994: you'd be like, crap, no AJAX, so when I want to update the like count the entire page has to reload. Pages back then did do that; it's unacceptable today.
Projects back then accomplished way less, and were much less refined than the products we expect today. I also think a lot of teams are too large; there is a huge amount of bureaucracy at some companies.
- Programming by example, where we give some example inputs and outputs and let the machine synthesize a program generalizing the input-output function. It works well for small, pure mathematical functions without many subtle edge cases, but I don't think we can make it handle the edge cases as easily, at least not without human intervention that amounts to programming the synthesizer.
- Synthesis from specification, where we give a tight spec of the function we want plus all the available functions, then ask the computer to synthesize the program. Some researchers have shown very cool examples of this working, e.g. taking specifications encoded with dependent types and generating conforming programs. The catch is that if the spec is not precise enough, you may end up with a non-conforming function. And who is going to write such a precise spec? The programmer!
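Here's a minimal sketch of the programming-by-example approach (the grammar and the examples are invented for illustration): enumerate expressions over a tiny grammar and return the first one consistent with the given input-output pairs. Note how weakly a handful of examples constrains the result, which is exactly the edge-case problem mentioned above.

```python
from itertools import product

# Toy grammar over one integer input `x`: leaves are x or small
# constants; internal nodes combine two subexpressions with + or *.
LEAVES = ["x", "1", "2", "3"]
OPS = ["+", "*"]

def expressions(depth):
    """Yield all expression strings up to the given nesting depth."""
    if depth == 0:
        yield from LEAVES
        return
    yield from expressions(depth - 1)
    for op in OPS:
        for left, right in product(list(expressions(depth - 1)), repeat=2):
            yield f"({left} {op} {right})"

def synthesize(examples, max_depth=2):
    """Return the first expression consistent with all (input, output)
    examples, or None. This is naive enumerative search; real PBE
    systems prune aggressively with types and deduction."""
    for depth in range(max_depth + 1):
        for expr in expressions(depth):
            if all(eval(expr, {"x": i}) == o for i, o in examples):
                return expr
    return None

# Three examples consistent with f(x) = 2x + 1 (among other functions).
result = synthesize([(0, 1), (1, 3), (4, 9)])
print(result)
```

The synthesizer only promises consistency with the examples given; whether the returned expression generalizes the way the user intended is precisely what the examples cannot guarantee.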
I doubt that there is a simple AI approach to generate significant programs out of thin air, but I suspect a lot of the mechanical work of making programs fast/robust/small/whatever could be automated.
Sounds like a fun startup, honestly.
In my experience it feels like most of the low-hanging fruit, when it comes to abstractions, has already been picked. Databases, ORMs, web servers, UI frameworks, parsers/encoders, crypto stuff etc. are all pre-built LEGO pieces I use and connect with each other. And it feels like the connection part is my main job.
I would predict that we code mostly the same way in 2040 as we do now, at approximately the same level of abstraction, but with much better tooling. I want an AI that can continuously read my code as I write it and tell me when I've made a stupid mistake and turned the LEGO block the wrong way.
Also, isn't a main problem of professional software development understanding what another human wants me to instruct the computer to do? If my project leader / customer can't manage to explain to another human how the program should work, can we expect an AI to do better? Of course, the AI can respond quicker on Slack, ask questions based on knowledge from previous projects, and have sane defaults based on experience, but still. I'm sceptical.
Keep in mind there is far more programming than Web programming in the world. It's important to keep that perspective when ruminating on programming at large, and based on the pieces you mentioned, it sounds like you're limiting yourself to that mindset. (That's OK.)
But isn't the trend of larger (/ "more useful") abstractions spreading even outside the sphere of web stuff? I'm thinking of things like game engines, ML frameworks, or all the standard libraries for mobile platforms.
I would really like to know what the life of a network-card firmware developer is like, or of someone writing drivers for graphics cards, or software for highly reliable systems like cars. I guess their work is quite different from mine.
Even machine learning, big data, artificial intelligence, language processing and more all take some initial design and goals, i.e. you have to train the models and architect what they are used for.
AI/automation/generation are currently great at reproduction but not at the inception/prototyping/creative aspects of development. There are areas that constantly move into automated layers, though, where machine-generated code is better. For instance, standards like WebAssembly are emerging to solidify application-layer platforms that may not need to be written by humans much longer, just as assembly wasn't used directly by programmers as much once C/C++/Objective-C came along in that wave in the 80s. When layers are standardized and automated, programmers just move up the stack to the new blue ocean of development built on top of everything else.
People have been saying for a long time that visual coding will take over actual coding; the same goes for automation/generation/AI. The automation still takes architecture, choreography, creativity, and innovation to come up with something. Coding is automation once it is developed and live. Coding is part of the takeover, and there will always be a topmost layer that AI can't do for some time, and possibly never on the creativity side for things that matter to a human or market need.
Programmers are also the first to really take advantage of AI. I am hoping for a future where small teams or individuals can build AI armies that code like they want on a massive scale. Here's to coding for your bot army that will do your bidding in the future and make a small team competitive always through the help of automation, machine learning and artificial intelligence.
I have yet to see a codeless platform that can handle anything more complicated than a sick-leave form.
So yes, in particular terms, we don't do things which machines (programming language compilers, VHDL->VLSI compilers) can do better.
But in the general case, no: we still "do" these things. They're just specialized.
So in the particular case, will everyone write code in imperative languages? No. But in the general case, yes, people will still write programs.
"Siri, I've lost my wallet: ask Roomba to sweep the floor one time, exhaustively, and see if it's stuck under a piece of furniture" is, at one remove, "code".
It's quite feasible to give a sketch and, with some style transfer, have an AI output decent HTML and even some scaffolding for your JS framework of choice.
So all that will be left for humans is setting up configs like webpack. This will take a tremendous amount of time and effort, so there will be a huge demand for configers. And thus configer bootcamps, and Medium articles on how there is so much sexism in the configing industry, and others bemoaning the prevalence of config-bros.
Sarcasm aside, at the end of the day someone has to tell a machine what is wanted. Unless machines get much smarter and they aren't about to get that smart any time soon near as I can tell. Those someones are called programmers.
Regardless of whether it actually happens, a future where humans can no longer have meaningfully detailed control over what machines do (to them) sounds extremely dystopian. Plenty of sci-fi has been based on that vision, so I hope it doesn't happen...
Unless you're writing embedded code, you usually don't have a say in a lot of what's going on: memory allocation, scheduling, interrupts, addressing.
Things like Kubernetes take this even further.
Are you specifically asserting that the need to write code will be obviated by human-level or greater AI? This seems a safe prediction in the long run, although I cannot imagine how we could accurately establish a timeline.