
Programming won’t be automated, or it already has been - ingve
https://mortoray.com/2017/03/22/programming-wont-be-automated-or-it-already-has-been/
======
michaelbuckbee
Things don't have to be fully automated to mean big changes. The factory that
makes Ikea bookcases employs twice the number of people it did in the 1980s
but they make 37x as many bookcases [1].

We're looking at a similar situation in the programming, ops, and IT support
areas.

A friend of mine Tweeted this the other night: "I'm sleepless so I just
created a voice-based health tracker using Google Assistant in about 20
minutes that took me wks to write 3 yrs ago." [2]

I've heard numerous anecdotes of "A Zapier account and 20 minutes of fiddling
with things just replaced a SaaS app/consultant/etc. we were paying for."

This is what automation looks like: the top 10% in a profession doing 100x the
work previously done. To date the appetite for more and more software
development (eating the world) has sufficed, but there is no guarantee that
will always be the case.

1 -
[http://www.bbc.com/news/business-38747485](http://www.bbc.com/news/business-38747485)

2 -
[https://twitter.com/Raelshark/status/843776684407111680](https://twitter.com/Raelshark/status/843776684407111680)

~~~
jordwest
I don't understand the constant focus on the _upcoming_ automation crunch.
Efficiency improvements (through automation and otherwise) have been happening
over the past several decades - more output with fewer jobs - where the
majority of the benefits go to the owners of the clockwork.

Now we're in a situation where we produce most goods that society needs (and a
lot that we don't need) yet there are fewer jobs. This is absolutely going to
get worse, but we shouldn't be thinking of robots as any different to the
market forces that have been going on for years. Something has to give.

~~~
bsder
> I don't understand the constant focus on the upcoming automation crunch.
> Efficiency improvements (through automation and otherwise) have been
> happening over the past several decades - more output with fewer jobs - where
> the majority of the benefits go to the owners of the clockwork.

Before, automation only impacted the jobs of blue-collar workers, so nobody
cared. Now it impacts the jobs of white-collar workers, so everybody is
starting to whine.

~~~
vacri
To be fair, I think a modern day professional carpenter would much rather work
with modern power tools than what carpenters had to use at the dawn of the
Industrial Revolution. Modern-day miners much prefer using their heavy
machinery than swinging a pickaxe and pushing a trolley. I can't see many
modern-day blue-collar workers volunteering to go back to the old ways in
order to increase employment counts.

~~~
gremlinsinc
Pretty sure the carpenter will complain when anything made with wood can just
be put through a 3D "wood" printer and come out flawless, or when a device can
take a block of wood or some 4x4s and construct just about anything without
human intervention. It won't be long before mining operations are manned
entirely by drones, with not a single human involved. This isn't sci-fi, it's
near-future tech.

~~~
flukus
Mining is already there. Bringing a mine online is human-intensive, but once
that's done, most of the mining is automated.

~~~
candiodari
But when AIs start to program, we as programmers have a simple solution: we
simply bring them into a meeting with marketing and product management, and
let the resulting natural suicidal impulse take care of those suckers.

------
dang
David Parnas in 1985:

 _Throughout my career in computing I have heard people claim that the
solution to the software problem is automatic programming. All that one has to
do is write the specifications for the software, and the computer will find a
program [...]

The oldest paper known to me that discusses automatic programming was written
in the 1940s by Saul Gorn when he was working at the Aberdeen Proving Ground.
This paper, entitled “Is Automatic Programming Feasible?” was classified for a
while. It answered the question positively.

At that time, programs were fed into computers on paper tapes. The programmer
worked the punch directly and actually looked at the holes in the tape. I have
seen programmers “patch” programs by literally patching the paper tape.

The automatic programming system considered by Gorn in that paper was an
assembler in today’s terminology. All that one would have to do with his
automatic programming system would be to write a code such as CLA, and the
computer would automatically punch the proper holes in the tape. In this way,
the programmer’s task would be performed automatically by the computer.

In later years the phrase was used to refer to program generation from
languages such as IT, FORTRAN, and ALGOL. In each case, the programmer entered
a specification of what he wanted, and the computer produced the program in
the language of the machine. In short, automatic programming always has been a
euphemism for programming with a higher-level language than was then available
to the programmer. Research in automatic programming is simply research in the
implementation of higher-level programming languages._

[http://web.stanford.edu/class/cs99r/readings/parnas1.pdf](http://web.stanford.edu/class/cs99r/readings/parnas1.pdf)

~~~
return0
What happens when that 'higher level language' is everyday language? Everyone
already knows it.

~~~
mattkrause
We already have this: contracts are "programs" that specify how other people
should perform some action.

Nevertheless, entire industries revolve around _correctly_ writing contracts,
making sure they are executed in the intended way, and resolving ambiguities
and disputes about the original specification.

If it's ever possible to go directly from English to running code, programmers
will still be around; they'll just sound more like lawyers.

~~~
TheOtherHobbes
Contracts are not "programs", any more than programs are binding legal
agreements between users.

Lawyers are only paid to create clear specifications some of the time, and
that mostly in criminal law and political legislation.

In non-trivial corporate contract law they're paid to find favourable
ambiguities in existing specifications, and to build possible ambiguities and
hidden implications into negotiated agreements.

They're also paid for their ability to use rhetorical, verbal, and theatrical
tricks to persuade counterparties, judges, and juries of their point of view.

Outside of STEM, the relative predictability of code is a very poor and
superficial model for the relationships, transactions, and goals that define
most domains.

~~~
mattkrause
You're making my point for me.

Natural language is rife with ambiguities, contextual information, and
implicit assumptions. It is a terrible method for precisely conveying
instructions or expectations, even to equally intelligent peers. Making this
work _at all_ requires a whole specialised profession with its own elaborate
procedures and complicated argot.

If you wandered into an IBM office and said, "I need a document storage
system. Here's a blank check; make it happen!", do you think you would be
happy with the outcome? Probably not. What changes when you tell this to a
chatbot instead of a guy in a suit? Instead, you'd work with someone
lawyer/programmer who knows how to spec out the solution in enough detail that
the vendor/optimizer can't go "well, you said _STORE_ the documents, but you
never said anything about RETRIEVING them later!"

------
_audakel
>To completely remove programmers from the equation would require essentially
a human level artificial intelligence. And if I start seeing near sentient
robots walking around, my first thought is certainly not going to be, “oh no,
it’s going to take my job!”

Even with all DeepMind can do, we are still very far from anything remotely
like human intelligence. On the extreme low end, the tiny worm C. elegans has
only 302 neurons in its nervous system. The full circuit has been completely
mapped for over a decade, and still no one knows how it works. Bits and pieces
are understood, but that is all.

The fruit fly has a meager 100,000 neurons in its nervous system.

~~~
cr0sh
While everything you say is true (i.e., I agree), does it really matter?

No - we don't know how the C. elegans 302-neuron connectome works - but if you
slap it into a robot or simulation, it tends to act in a manner similar to the
actual biological creature (at least, that's my understanding).

We've seen similar results with biological neural networks (cell cultures and
such) hooked up to machines as well.

If the connectome of a fruit fly were somehow mapped, and then simulated on a
machine - it is very likely that it would act like a fruit fly.

Taken to the utmost extreme, the same could possibly be said for the
connectome of a human being, could it not?

Does it matter in that case, then, whether we understand how it works - versus
the fact that it is working?

~~~
notahacker
I don't think it's remotely reasonable to extrapolate here. Making a
simulation of 302 neurons wiggle a simulation of a simple muscular structure,
without full understanding of the wiggle process, tells us little about making
a simulation of 90 billion neurons produce outputs that resemble human
cognition. That would require incredible depth of knowledge of a connectome
structure that shows more variation within an individual over the course of a
day than C. elegans does within the entire species, and of how that structure
relates to long-term memory storage and a mind-bogglingly complex array of
sensory inputs and outputs.

Why do you?

It seems akin to suggesting that if I can teach my dog to respond to "sit"
despite it lacking human emotion or innate grammar, it seems only reasonable
to believe my dog can also learn to respond appropriately to the complete
works of Shakespeare. Come to think of it, I'd place more faith in our ability
to train our pets to write software than our ability to create a human brain
emulation so accurate it's like adding another developer to the team.

------
derefr
I've always understood "the automation of programming" to refer to essentially
what happens in this scene of Star Trek TNG:
[https://www.youtube.com/watch?v=lPLiaYD1lUo](https://www.youtube.com/watch?v=lPLiaYD1lUo)

In other words: constraint-based design, in an interactive context-sensitive
dialogue, with the machine-agent able to resolve change requests by—among
other strategies—discovering and consuming new third-party components it was
previously unaware of. Such a tool essentially presents the same interface
that a contract programmer presents to their client: tell me what you want,
I'll build it; tell me how it's wrong, I'll fix it.

The article brushes the concept of such a level of automation aside as
requiring near-human intelligence. But, unlike a human programmer, said tool
wouldn't _need_ to do the hard part—requirements-gathering—for you, or have any
sense of what "sensible" requirements would be. Where human software
architects _guide_ their clients into specifying requirements, this sort of
tool would be much like the compilers of today: it would take you literally
and give you what you asked for, rather than what you wanted, until you ask
for exactly the right thing. It would just do it _very fast_ , such that this
could be the basis of an effective feedback loop.

~~~
cortesoft
I think that scene shows the ridiculousness of the concept; there is NO WAY
the computer could have gotten that close to the actual table they were
talking about based on those descriptions. It made all sorts of assumptions
about the table that just happened to be right in order to move the plot
along.

Just imagine trying to describe how to draw that table to a human artist. It
would take forever, with a ridiculous amount of iteration.

If you want to program like this, you would have to get VERY good at
describing the thing in an unambiguous way, and the skill to do that is going
to become the new 'programming' skill.

~~~
derefr
> Just imagine trying to describe how to draw that table to a human artist. It
> would take forever, with a ridiculous amount of iteration.

Right, that's more what I was picturing: in reality, interacting with this
kind of software would be like interacting with a sketch artist. It would be a
very exhaustive (and exhausting) process.

(Though, it'd essentially be the same effect you get by outsourcing piece-work
to a service like Mechanical Turk—just with instantaneous response-time.)

> If you want to program like this, you would have to get VERY good at
> describing the thing in an unambiguous way

Today, the intelligence/intuition/common sense of the programmer "allows" the
client to avoid ever having to develop the skill of requirements analysis for
themselves. Thus, even though it doesn't take any special aptitude to learn
this skill, it's currently rare.

On the other hand, the naivety and short iteration time of an interactive
constraint programming system would, I think, make it likely that anyone who
used it would quickly/easily develop the ability to do requirements analysis.
Perhaps not without being "led by the hand" in a person-machine conversation,
but certainly with it. (Much like how people who have only ever used CAD
software can't necessarily do mechanical drawing by hand.)

------
thewhitetulip
I have been hearing this for a long time now: "automation will take away a lot
of jobs." I don't understand the people who make such statements. Do they
realize how much manual work is done in projects at even the biggest
companies? Do they realize that the internal tool which caused the AWS
downtime last month was not built in one go? It was built up over time, which
is why it is hardly perfect.

If we move to a more automated environment with high-quality software like the
author mentions, it'll just enable us to deliver projects faster and bring
better software into the world.

Plus, who is going to build these automation tools? Surely every company is
going to have different requirements. Considering that in the last 30 years we
haven't yet figured out how to build off-the-shelf software, where one generic
product can be customized across domains, it remains to be seen how the
"automation scare" plays out and who'll sell robots that do everything humans
do.

Plus, I don't know why this doesn't get discussed: the day we have a general
AI which can think for itself, why would it work for us?

------
EGreg
I would like more people to talk about automation as REDUCING the need for
workers, rather than as an all-or-nothing REPLACING of workers.

Then it won't be a straw man of "it will never replace what I do" or "well,
people will find other jobs." Some will, some won't. It's not all or nothing,
but the AVERAGE DEMAND FOR HUMAN LABOR will go down, which means wages will
progressively become worse as a means of delivering money to the masses. You
already see it now, but few people state it in such clear terms.

We need to stop thinking in terms of "well, why doesn't everyone just buy a
ticket out of the ghetto" and realize that some will, and many won't. For
those many, we will need single payer healthcare, single payer food, rent,
movies, etc. You can call it basic income. Whatever you like. But it's
inevitable.

~~~
mdpopescu
> wages will progressively become worse

Given that history - especially recent history - has shown the opposite trend,
I wouldn't worry too much about it.

~~~
EGreg
Link backing up your claim?

Ummm
[https://www.dougsguides.com/content/whats](https://www.dougsguides.com/content/whats)

Have you even read Piketty?

------
heynowletsgo
Finally, some damn sense on this topic. The whole "programmers will be
automated away" line is just a scare tactic to slow down the automation of
what can be automated: people scared and afraid, attacking what they perceive
as the enemy. Lots of things will be automated; automating will not be one of
them.

~~~
bleachedsleet
I've found that making broad statements about what will or won't happen in a
hypothetical future often leads one to embarrassment.

Example: "mankind will never set foot on the moon...how absurd!"

~~~
HillaryBriss
ok, yes, but cut me some slack. when i said that, i was stuck on the side of
the road next to my model T with three flats and i was reflecting on the poor
quality of leather they'd used in the tires.

------
glangdale
The idea that "automating away programmers" == "building some ridiculous tool
that allows the pointy-haired boss to program with 'plain English'" is a
strawman.

Programmers don't have to be fully automated away for automation to happen.
Suppose I have a "Really Good Compiler" for a "Pretty Nice Language" (let's
not start a language war by discussing which language is closest to this, or
which exact features are in it). This "Really Good Compiler" has a great
optimizer, the "Pretty Nice Language" has a garbage collector that's really
good for the task, etc. The PNL standard library has a huge collection of
well-tuned data structures and a generics system that actually works etc. The
PNL toolchain also has a ton of really effective static and dynamic analysis
tools to find lots of bugs that its type system didn't.

Meanwhile the folks down the hall in a hypothetical competing startup are
programming in C using pcc, like they just teleported from the 80s.

How many staff do they need to do the same job, even assuming that they are
people just as smart as we are (despite their strange choice of tools)?

We have managed to be largely blind to this because the amount of work that's
available to programmers has expanded hugely to compensate for increased
productivity - plus our tools aren't quite as nice as the hypothetical I
described.

On a related note: I found a great code sequence for something I'd been
thinking about for a while, by specifying what the sequence was meant to do,
specifying the semantics of a few SIMD instructions, and turning Z3 loose.
Not quite ready to fire myself, though.
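
The Z3 workflow described above can be sketched in miniature without the solver: write the spec as an executable predicate, then search the space of candidate instruction sequences for one that satisfies it on every input. Z3 does this search symbolically and scales far beyond brute force; the spec and candidate space below are invented for illustration.

```python
# Toy "find a code sequence from a spec": search for (mask, shift) such
# that (x & mask) >> shift extracts the high nibble of an 8-bit value,
# checking the spec exhaustively over all 256 inputs.

def spec(x):
    return x >> 4  # what the synthesized sequence must compute

def synthesize():
    for mask in range(256):
        for shift in range(8):
            if all(((x & mask) >> shift) == spec(x) for x in range(256)):
                return mask, shift
    return None

mask, shift = synthesize()
print(hex(mask), shift)  # 0xf0 4
```

The exhaustive check stands in for Z3's quantified constraint; the point is only that "spec in, code sequence out" is mechanical once the spec is executable.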

~~~
mseebach
Quoting PG used to be all the rage around here, but it seems to have gone
somewhat out of fashion. Anyway, he basically credits the power of Lisp with
the success of Viaweb, in exactly the way you suggest:

 _If other companies didn't want to use Lisp, so much the better. It might
give us a technological edge, and we needed all the help we could get. When we
started Viaweb, we had no experience in business. We didn't know anything
about marketing, or hiring people, or raising money, or getting customers.
Neither of us had ever even had what you would call a real job. The only thing
we were good at was writing software. We hoped that would save us. Any
advantage we could get in the software department, we would take._

[http://www.paulgraham.com/avg.html](http://www.paulgraham.com/avg.html)

------
endergen
Programming doesn't need automation to reduce a company's need for
programmers. Cloud computing, better open source projects, and basically
everything we get excited about reduces how many programmers you need to build
a given set of product features compared with even 5 years ago; go 10 years
back and the gap is bigger.

~~~
openasocket
Yes and no. Yes, with those things the productivity per programmer is much
higher. But the trend seems to be that increased productivity per programmer
simply increases the work demanded to create products. The easier it is to add
features to a project, the more features customers (and thus managers) will
demand.

This whole "everything will be automated and we'll all be out of work"
argument is, I think, similar to the mistake made by Malthus, who famously
predicted the world would become overpopulated and unable to feed itself by
the 20th century. There are several mistakes in his work, but one of the most
prominent is not accounting for the fact that agricultural technology, and
thus yields, also increase exponentially.

~~~
cortesoft
[https://en.wikipedia.org/wiki/Jevons_paradox](https://en.wikipedia.org/wiki/Jevons_paradox)

------
toss1941
I'm with the "or it already has been" camp. Automation is always happening in
programming. Fully automatic programming has no meaning, since you have to
define parameters for some set of algorithms to act upon. And once you define
parameters, you are already using automation: today a single line can
effectively say "open file x for appending, and create it if it doesn't
exist", and thousands or tens of thousands of lines of code will execute
behind the scenes to make it happen.
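
For instance, in Python the append-and-create-if-missing behavior is a single line of user code, with the runtime, C library, and OS syscall layers doing the real work underneath:

```python
# One line of user code; open(2), buffering, and filesystem logic are all
# handled by the layers below.
with open("x.log", "a") as f:  # "a" appends, creating the file if absent
    f.write("appended line\n")
```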

~~~
mdpopescu
This reminds me of the time I saw the C code required to display a window in
Windows, with all the things we've come to expect from one (close "x", system
menu and so on). At the time (Windows 95, I think) it was around 80 lines of
code.

I was surprised because in Delphi 2 that was a File / New Form followed by a
Show or something like that... I had never even _thought_ about the API calls
required to do all that.

------
marmshallow
Surprised nobody has mentioned the halting problem [1] yet, which logically
proves programming can never be automated.

[1] [https://i.imgur.com/HVeO7ek.png](https://i.imgur.com/HVeO7ek.png) (CH 8,
Algorithms - Sanjoy Dasgupta, Christos H. Papadimitriou, and Umesh V.
Vazirani)

~~~
josephg
The halting problem doesn't prove that at all. It just shows that _there
exist_ programs whose output can't be predicted except by running them
(potentially forever). It obviously doesn't prove that all programs have that
property. I mean, we write programs which produce useful output and then halt
every day.

And if humans can write code, a sufficiently advanced AI will be able to write
code too.
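
For reference, the impossibility result being invoked is just a diagonalization argument, and it can be made concrete in a few lines: any candidate halts-predictor can be handed a program built to do the opposite of whatever it predicts, so no predictor is right about every program. It refutes "one algorithm decides all programs", not "programs can be written automatically".

```python
def contrarian(halts):
    """Build a program that does the opposite of what `halts` predicts."""
    def prog():
        if halts(prog):
            while True:   # predicted to halt, so loop forever
                pass
        return "halted"   # predicted to loop, so halt immediately
    return prog

# A predictor claiming every program loops is refuted by construction:
pessimist = lambda prog: False
assert contrarian(pessimist)() == "halted"  # it halted; the prediction was wrong
```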

------
Animats
We used to have more automated web design than we have now. Remember
Dreamweaver and Front Page?

------
danso
From my perspective, programming is the work of automating the mechanical
parts of human work and experience, whether something as simple as FizzBuzz or
a web scraper, or something as complex as DeepFace. Novice, and even
experienced, programmers seem to forget that the appeal of a system like
DeepFace or ReCAPTCHA is the ability to apply it at a massive _automated_
scale, not the bespoke ability itself to detect or classify human
faces/behavior. I think by that definition, programming would seem to be among
the final group of human activities to be automated.

------
partycoder
A job is a role responsible for conducting productive activities. Today, some
of those activities can be automated; some others can't.

The tasks that cannot be automated usually require human level performance.

As AI advances, when it achieves human-level performance at a task, that task
can then be automated. This means that fewer people are required to meet the
same productivity goals.

Some people argue that automation lets humans focus on higher-level tasks,
using bank tellers as an example: tellers remained relevant even after the
deployment of automatic teller machines. I think this is just temporary.

When every activity expected of a job is automated, the job can be fully
automated and the human becomes redundant. In this category you have: elevator
operators, telephone switchboard operators, human calculators, etc. Soon:
truck drivers, dog walkers, etc.

Automating programming is going to be a thing. Just enumerate the activities a
programmer performs and see how many of them can be automated.

My hypothesis is that our jobs will first become negotiating requirements with
a computer, and then the computers will fully take over.

Once that's done once, it becomes a matter of serializing the trained state
and deploying the same system many times. Then it's over for humans.

A professional programmer takes 20+ years to train, can only work 8 hours a
day, gets distracted, and has lots of benefits, compensation, and rights,
including the right to leave you at any time. In contrast, an AI will do
whatever you say with no objections, and if your project is late you can spin
up 30 more AI programmers with exactly the same training.

~~~
visarga
> Just enumerate the activities a programmer performs and see how many of them
> can be automated.

I decide the instructions and the robot pushes the buttons for me?

~~~
partycoder
More like requirement analysis, design, implementation, maintenance, testing,
deployment...

Those can also be broken into activities, and some of them can be automated.

------
mackan_swe
Do you all mean to say that nowhere in the near future will it be possible for
a business person to speak out loud to their computer, asking it to show on
screen the latest sales figures for the new line of products they released
yesterday?

"Give me the latest sales figures for the new XLine line."

We couldn't parse that into

"exec sales_report '2017-03-23', 'xline-123'"

thus rendering the data analyst job you used to have obsolete?

EDIT: Google will easily solve the problem of speech-to-code, or someone else
will if Google's not interested, once they "crack" NLP - and they will. But
what does it mean to crack NLP? Well, they already have the means to build the
perfect model. I love the word2vec idea and I think there can still be
innovation standing on the shoulders of that discovery. What would a perfect
model mean? With a perfect model you would be able to spot concepts with
unflawed precision and translate those concepts, with unflawed precision, into
whatever language you have. It's perfectly doable.
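
A minimal sketch of the speech-to-code step, assuming the speech is already text: match the utterance to a known intent, extract its slots, and emit the corresponding call. A real system would swap the keyword test for embedding-based similarity (the word2vec-style models mentioned above); the intent name and regex here are invented for illustration.

```python
import datetime
import re

def parse_request(utterance):
    """Map a natural-language request to a (report, date, product) call."""
    if "sales" in utterance.lower():
        m = re.search(r"for the (?:new )?(\w+) line", utterance, re.I)
        product = m.group(1).lower() if m else None
        return ("sales_report", datetime.date.today().isoformat(), product)
    return None

cmd = parse_request("Give me the latest sales figures for the new XLine line.")
# -> ("sales_report", <today's date>, "xline")
```

The hard part, as the replies note, is everything this sketch hand-waves: utterances that don't fit a known template, and requests whose meaning depends on context.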

~~~
goatlover
How is that much of an advance over what we currently do? If Excel isn't
rendering data science jobs obsolete, then being able to tell the computer you
need some sales figures isn't going to either.

To get rid of data science jobs, you would need the computer to be able to
translate from business speak to generating code for all the things data
scientists actually do, which probably involves something more than Excel. For
starters, that would be cleaning and wrangling data into a format that a tool
like Excel could be meaningfully used on.

------
craigvn
Having been programming for a _long time_ , I've seen automation talked about
for a long time, and it never really happened. We have had CASE tools, RAD
tools and so on, but users' demand for new features and capabilities increases
at the same rate as developer tool productivity, meaning the developers are
still required.

------
solomatov
I think deep learning is only one part of universal AI. What these systems are
good at is visual processing; we don't have really good models (comparable to
what a human can do) for language, logical reasoning, etc.

Concerning programming automation, consider modern IDEs: they automate so much
that, for me, it's very hard to program without them. All this automation is
quite intelligent, but not human-level.

If we consider the latest experimental programming languages, for example
Agda, its environment has a limited ability to complete code based on its
type. If we can make this functionality more efficient, it's theoretically
possible to find small programs which satisfy a specification, though writing
a correct specification isn't easy.

------
_pmf_
The focus of software engineering research should be on requirements
elicitation, analysis and transformation. There has basically been no progress
at all.

~~~
UweSchmidt
Exactly.

But this is not fun, and being the next-level requirements elicitor is not a
prestigious career goal. Authoring a new JavaScript framework is.

~~~
_pmf_
It's completely incomprehensible to me why research pussyfoots around the
large elephant in the room. Commercial requirements tools that are amenable to
partial transformation into inputs for data-driven code generation do exist
(DOORS, ReqIF), but there's a distinct lack of FOSS communities around them,
which is something I cannot understand.

The rise of the agile methodology has IMO worsened the problem by basically
making throwing in the towel the default approach. (Throw in some "but, but,
you're doing agile wrong!" here.)

~~~
UweSchmidt
People don't want to do requirement engineering for their FOSS project, of
course - that's the boring part.

To actually provide value the requirement world should offer a higher
abstraction level - say, map requirements on "business objects" that are
already implemented and configurable, and build software out of those.
Workflow descriptions, relations between entities should result in actual
code.

Otherwise, you know why doing agile sounds attractive to many developers.

------
brilliantcode
I feel like, at the rate we are going, we will eventually get back to the
Visual Basic days, where a monolithic IDE lowers the barrier for front-end
development so much that anyone can tinker with it and produce stellar
applications with little to no programming knowledge.

The only difference from Visual Basic days is that now we have ML and AI to
further simplify and do the thinking for us.

Designers will be hit the hardest. Followed by developers.

~~~
petra
We are getting close to that future (at least for business apps) with the
low-code trend in the enterprise.

~~~
brilliantcode
any examples?

------
threatofrain
The ultimate programming interface is a declarative one, where an authority
tells an AI to build a program for them.

One might say that abstractly, from the business perspective, the programmer
is an expensive AI licensed for use by government. The executor or authority
declares what they want, and the machine attempts to build a program with the
specifications it was given.

------
bcheung
I assume they mean in the general artificial intelligence sense?

Most people don't really think of it in these terms, but aren't functions and
macros just automation for programming? You define something once and it is
handled from then on. You've "taught" the machine how to handle it.
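
In that sense, every function definition is a small act of automation: describe the procedure once and the machine repeats it on demand. A trivial sketch:

```python
# Define the procedure once...
def total(amounts):
    return sum(amounts)

# ...and the machine "knows" it from then on, for any input.
assert total([2, 3]) == 5
assert total(range(100)) == 4950
```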

------
markbnj
>> Moreso, if the goal is to produce reusable applications, we need this to be
even more abstract: “Create a plugin that downloads bank statements, compares
to the expense statements, and produces a standard report”.

But if the computer could do that, who would need a plugin?

------
victor9000
If programming will be automated then who writes and maintains the automation
software?

------
samblr
How many of you do NOT believe that programming will be brain-interfaced in
the distant future (25 yrs), where "program-thinking" brains can result in new
"products"?

edit: 25 yrs

------
BrailleHunting
Sorry, wrong. Deep learning can and will take over everything, given enough
computing power and profit motive.

[http://www.ted.com/talks/jeremy_howard_the_wonderful_and_ter...](http://www.ted.com/talks/jeremy_howard_the_wonderful_and_terrifying_implications_of_computers_that_can_learn)

Eventually (30-150 years), there will be AI pottery artists selling kitsch,
easily-offended toasters and corporations without wetware upper management.

~~~
goatlover
Marvin Minsky has pointed out that modern AI methods lack common sense, which
is one of the requirements for general purpose AI. Rodney Brooks agreed with
him.

------
LeanderK
Aren't compilers and runtimes automating programming (well...parts of it)?

------
mpg33
Programming seems like a thing that would be ripe for automation...

------
agreedasdfs4
These discussions usually conflate "programming, a viable career option" with
"programming, an intellectual human endeavor".

The viable career option will, IMO, unquestionably vanish for the majority.

Manufacturing and working at McDonald's were once seen as viable career paths.
They've been streamlined and automated such that they require only a fraction
of the human labor they did a generation ago. Google and the like will make
this happen for computing as well.

We'll only need so many Evernote-like apps. I'd wager we'll have reduced GUI
interaction in general, such that a lot of JS/HTML/CSS and similar coding work
will be gone.

We're abstracting things into higher level languages. TODAY I can, with my
voice alone, tell my phone to perform a truly fascinating amount of work
compared to a decade ago.

When programming as a career is reduced to filling out text files in something
like YAML and having ML go to work, only the bleeding edge, the mathiest of
the mathheads, will have skills to command a living wage.

------
ilaksh
I agree with most of that, but I think based on current trends we should
anticipate that human-like ("human level") artificial general intelligence
will be created within less than 10 years. And not too long after that
happens, we must assume that there will be AIs that are tuned or turbo-charged
so that they can perform better than humans in specific areas like
programming. And not too long after that, there will be competition between
multiple such AIs being sold to replace humans. And then prices will be
reduced and you will be able to buy 10 of these 50X AI programmers for the
cost of one human programmer. Then no one will have human programmer employees
except for the novelty of it.

~~~
paol
> I think based on current trends we should anticipate that human-like ("human
> level") artificial general intelligence will be created within less than 10
> years

Why? People have been saying this since the 1960s, so much so it's become a
running joke. And some of them were very smart people too, like Marvin Minsky.

What is it about AI that creates this illusion of progress (or at least, wild
overestimation of progress)?

~~~
ilaksh
"Wild overestimation of progress" has been part of the story for many
technologies.

Minsky was very optimistic, and then he got burned by the AI winter, and
became very pessimistic about AI.

There is quite a lot of very serious investment aimed at this type of "real"
AI now: for example DeepMind, DARPA L2M, and the recent $100 billion Vision
Fund, which SoftBank's founder said was intended to bring about the
singularity.

I believe it because I can basically see how to handle most of the challenges
using embodiment or virtual embodiment and interactive learning/L2M etc., by
combining or taking inspiration from various techniques that have been
published. Stuff like this:

Overcoming catastrophic forgetting in neural networks (adding plasticity to NN
models)
[http://www.pnas.org/content/early/2017/03/13/1611835114.full](http://www.pnas.org/content/early/2017/03/13/1611835114.full)

"Decoupled Neural Interfaces using Synthetic Gradients"
[https://arxiv.org/pdf/1608.05343.pdf](https://arxiv.org/pdf/1608.05343.pdf),
somewhat similar to autoencoder/decoder pair use in

"Feynman Machine: The Universal Dynamical Systems Computer"
[https://arxiv.org/pdf/1609.03971.pdf](https://arxiv.org/pdf/1609.03971.pdf)

"Safe Baby AGI" [http://agi-conf.org/2015/wp-content/uploads/2015/07/agi15_bi...](http://agi-conf.org/2015/wp-content/uploads/2015/07/agi15_bieger.pdf)

(cognitive architecture) "OpenPsi: Realizing Dörner's 'Psi' Cognitive Model in
the OpenCog Integrative AGI Architecture"
[http://goertzel.org/OpenPsi_agi_11.pdf](http://goertzel.org/OpenPsi_agi_11.pdf)

