Ask HN: What is the future of the programmer?
40 points by gls2ro on June 8, 2015 | 70 comments
To clarify the question: how do you see the programmer of the future? Some people are talking about the scientific programmer. This seems like a trend now. But is it the only perspective? For example, I'm wondering what a programmer will do in a world where there are a lot of AIs and some of them are writing code already. Or what does a programmer need to know/to be in order to program AIs?

The big picture is not going to change. There are two broad categories of programmer. 1) people who build applications by gluing libraries and APIs together to get what they want. There is no shortage of things people want the machines to do, and this is by far the biggest swath of programmers and will remain so for the foreseeable future. 2) The people who do deeper dives and build high performance subsystems and components and wrap them in APIs for the first group to use. The older components will see slow replacement by newer/better versions of the same, while new areas are opening up all the time. Vision, voice, image processing, genetic analysis, communications, payment systems, medical records, you name it.

If you're the first kind of programmer, do some programming with some of the tools that interest you to learn new things and broaden your experience/understanding. If you're the 2nd kind, pick an area that interests you and roll your own. It doesn't matter if there is a big commercial product or even a big open source version out there; chances are if the right person does their homework they can build a better mouse trap. Just because something is big doesn't mean it's the best it can be; some things just have a lot of momentum.

But no, I don't really see much changing other than the list of things people build and then glue together. That basic pattern (build tools, apply tools) seems to be very persistent. Rather than worry about the future, you should open your eyes to just how wide the field is today.

From talking with hiring managers and upper-level managers & executives across many different startups and big companies, there's an important caveat to "The big picture is not going to change."

Application engineering is getting easier and easier to the point where people no longer need a degree or even extensive training to be able to do application engineering work. Employers are aware of this, startups especially, and often prefer unskilled individuals (or people with only domain expertise) who are willing to work extra hard to teach themselves the skills on the job. They often pay these people less as well. This was explicitly stated as a good strategy by a ycombinator partner, and the executives of several successful, well-known, mid-size companies.

Honestly I hope this isn't true, maybe someone else can shed some light on this.

What will change is who is doing the work, how many people are employed, and how much they're compensated for their work.

I also want to mention that a number of my friends have gotten job offers from Google, and they're much lower than the offers other friends received just a year ago for similar positions (at least $20k less).

Honestly this isn't even close to true. It can seem that way if all you look at is startups that fail fairly early on, but the vast majority of startups that grow beyond the early prototype stage end up rewriting the app using solid engineering principles, or they fold under the load. That requires people who really know what they are doing. Is it easier to prototype an app now? Certainly. Is it easier to build a scalable, stable app now? No, not really at all. It's not breaking new ground anymore, but it certainly requires a great deal of experience and hard-earned knowledge to get there.

> Is it easier to build a scalable stable app now? No, not really at all.

What? It's vastly easier. 10 years ago, you'd often buy servers. Getting an HA DB required a bit of knowledge (especially if you weren't using SQL Server's easy cluster stuff). Backups were another hassle.

Now? Click "New Database" and select a performance and HA level. Done. Look at StackExchange - they just scaled the DB up because RAM is cheap enough to put a TB in a server. That's vastly easier than dealing with scaling out. And even to scale out, "cloud" DBs like SQL Azure have sharding and auto-scale built in.

On top of that, servers are MUCH faster. Look at the RPS that some sites are claiming. They simply aren't that high. 1000 RPS? 10 years ago that required some planning. Now, so long as you don't do dumb things, it's not an issue. (Source: I ran a system that did hundreds of millions of transactions a day, and the hardware requirements were rather light. Each transaction ended up in an ACID DB UPDATE, in addition to archival, auditing, HTTP requests, etc.)

It's simply incorrect to say it's not easier to scale today. It is.

10 years ago expectations were lower too though.

> Application engineering is getting easier and easier to the point where people no longer need a degree or even extensive training to be able to do application engineering work.

There has never been a positive correlation between the ability to build/maintain applications and having a degree or going through extensive training. The job has always consisted, and will always consist, of learning things by yourself, as and when you need them. If someone needs hand-holding in order to learn a new language, a new API, a new tool, or anything else that we use, he is simply not suitable for the job. You cannot count on receiving "training", because with relatively new technology there may initially not even be any training around. In other words, there is rather a negative correlation between seeking knowledge through degrees or extensive training and the ability to do the job. The person may simply never have learned to figure things out by himself, which would entirely disqualify him for the job. Furthermore, application engineering is only getting easier and easier for a certain group of people. For everybody else, it is getting more and more difficult.

We've had RAD toolkits and other highly abstracted application toolkits geared towards ease of use for quite long now. HyperCard, PowerBuilder, Visual Basic... you could probably even throw in the Smalltalk environments in there to some extent.

The difference is that computer networking has become more seamless and ubiquitous, and so the process that you describe of hiring unskilled (by traditional measures) workers doing domain-specific programming work for low pay has become a more rational one to do, because of information accessibility.

The paradox these days is "people who build applications by gluing libraries and APIs" are ending up more highly paid/rewarded/recognized than the developers who create the glue and APIs they are actually using.

So, all the "two week programming camp" developers launch their 12-hours-from-idea-to-launch apps on top of the work of _actually_ skilled people, and all the reward goes to the final layer, not the underlying support.

Right now we're in an environment where, if you do build a better mouse trap, you get $1 while people using your better mouse trap to build Social Mice Death Sphere get $1 million just for adding fancy graphics on top of other people's work.

Is the only reason our entire zero-reward open source system stays alive is because nerds are economically irrational and just want to have fun while turning a blind eye to being exploited?

"Is the only reason our entire zero-reward open source system stays alive is because nerds are economically irrational and just want to have fun while turning a blind eye to being exploited?"

I would say no, because at least to me it seems like those who are just building applications by "gluing libraries and APIs" need to be compensated more because their jobs are much less interesting both in terms of the actual work and also to future potential employers.

I would definitely take a pay cut if I were to be allowed to work on something more interesting provided I still have enough to survive.

For those "nerds", why should it bother them that other people are profiting from their work? If anything I would think it should make them happy to know that what they built is helping others all the while keeping them entertained.

Actually, that's one of the things that bothered me for a few months now. I can't see where this is going, yet. And to be honest, it scares me quite a bit.

There are options.

I think it's disingenuous to think every open source software component should be a VC-funded company operating off support contracts to get any reward. That's essentially the only avenue to "greatness" right now.

In a proper world, VCs would recognize all their darling, bespoke, hand-picked companies depend on the public infrastructure of open source (and unaffiliated) developers to super accelerate the growth of their portfolio companies.

In a sane world, VCs would see it's in their best interest to create a "public benefit" pool of cash where they fund open source developers to create, maintain, and grow open source projects/platforms at full time senior-engineer salaries. Such a pool of cash (contributed to by multiple VC firms) would be administered by a neutral third party (to avoid nepotism as VCs are wont to do) and have "board-like" oversight and project prioritization (maybe each portfolio company gets X votes towards ranking which features to create next) so resources (cash/salary/infrastructure) go towards projects companies are using and needing to grow.

A different option is to encourage VC-funded companies to set aside, say, 5% to 15% of each round for hiring full time open source developers. Open source developers working at a company would be loyal to their open source projects first and foremost, but would accept direction and feature prioritization from the sponsoring company.

Just because something isn't a company doesn't mean it's not valuable and doesn't deserve compensation for services rendered, but in the current environment that's the only way forward until more fundamental change appears.

> Is the only reason our entire zero-reward open source system stays alive is because nerds are economically irrational and just want to have fun while turning a blind eye to being exploited?

1. Fun is its own reward, of course.

2. By producing open source code you get easy marketing and no risk. Trying to sell the same thing requires a whole new skill-set and capital, and you will probably end up with the $0 you'd get from open-sourcing it anyway.

3. Not having a boss (or customer) makes it more fun and allows you to be super-opinionated about how you make the framework.

4. I agree though in principle nerds should be more mercenary. For non-fun stuff, get your employer to pay you to write open-source code, instead of doing it on Saturday at your own expense. For fun-stuff ... well it's fun! Try to make a $ out of it somehow.

I divide the categories a bit differently:

It seems that machines are much faster at learning how to do low-level stuff (compilers, generic problem solvers, etc.) than they are at doing the high-level stuff: understanding the problem and building a high-level spec/system for the solution.

And of course we're becoming more capable of letting domain users enter more and more details of the system themselves, whether by visual programming, scripting, or giving data to machine learning systems.

So if those trends are to continue, professional programmers will specialize in creating more and more highly abstract, generic solutions.

It's not like the machines are just /learning/ how to do low level stuff. People are doing deliberate research, modifying them, and testing them to make that happen.

Low-level optimizations are the least automated process I can think of.

I'm talking about trends, not absolutes (not a Sith).

The share of programmers who are doing high-level stuff (understanding users and encoding that) versus those who optimize the low-level is always growing.

Maybe it's ~90% today, vs. less than 10% in the early years of computing, when every byte counted and systems were less complex.

> Low-level optimizations are the least automated process I can think of.

And compilers automate low-level optimizations every day.

No they don't, humans write compiler optimizations every day, and toil over them. Compilers can only run existing optimizations.

I refer to the above types as (1) Application Developer and (2) Systems Engineer

>> I refer to the above types as (1) Application Developer and (2) Systems Engineer

I try to avoid the word "System(s)" because of confusion between what we're talking about, system administrators, system architects, and control systems folks. But if you're certain the person you're talking to uses the same definition then it's fine.

For skilled, experienced and dedicated developers, the future will be business as usual. For all the fakers, the ones who followed the cash into software, the wannabes and their one-week bootcamp for Ruby on Fails, I have a phrase that will help your future careers: "do you want fries with that?"

I agree with the sentiment, but making a novelty account just to post flame with as little tact as this deserves every downvote it's getting.

Every 10 years or so there's a discussion about whether we've reached the point where programs can program other programs, or WYSIWYG is so good anyone off the street can do it. It never turns out to be the case.

The problem is that software developers are largely detail managers, we work out the finer points of everything and translate it into something someone who doesn't need to think or care about the finer points can use.

For example, you could design a building yourself - but at the end of the day you could just tell an architect what to do and he/she will manage the details for you. A good one will even stop you making obvious mistakes, and save you cash by suggesting better materials etc.


"You will never find a programming language that will free you from the burden of clarifying your ideas."

An interesting vision is that we'll go from programs that can be executed to models that can be solved with generic solvers (e.g. SAT solvers [1], or CP solvers [2])

So if you need to sort an array, instead of programming quicksort, you will simply specify that you are looking for a permutation of the input array T such that for all i, T[i] <= T[i+1], and the solver will search for and find a solution to your problem (i.e. a sorted array) using whatever technique it likes.
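As a toy illustration of that "specify, don't program" idea (a Python sketch with brute-force search standing in for a real solver; `solve_sort` and `spec` are invented names, not any actual solver API):

```python
from itertools import permutations

def solve_sort(t):
    """Toy 'generic solver': search all permutations of t for one
    that satisfies the declarative spec T[i] <= T[i+1] for all i."""
    def spec(p):
        return all(p[i] <= p[i + 1] for i in range(len(p) - 1))
    # The solver is free to search however it likes; here it is
    # the dumbest possible strategy: try every permutation.
    for candidate in permutations(t):
        if spec(candidate):
            return list(candidate)

print(solve_sort([3, 1, 2]))  # [1, 2, 3]
```

A real solver would prune the search space rather than enumerate all n! candidates, but the contract is the same: the programmer supplies only a property of the answer, not the procedure for computing it.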

Many constraint satisfaction and optimisation problems are already solved this way; there's a possibility that this generalises to other problems.

There are many advantages to this: first, you need less (no?) testing, because if the solution doesn't match the problem specification it can only be a bug in the solver.

Second, there's a chance that solvers will eventually be as fast as human programmers at finding solutions, just like compilers are almost as good as programmers at generating assembly, and often better. If this happens, programming in general will become as useless as programming in assembly is today.

[1] http://en.wikipedia.org/wiki/Boolean_satisfiability_problem#...

[2] http://en.wikipedia.org/wiki/Constraint_programming

Why hasn't Prolog programmed us out of existence, then?

Don't get me wrong. Constraint solvers are useful and powerful tools for declarative problem solving, but I don't recall them ever being viewed as general-purpose, especially not to the level of being a generic "autocode" (what compilers were once called, funnily enough) system.

Genetic programming seems like a more likely candidate (with some combination of SAT solvers and statistical learning), but again hypothesizing about such a major thing doesn't appear fruitful at present.

To clarify: CP and SAT solvers are just examples, and obviously they're not good enough yet, otherwise we would be using them already. Genetic programming with combinations of SAT and statistical learning is also a possibility, I guess [1]. The point is that you wouldn't need to instruct the processor what to do anymore; you would just specify the output (aka declarative programming).

> Why hasn't Prolog programmed us out of existence, then?

I would say because of its (low) speed.

> Constraint solvers are useful and powerful tools for declarative problem solving, but I don't recall them ever being viewed as general-purpose, especially not to the level of being a generic "autocode"

They are not autocode, but they are autosolve-my-problem which is in fact what you really want. I don't believe automatically generating code makes any sense. Source code is a very inefficient way to represent a program (i.e. a function) that is only useful because it can easily be written/modified by humans. There's no point in using source code as an internal representation for a program.

[1] AFAIK state of the art SAT solvers already do some level of learning: http://en.wikipedia.org/wiki/Conflict-Driven_Clause_Learning

cvxpy [1] is a good example of this. The constraints are specified symbolically and cvxpy optimizes for you (assuming the problem is a convex optimization problem, etc.).


Yes, there are many of these. I think MiniZinc is also quite popular among the optimisation guys


Reminds me of Expert Systems

We'll be writing yet another goddamn reporting system for marketing. However, the coffee will be exquisitely made by Blue Bottle's autonomous cybernetic drone.

I have written a reporting system at nearly every company I've worked for. I've talked to programmers who wrote reporting systems in the 70's and 80's, and are still writing them today. I think you're right, we will still be writing reporting systems well into the future.

This is hilarious - I just started my internship at a company and a few of the people in the department are working on... you guessed it, reporting systems!

A lot of the other things that are going on are fun, though, and being an intern working on a project from scratch is a rewarding experience.

I believe the current trend of high level languages will continue. Your average coder already is working at a high abstraction level, with many not even having to deal with direct file access or memory management. Even the ones who do work at lower levels frequently do not get down to assembly.

When I started this career a couple decades ago, this was not the case. Most people worked in C, with the majority at least having had exposure to, if not direct professional experience at, the assembly level. The abstractions we work at today existed, but were just written off as toys and scripts, not "real programming".

I think this will continue - most software developers will be re-using components, piecing them together like legos. Actually writing code to make those components happen will be less common.

There will always be people who can work at all levels of the systems... but the crowd of coders just making a living will move higher and higher in terms of abstraction.

Well, I'm gonna be one of the "all levels of the systems" guys. Honestly, if suddenly my job was nothing more than piecing components together, only writing stuff in super high level languages, I'd quit. I love my job, because I actually have to use my brain to do it right.

And then I'd work as a garbage man or something, and write good old code at home. ;-)

Or you can start your own company and write good old code at work for profit.

> Your average coder already is working at a high abstraction level

Scripting is not more abstract than C. You can perfectly well script in C too. Add a garbage collector: you no longer need to call the free() function. Add a few dynamically-growing structures such as a hashtable ("map") and an arraylist ("vector"): you no longer define struct data types and instead store everything in (JSON-style) hashtables and arraylists. Define a union of all primitive types and call that "var". Now you only use the "var" type. That's it. Except for the obvious lack of syntactic sugar that would hide the ugly details, you are now scripting in C.

And who is to blame? It's extremely hard to get to that level of expertise when it comes to low-level C/ASM work. And another thing entirely is turning this knowledge into $$$. Yes, your one-percenter who gets paid $500k at Oracle is probably considered rich and successful.

But how can you start working on low-level stuff and make at least the same amount of money as in other fields/areas in IT?

And do not forget that it is extremely tied to big corporations, Asian HW manufacturers, or other big companies (like car makers) that simply require this type of knowledge.

There was probably zero chance for me to dive deeper into this in my country. Yes, at uni they teach you some C and algorithms, but that is just the tip of the iceberg.

Programming isn't the hard part - the hard part is communicating with people and understanding the problems they need solved. Master that and you will have a job for however long you want.

That depends on how innovative your programming is. Let's take two examples of billionaire programmers: the Google Search guys and the Bitcoin guy. What you see is that they have never, ever asked anybody what problems they needed solved. They imagined something interesting and then they just built it. All preliminary forms of communication with other people would simply have been useless, because none of us knew that we needed Google Search until we saw it, and none of us would ever have believed that Bitcoin could work until we saw it with our own eyes. In other words, if you are doing a truly innovative project, you should not communicate with people but simply ignore them until you are ready to show what you have built, simply because they will not be able to believe that it is possible to fly to the moon until someone actually does it.

>because none of us knew that we needed Google Search until we saw it,

Did you use search engines before Google? It was obviously a shitty experience. And ads? And don't even start on the folks that thought they could categorize the web by hand, or use META tags to sort it all out.

So while the Google guys might not have gone around polling people "What can I build that you'd like and would make me rich?", it was far from a dream of a product no one had thought about. Tons of money was going into search engines and portals by the time Google started.

And Bitcoin? The dream of e-cash has been around for a while as well; it wasn't a brand new idea. The implementation was a breakthrough, but the idea has been around for a while. (Nitpick: at current rates, Satoshi's coins are probably worth much less than a billion dollars. And AFAIK, there's been no movement on the blocks people think are his.)

Agreed, communication is key to effective development. It's also much harder to find a teacher/mentor for the communication part of the skill set than it is for the technical end.

I think the 'programmer' will slowly dissolve away, as the new generation learns how to program just like we used to learn how to read and write. My daughter is 5, she barely reads, but she finds it very easy to make small programs on sites like code.org. She can think in loops and even understands recursion.

Like spelling and maths, programming is a technique to achieve a goal in an area. A couple of centuries ago there were very few literate people and even fewer mathematicians, maybe a couple percent of the population. As education progressed, literacy and maths became the norm, and now an average 16-year-old has an arsenal of tools and equations to solve very complex problems that Pythagoras or even Newton struggled with or had to invent.

The same thing will happen to programming. We're talking about maybe 10-20 years when everyone will know how to program and the tools they will have (programming languages or environments) will be so powerful that they will be able to build their domain specific solution in a very short period of time.

There will be of course people focused on languages or low level operating system stuff, but those areas are in themselves entire branches so it will be more appropriate to call them 'compiler scientist' or 'performance engineers' rather than the generic term 'programmer'.

If that were true, everybody and their little sister would understand the Bolzano–Weierstrass theorem by now. It was discovered in 1817. They teach it everywhere in high schools as a mandatory item in the math class. However, there is no hope whatsoever that more than 10% of the population would ever understand what it means.

Multiplied by the population of Earth, that's still a huge number of people.

What I was trying to say is that there are lot of problems that are already solved and new generations can build on top of that. There are always going to be ever more complicated problems and ever-expanding areas, but in many areas we've pretty much figured things out so people can just use the solutions in their domain specific problems without having to re-implement them. I'm talking about open source libraries, higher-level programming languages and even visual programming which can be used by people to write programs without needing to understand the deeper aspects of things.

Doctors visualising data, artists making interactive art or chaining visual effects, lawyers querying databases - things you needed to hire programmers for will be done by people who aren't doing this full time, but can do it if required.

It definitely seems true. I think it will be even more powerful. I remember reading about Nutonian, which enabled a doctor to build a machine learning model to differentiate between kids who needed surgery and those who did not.

And on the other hand, a similar capability, i.e. machine learning, is being used by researchers to crack Tor, mostly just by feeding data to the algorithm.

So basically, strong tools will be available to everybody, greatly equalizing programmers and non-programmers.

How many jobs in the past hundred years required or would have been made easier if the people doing them understood the Bolzano–Weierstrass theorem? How many jobs over the next 100 years will require or be made a lot easier by knowing how to program?

I think that software that writes good software is quite far down the line; I wouldn't plan my career around it. Humanity's brightest haven't even managed to make a cross-browser styling language that renders lines aligned with each other correctly.

We'll write AIs (heh).

On a more serious note, if program writing neural-nets develop much further I can see the field putting more of an emphasis on the theoretical aspect, and leaving implementations to the AIs -- a bit like the way mathematics is heading. Less time spent on proof-checking (and one day perhaps, construction), etc.

Consider the field from its inception and how it has changed over time. We move into higher abstractions and release ourselves from having to worry about details such as memory management. Even so, as an embedded software engineer I find myself wrestling with memory usage and throughput budgets. As AI becomes reality, it would be ideal to mold (evolve?) an AI to help me address specific weaknesses in my capabilities to allow me to focus on "the problem at hand". If you're curious about that sort of AI development as it relates to embedded software, check out "hyper-heuristics".

You can really answer your question according to the observable trends of the field. The role of the "programmer" will shift, that's all.

Regarding what a programmer needs to know, I think you'd be hard pressed to find something that wouldn't be useful in some way. Personally, I am trying to read as much of the "Old Masters" as I can. I've started with Claude Shannon and have tried to hit all the big names along the way. Bernard Widrow's work (http://www-isl.stanford.edu/people/widrow/publications.html most articles are free) got me started.

> As AI becomes reality, it would be ideal to mold (evolve?) an AI to help me address specific weaknesses in my capabilities to allow me to focus on "the problem at hand".

It's possible to do this already. Although working on application software at the moment which doesn't have any special assurance requirements, I've started making use of an interactive theorem prover to help sanity-check some complicated bits of logic. It's the future, from what I can see.

I agree with the comment citing XKCD "You will never find a programming language that will free you from the burden of clarifying your ideas."

I see programming as a way to model reality. All programs essentially created to do something with objects of the real world or model existing world in some way. And that is where the complexity comes from. As we go higher up the stack of abstractions and tools, some things get easier, but the problems and tasks remain complex just like the world behind them. Performance gains we get come from the fact that many of the modeling and decision making has been done "at the level below" so we can shift into making higher level decisions.

Just imagine a "magic AI" which can create any program for you simply based on your description. For any non-trivial program which does not previously exist you would need to specify so many details that your list of requirements essentially becomes a program (in DSL). The more "defaults" and existing pieces you can use, the easier your job will be.

So, following that logic, the programmer of the future would be using some sort of AI for sure, to take care of the parts of the system which can be reused and recombined (taking care of the common details). And that means that anybody who can describe what they want will be able "to program". But for new problems, they would need to go deeper down the stack and into domain knowledge to be able to build new things.

In terms of "what you need to know" it means having domain knowledge so you can describe what you want. And for new domains - being able to dive deep and learn them, which means the more basic stuff you know (math, statistics, algorithms, whatever), the more efficient you will be in learning and ultimately building.

We will probably continue to program the problem solvers for some time to come.

        $code = '$x = 0;' . PHP_EOL
            . '$y = 1;' . PHP_EOL
            . 'for ($count = 0; $count < 10; $count++) {' . PHP_EOL
            . '    $z = $x + $y;' . PHP_EOL
            . '    echo $z;' . PHP_EOL
            . '    $x = $y;' . PHP_EOL
            . '    $y = $z;' . PHP_EOL
            . '}';

                'set x to 0 and set y to 1 and repeat the following 10 times:'
                . ' set z to x + y'
                . ' and show z'
                . ' and set x to y'
                . ' and set y to z'

You must look at this article about technological singularity.


In my opinion, the next generation of programmers will do quantum computing. Crypto algorithms, protocols, software: we will have to rethink everything to get it working with qubits. (http://en.wikipedia.org/wiki/Qubit)

There will always be programmers. It's just a matter of whether said programmers will be electronic or biological :)

If AI ends up replacing most programming tasks, then our jobs will likely shift to something more akin to AI-centric psychology mixed with high-level software engineering. Much like how only the most hardcore "Real Programmers(TM)" write their programs in assembly or machine code in this day and age of optimizing compilers and interpreted/JIT'd languages, only the most hardcore "Real Programmers(TM)" of the future will actually write software from scratch, with most programmers instead opting to give their AIs descriptions of what they want their programs to do, like how we give compilers or interpreters source code files today.

I expect that there'll be a heavy emphasis on TDD/BDD in such a future; you'd write a specification or (inclusively) a suite of integration tests, and your artificially-intelligent program-maker will create software that meets the specifications provided and satisfies the provided tests.
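In miniature, that workflow might look like this (Python; `slugify` is a made-up target, and the implementation body stands in for whatever the AI program-maker would generate):

```python
# The human writes the spec as tests; the function body below is the
# part an artificially-intelligent program-maker would be asked to fill in.
def slugify(title):
    # stand-in for the generated implementation
    return "-".join(title.lower().split())

# human-authored specification
assert slugify("Hello World") == "hello-world"
assert slugify("Future of the Programmer") == "future-of-the-programmer"
print("spec satisfied")
```

The tests, not the function body, become the durable artifact the human maintains.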

Whether or not these will be "programmers" in the traditional sense is a bit fuzzy; it's conceivable that someday non-technical pointy-haired-boss types could write the specifications themselves, if AI-driven software-building tools come to understand natural language well enough. At that point, though, it's anyone's guess whether PHBs will still exist, or humanity (or AIs, if a Dune-Butlerian-Jihad-style anti-computer revolution manages to happen).

Programming is way too important to leave to humans. I think in the near future we'll have programs that write programs, and humans will be unneeded in this field.

> humans will be unneeded

Nice try, Skynet prototype.

> in this field :)

How far in the future are you wanting to consider? I don't think we're remotely close to even a primitive form of AI, let alone any AI that can program without human input.

Personally, I'd be shocked if this happens in the next 100 years. But it's such a "new" field that trying to predict 100 years into the future is a crapshoot.

> I'd be shocked if this happens in the next 100 years.

100 years ago it was 1915. How much has technology progressed since then? Do you have a sense that we have leveled off (besides single-core frequencies)? Is technology like air travel, where we reach maximum capability soon after creation and then just give up improving? Of course not.

We're still in exponential growth phases of technology. Don't discount how quickly everything can change out from under us.

> can program without human input.

Much of programming is just trying all possible endpoints then seeing what works. How much of programming today is just asking Google, finding examples, then tweaking them? Most programmer jobs these days could be completely automated away given current technology if someone built it.
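Taken literally, that's program synthesis by enumeration, which you can already do today for toy problems. A hypothetical sketch (Python; the search space and example function are made up):

```python
import itertools

# Toy "automated programmer": try every small linear expression a*x + b
# until one matches all the input/output examples, then report it.
examples = [(1, 3), (2, 5), (3, 7)]  # secretly 2*x + 1

def synthesize(examples):
    for a, b in itertools.product(range(-5, 6), repeat=2):
        if all(a * x + b == y for x, y in examples):
            return f"{a}*x + {b}"
    return None

print(synthesize(examples))  # "2*x + 1"
```

Of course, this blows up combinatorially as the space of programs grows, which is roughly why "AI writes the code" is harder than it sounds.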

Good question about how far. I was thinking more in the direction of what the next transformation will be in the skills needed in the programming field, or in the type of work to be done. I think we've gone through two iterations so far: algorithmic, and now integrative/automatic (I don't have a better name). The latter meaning we are now building more advanced solutions, integrated into everyday life or advancing science; for this, the skillset and knowledge needed is much larger than math and algorithms. So I'm wondering, or fantasizing, about what could be next, in the next 5-10 years.

How do you define programming? If it's specifying a program that maps some inputs to some outputs, many AI systems are already able to do this, sometimes even better than humans.

For example: would you be able to write a program that can recognise people's faces with 80% accuracy? Probably not, but AIs can do it.

Neural nets currently still require a program to interpret them, and that program is typically written by a human.

The point is that the human doesn't write a face recognition system, but a system that learns from examples.
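That distinction in one toy example (Python, made up, and obviously nothing like real face recognition): the human writes the learning rule, never the classification rule.

```python
# A perceptron learns logical OR from labeled examples; note that the
# programmer never writes "return x1 or x2" anywhere.
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = [0.0, 0.0], 0.0

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

for _ in range(10):  # a few passes over the training data
    for (x1, x2), label in examples:
        err = label - predict(x1, x2)  # 0 when correct
        w[0] += err * x1
        w[1] += err * x2
        b += err

print([predict(x1, x2) for (x1, x2), _ in examples])  # [0, 1, 1, 1]
```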

If humans cannot program AIs, who else is going to do it?

Humans are messy.

AI will probably dominate everything where the process is well known and BORING. Even if better mechanisms (security, processing, protocols) emerge, there will always be issues between humans and machines.

Here is a good example of how computers and humans make each other's situations more complex.


The only thing I know about the future is that nobody knows about it. If you forced me to answer your question, I'd say it will take a long while until AIs can predict taste, feelings, emotions, "changes of mind", and so on. Maybe AIs will do the websites for us, but even then I would not worry. Instead, I would do something with a real meaning and a real purpose, like maintaining a garden. A skill which might come in handy when the AI wars begin.

Until there's a better way for humans and machines to communicate, programming will continue to thrive.

When I say better way, I don't mean touch or type, I mean better connected to our thoughts.

"there are a lot of AIs and some of them are writing code already?"

That's post-Singularity, and can't be meaningfully discussed.

Since the idea of the singularity has gotten conflated with one particular possible outcome ("rapture of the nerds" is merely one of the better possibilities, not the totality of the idea), let me remind you that the core idea of the singularity is that it is when the future is so different than it is now that our ability to predict it has dropped to zero. There's a certain amount of relativity to it... we're post-singularity for a cave man, for instance. AIs that are smart enough to program themselves will result in a world so radically different that there is no meaningful way to plan for what to do from here in 2015.

In the future before then, which for all we know may well encompass the entire lives of everybody here, and assuming no major civilizational hiccups (which may not be "singularities" but are also fairly difficult to predict around), programming will be one of the few fields that will probably continue to grow more or less indefinitely for the foreseeable future. More stuff is going to be programmable, and the advantage a company gains from having better programmers than its competitors will likely also grow.

In fact I think there's a decent chance that demand for programmers, and especially good programmers, is likely to rise beyond the base level of people who are even theoretically capable of becoming that level of programmer. I suspect we have not actually seen peak salary in this field for good programmers.

(Interesting tidbit... programming has the tendency to "eat" other fields, as others have noticed. Even if genetics and biology manage to do everything that they've ever promised... the top level of biological designers are going to look an awful lot like programmers, moreso than biologists, precisely because the only way we'll ever have that level of control over biology is to abstract the issues away until we have a programming language on top. Nobody can work at the raw gene level, at scale. Oh, it won't be C or Haskell or Python, but it'll still be a programming language. Probably declarative.)

I'm not just trying to say good things to make us happy; this has been my assessment of the field's prognosis for a while, even since before the bubble pop, and I've seen nothing to make me change it, even the bubble pop and the 2008 crisis. Even if we are in another bubble now and it pops, it would have to epically pop, like, the entire country, person by person, decides this whole "Internet" thing is a total waste of time forever for all things, for me to change my mind.

The fact that I feel this way about this field and that I work in this field is not a coincidence. Per my comment [1], when in 1996 I mentally did what I suggested in that post and "computer programmer" popped out in my second slot for Jobs I'd Like To Do, I was pretty grateful, and, again, have seen nothing to make me change my mind since.

[1]: https://news.ycombinator.com/item?id=9667145

"some of them are writing code already?"

I read it as, some AI are already writing code. In the present day.

Writing a bit of code is nothing interesting. I haven't seen anything that convinces me that any AIs have taken any interesting steps in the direction of replacing programmers. In a way, writing code is literally the maximally pessimal case for an AI, exposure to the full power of Turing Chaos [1], the worst possible space of possibilities for an AI to consider. Frankly, even after decades of training and practice, the smartest humans are not even all that great at it.

[1]: https://news.ycombinator.com/item?id=7111308

CRUD websites for some boring management thing.

Web forms
