C# 8 will adopt features that Common Lisp had in 1985, but not yet those that APL had in 1965. And C# programmers will rejoice in how revolutionary that is.
Java will adopt features that C# adopted in 2013. And the Java people will explain that it takes time to get it right, and you shouldn't rush to do anything.
Microsoft will EOL PlatinumDusk, which was the end-all-be-all replacement for GoldenDawn (which itself was the end-all-be-all replacement for Silverlight). This is accompanied by WDE (Windows Display Elements), which replaces WZR, which replaced WPF.
And it will be the year of Linux on the Desktop. (Though, desktops will only account for 10% of web users or "webable device" sales).
I think programming in 2020 (a mere 7 years hence) will look jaw-droppingly similar to what it looks like now. It doesn't even look all that different (to me) from what it looked like in the '90s (20 years ago).
7 years ago (2006), what was different? What new techs have come out in that span?
Ruby on Rails had already been out for 18 months, though it hadn't gained its current level of adoption.
GitHub wouldn't launch until 2008, but the first version of Git was already out.
These differences suggest that whatever new technologies will be becoming mainstream have already launched. I submit that if you traveled forward in time 7 years, you would not only be able to pick up programming in the environment of the time without any review, but you'd be hard-pressed to find what had changed.
Fair point. Still, what programming actually looks like for those environments (black monospaced text in large windows) is little changed.
2020 will see some new devices, but we won't be using our voices, LabVIEW-style icons, or a foreign character set to program these devices. I think in a number of ways it's reassuring that so little of programming skill itself is destined for obsolescence.
Bigger screens, and possibly augmented/virtual reality, will allow looking at larger parts of code. Code will be more modular. Unit testing will be built directly into languages as an intrinsic part, like compiler optimization or GC. More code will be generated (think IntelliSense or code completion, but with a better understanding of what you are trying to do). Code will also self-optimize to a larger extent.
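Some of this is already visible today: Python's standard-library `doctest` module, for instance, treats examples embedded in docstrings as executable tests, which is a small step toward testing as a language-level intrinsic. A minimal sketch (the `fizz` function here is just a made-up illustration):

```python
import doctest

def fizz(n):
    """Return "fizz" for multiples of 3, else the number as a string.

    >>> fizz(3)
    'fizz'
    >>> fizz(4)
    '4'
    """
    return "fizz" if n % 3 == 0 else str(n)

if __name__ == "__main__":
    # Running the module executes the docstring examples as tests;
    # any mismatch between expected and actual output is reported.
    doctest.testmod()
```

The documentation and the test suite are the same artifact, so they can't drift apart silently.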
Touch screens, voice-to-text, and input methods similar to the Kinect will be used, but not to the extent that people expect. Keyboards will still have a big place. Voice is too ambiguous and slow for much of programming, and touch screens and Kinect-style interactions require you to move too much. However, expect that touch pads will be more popular, and that they will have screens in them.
Simple user-facing programming languages for different interfaces will be omnipresent, and their lack of unification problematic. Many things will be programmable, and user-programmable, and while unification efforts for commonality in how you program your fridge/toaster/TV will exist, they will not be enough to keep things from fragmenting. Voice interaction for programming things will not progress as much as it could, given the capabilities of voice recognition, simply because of a lack of well-formed connections between devices.
More documentation will be automatically generated by expert systems.
Saving power will be a bigger focus for programs, especially in enterprise. In mobile, batteries will still not last as long as you want, though e-ink and better chips will help. Concurrency and networking will be very important, with the rise of always-on connectivity and the limits of Moore's law.
On the server side, I firmly believe that an SOA-like architecture will be widely used. I'm biased because I'm the author of an ESB-like system that makes working with services even simpler: https://github.com/salboaie/SwarmESB.
That would be interesting to discuss. As we can see, we have many programming languages now and more on the rise, all aiming to make scripting more human-friendly and to automate most functionality to minimize dev time. I see more of this automation in the future, where you can create a complex application just a couple of clicks away (I believe we can see many such tools now). What do you think?
In the web programming world, we will still be trying to shoehorn fully-dressed circa-mid-90s client-server applications into documents-with-scripts-and-sockets. It will be necessary to know at least 17 languages to develop a modern HTML6 application.
Developers who are today children in primary school will be writing 70-character denunciations of node.js, which will by then be as old-fashioned as Ruby on Rails. This link-baiting intellectual dandruff, masquerading as deep thought, will constitute 100% of the HN front page.
Companies that sprang up to simplify Amazon Web Services will themselves become sufficiently complex that a new ecosystem of simplifier-simplifiers will spring up. You will be able to write code that runs, without modification, at any one of the two dozen layers over cloud providers, who themselves layer over two hundred IaaS providers. That's what the sticker will say, anyhow.
GitHub will have even more completely supplanted SourceForge. Good riddance.
99% of programming will be done in languages which are direct, legal descendants of the languages in which 99% of programming is done today.