Ask HN: What big programming productivity gains do you see happening in 5-10 yrs?
100 points by pritambarhate 10 days ago | 101 comments
If we look at current programming languages & frameworks, there is a trend away from dynamically typed languages and monolithic high-productivity frameworks like Rails. Considering the newly popular languages (Go, Rust, Kotlin, Swift, Nim & Crystal), the main focus seems to be on static typing & AOT compilation to improve performance. There is a definite push towards adopting a functional paradigm mixed with OO, which seems aimed at improving correctness and parallelism. However, there is little that is groundbreaking when it comes to increasing programmer productivity by a huge margin. (Not discounting memory safety and other improvements, but they seem incremental.)

On the other side, there is a definite shift towards "backendless" applications, where the frontend talks directly to the DB, either via autogenerated REST or GraphQL APIs or via proprietary tools like Cloud Firestore. This is good for MVPs.

Frameworks like React & Angular give you decent building blocks to create SPAs, but the learning curve & effort required to create ambitious applications are very high.

There is a proliferation of "Low Code" frameworks which allow you to design drag & drop UIs & connect them with APIs. But it feels like going back to the "RAD" tools of the late '90s, which led to a lot of spaghetti code. Also, some of these are prohibitively expensive.

One noteworthy change is UI design products which try to produce code from UI mockups. These are relatively primitive at this stage. A breakthrough here could definitely be a big productivity booster.

Another productivity booster is Cross-Platform frameworks. It's already here, tried & tested. We know the trade-offs well. It requires a huge amount of effort to create & maintain a cross-platform framework. Big companies are already tackling this.

Considering this background, what do you think will be a breakthrough innovation in programming which will enable programmers to deliver ambitious web/mobile apps at a very high speed?


As I see it, the number one hurdle in software productivity is human understanding. The more understandable software is and the easier it is to make software understandable, the faster and more accurately it can be modified.

This is why functional models like React, DSL models like SQL, and constraint models are so widely used — they make what's happening in the software more understandable (at a worthwhile performance cost).

A few of my own contributions:

* Legible Mathematics: http://glench.com/LegibleMathematics/

* Flowsheets – showing data as you program: https://www.youtube.com/watch?v=y1Ca5czOY7Q

* REPLugger – showing data as you program in large systems: https://www.youtube.com/watch?v=F8p5bj01UWk

* FuzzySet interactive code documentation: http://glench.github.io/fuzzyset.js/ui/


"Human understanding" can wrap itself around very complex things. What I see is an endless repetition of: Let's rip out English and use Dutch from now on! Some time later: Laten we stoppen met Nederlands en op Chinees overstappen ("Let's stop using Dutch and switch to Chinese"). Again, some time later: 让我们放弃中文并使用日语! ("Let's abandon Chinese and use Japanese!")

With the excuse being increased productivity because Japanese is easier to understand.

Assuming this would even be true, it is only true if you put in the work.

Think of how many people used to know JS and CSS. If they didn't pay close attention for, say, 10 years, they are out. The effort put into progressing things failed to make them easy to use for those people.

Therefore I think the future will be more about creating self explanatory tools.

All of the developer tools that are part of the OS working together smoothly, without blaming the user for failing to live up to configuration challenges.

Make it so that we can put a newbie or a child in front of a highly complex repository and have them figure out how everything works without help.


The cycle repeats over and over. A monolith becomes unmanageable and gets chopped into loosely coupled, functionally independent replaceables: mainframes to containers, application servers to microservices, etc. Drag-and-drop coding hits the wall and gets overrun by old languages: 4GL to C++/Java, HyperCard to Objective-C, etc. The monoliths are currently numerous in the single-page-application realm; choose your framework. Eventually these will succumb to smaller, more flexible and purposeful components.

Yes, it's an endless story. What I see in many of my clients' projects (I'm a freelancer, 23+ years in IT) is a return to some kind of "monolith" application.

They surfed on the "everything must be Google-scalable" trend. They built hundreds of interconnected micro/nano-services, upgraded networking infrastructure ($$), OAuth federation, ...

What did they get? Network latency/congestion, security and API versioning management nightmares, unresponsive applications (far too many API calls), performance issues, circuit breakers, caching, ... You see the big picture.

Now, they have returned to a more "monolith" design. They merged many services (ones that weren't called by anything else) together into one application. Not everything needs to be exposed as a service. This simplified security management, greatly reduced network usage and made bugs easier to investigate and fix.

The difference is that their new monolith designs are more modularized than before. Everything is now in a clear module (call it a jar file, lib, package, ...) with a clear contract/interface. This is what I like to call "intra-services": clear service/feature separation minus the networking/security moving parts (and all the band-aid solutions needed to support them, like circuit breakers, distributed caches, ...).

If one day you are the size of Google, Netflix, Facebook, Spotify, ... you will probably have the money/manpower to implement it the "Google way". Until then, KISS!

Note: English is not my first language.


Your English is very good considering.

Scattered smaller components aren't really any more manageable. It's basically the same, but with a load of network calls in the middle.

> Another productivity booster is Cross-Platform frameworks.

I would be so, so happy to see a UI framework that runs (well) on Windows, Mac, Linux, Android, and iOS. My druthers would be for it to have bindings to a JVM language (Kotlin is my happy place these days), but I would be okay writing in any similarly productive language.

Flutter is trying to get there, but it's not a viable solution on the desktop yet, and somehow doesn't even have a decent HTML widget. I do like Dart, however; it's similar enough to Kotlin/Swift that I'm comfortable writing it, even if I have to look up exact syntax now and then.

Electron wants to solve this problem, but we all know the arguments about its bloated runtime and egg-cooking power usage.

I suppose Progressive Web Apps and WebAssembly have the potential to solve this, too.


(Serious question) Isn't this what Java UI frameworks (swing/javafx/etc) are supposed to address? It's been a while since I've been in the JVM universe so I'm genuinely curious.

Also, what about Xamarin/C#? Going from Kotlin to C# should be pretty smooth and Xamarin is supposed to be pretty usable.


Yes, and I've done a lot of GUI work in Swing. The main issue is that Swing doesn't run on Android and iOS.

Qt mostly does this, if you count modern C++ as a productive language.

Why not Qt? I'm pretty sure it works well on most if not all platforms and has bindings for every language.

I'm on the PWA + WebAssembly train now as a solution for this. I really think we're finally getting close to a world where desktop-apps become truly pointless with Wasm + PWAs, and the WebGPU spec.

This. WebAssembly is in a unique position to be tackling this problem since modern web browsers already have sandboxed OS level access to cameras, microphones, desktop, bluetooth and hopefully GPUs.

The future of running Autodesk, Adobe software and AAA grade games on the web is coming nearer with WASM.


People have been banging on about cross platform frameworks for years now. They are all too much of a compromise. It probably won't get any better than what we have now.

The Flutter dev team are currently conducting a survey, and a couple of questions were about desktop mode, so I reckon Hummingbird is coming pretty soon.


What about Tornado FX?

I use Kotlin Multiplatform, and share the SCSS between React (Kotlin/JS) and desktop (TornadoFX).


Not a lot of comments so far, but a few people are talking about stuff that is in the general vein of better tooling and code generation/automation through deeper code understanding by our tooling.

I think that this sentiment is interesting, because the way I see it, this can only happen if we continue the trend of transitioning to static typing.

We can't really do much more for dynamic language tooling today than we are already doing unless we solve the halting problem. The very nature of the problem (of providing smarter tooling) requires us to be able to gather metadata about our code, without having to execute it. This means static information.

We have invented all sorts of schemes to circumvent this problem, like the plethora of documentation generation tools that exist. Of course, there is more to documentation than just information about the types of functions' arguments and such, so documentation tools will always have their place. But I contend that a large part of the benefit a lot of documentation provides for many languages is simply typing information. And I also believe that with well-engineered type systems and API typings, the types themselves can provide enough information to use a lot of APIs without having to reach for external documentation.

Or to paraphrase: in any code documentation generation tool for a dynamic language, there is a half-finished implementation of a badly formalized, insufficiently powerful static type system.

TypeScript's huge rise in popularity seems to suggest that a lot of people used mostly to dynamic languages really wanted static typing all this time but just didn't know it.

I think if we could combine some of the concepts of TypeScript with Haskell's type system, so that we can have both nominal and structural typing, we would have the power to challenge the gods. If we were able to harness all the information in such a type system, we could provide very powerful tooling that would basically be able to write code for us, as long as the types are right (automatically filling typed holes).
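As a rough TypeScript sketch of the idea (illustrative names, not any particular proposal): a sufficiently polymorphic signature leaves essentially one sensible implementation, which is exactly what a typed-hole filler would exploit, and structural typing means values qualify by shape rather than by declaration.

```typescript
// With a sufficiently polymorphic signature, the types nearly
// dictate the body: given only a value of type A and a function
// A -> B, the only sensible way to produce a B is to apply it.
function apply<A, B>(value: A, fn: (a: A) => B): B {
  return fn(value);
}

// Structural typing: any object with a `name: string` field fits,
// no nominal declaration or `implements` clause required.
interface Named { name: string }
function greet(n: Named): string {
  return `Hello, ${n.name}`;
}

const user = { name: "Ada", id: 7 }; // extra fields are fine
console.log(apply(21, (x) => x * 2)); // 42
console.log(greet(user)); // "Hello, Ada"
```

Haskell-style tooling takes this further: with a hole in place of `fn(value)`, the compiler can enumerate the few well-typed terms that fit.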

I am obviously a proponent of static typing, but I'd be interested in hearing if there's someone on the other side of the fence that has some ideas for how we can improve upon dynamic language productivity without solving the halting problem.


Static typing is in my opinion the lazy language designer's way out. Yes, it allows for some introspection but only because the programmer has to type in all this type information! In 2019, we can do so much better.

For example, development consists of typing some code, running it, typing some more, running it again and so on. The runtime likely already guesses (correctly) the types of most variables for optimization purposes. But then this information is thrown away for no good reason. It could trivially be reused to provide context-sensitive auto-completion and other features.

In general, why are editing, compilation and running still treated as three separate steps? Clearly, a lot of information could flow from each step to the others. It's not like a programmer compartmentalizes the job in that way. To him or her it's one task, which is to write software.

And why on earth do we tolerate editors that don't know what a "size" is? Every half-decent programmer knows "size" is a positive integer. Just as we know that "filename" is a string, "temperature" a real and "is_complete" a boolean value. But the editor doesn't. When I write mail_recipient for the umpteenth time, it doesn't even try to help me! There is so much low-hanging fruit here just waiting for someone to make better.


You should google "Hindley–Milner".

And refinement types

As to your last point, those variables could be anything, even in the simple examples you list. Size could be a tuple with the shape of some multidimensional thing, or it could be the string "Large"; temperature could also reasonably be a string, "45C".

Yes, it "could" be anything. But in practice it rarely is anything other than one easily guessed thing. Which is why Google's V8 works so well. Variable names are one signal that can be used to enhance the accuracy of these guesses.
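For what it's worth, a weak version of "the editor knows what a size is" can already be encoded today, for example with so-called branded types in TypeScript (a community pattern, not a built-in language feature; the names here are illustrative):

```typescript
// A "branded" type: a plain number tagged at the type level so that
// arbitrary numbers can't be passed where a validated Size is expected.
type Size = number & { readonly __brand: "Size" };

// The single gateway from number to Size enforces the invariant
// "a size is a non-negative integer" at runtime.
function toSize(n: number): Size {
  if (!Number.isInteger(n) || n < 0) {
    throw new RangeError(`not a valid size: ${n}`);
  }
  return n as Size;
}

function allocate(size: Size): number[] {
  return new Array(size).fill(0);
}

console.log(allocate(toSize(3)).length); // 3
// allocate(-1); // compile-time error: number is not assignable to Size
```

Refinement types, as mentioned above, are the principled version of this: the constraint lives in the type itself and is checked by the compiler rather than at a runtime gateway.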

> breakthrough innovation in programming which will enable programmers to deliver ambitious web/mobile apps at a very high speed.

That's a pretty narrow framing. 10 years from now, a lot of programming won't look like traditional application development at all. For example, a self-driving car or other machine with a similar level of automation is a valuable product that requires a lot of programming but doesn't fit into the category of "web/mobile app." Or consider the development of natural language interfaces: right now this is at the level of Alexa skills, but over the next 10 years we will see frameworks that allow programmers to specify not just a few phrases, but mini-languages for entire domains: pilots talking to airplanes, doctors talking to EMRs, etc.

Use cases like this suggest that frameworks similar to PyTorch/TensorFlow will become increasingly important. These frameworks have features (particularly automatic differentiation, very sophisticated yet easy-to-use SGD optimizers like Adam, and built-in GPU support) which hugely increase productivity and lower the barrier to entry compared to the tools of 10 years ago, yet they still have tons of rough edges. It's easy to imagine that 10 years from now they'll be even more powerful and easier to use; for example, I'm pretty excited about the future possibilities of tools like [AutoGraph][AG].

[AG]: https://medium.com/tensorflow/autograph-converts-python-into...


40 hour max work weeks enforced and 4 day work weeks. Shifts for people that need to monitor ops.

Offices for everyone on staff who likes to work in a quiet place, and public areas for people who like to work in those places.

Also, the minimization of time spent in meetings.


> Also, the minimization of time spent in meetings

I would argue that this should be "the minimization of unstructured/unguided/unclear meetings".

Tbh I've seen a non-trivial number of instances where engineers have wasted their own time or someone else's by not participating in organizational meetings - either by doing redundant work or having an unclear image of priorities/blockers/help needed.

Nobody wants to waste time in meetings, and most people who call meetings can't run them well. But a well-planned and well-executed meeting (30 mins tops) can save hours, if not tens of hours, of work time per week.

I also believe that the "just let the devs do dev work and avoid meetings" mantra is border-line meme status and (in my opinion) stems from toxic anti-social behavior, rather than efficiency. However, I also believe that meeting time per week should have an upper-bound (something like 4-6 hours, including a 15-min daily standup) to better enforce efficiency, rather than throwing 1-hour meetings all over the place and hoping they don't use all the time.


Was funny seeing the BBC article on "Office Space" yesterday talk about the switch to open offices without mentioning the backlash. Was sure HN would be up in arms.

Some other reinvented wheel that makes one thing 10x easier but takes five years to build up an infrastructure from scratch for everything else that real programs need (an infrastructure that was already present in whatever the new thing replaced). Let a bit more time pass to forget a few other things, rinse and repeat. In other words, no net productivity gains, just churn.

Came here to say this, but I'm not quite that cynical. No revolutionary change, but there will be incremental improvement. (On the other hand, we're going to keep tackling harder things, so the net gain in the rate programs are produced will be zero or negative.)

I also like the point that the "revolution" is in one aspect, not the whole thing, and it requires changing other aspects, and is therefore much less of a net win than it appears...


The pendulum is now swinging back to types. Because it's a pendulum, in 5-10 years the next hot productivity thing is going to be dynamic, type-free languages.

I still remember when people were excited by the productivity gains from Ruby. "No time wasted wrestling with casts and typing".


Rather than a pendulum, I think the reality is that typed languages are now going through a significant period of evolution, especially with the advent of _gradual_ typing. TypeScript, for example, is just a superset of JavaScript, but with a compiler that's smart enough to infer many of the underlying types from context. The notion of rigorous, strong typing has always been marred by the concept of casting, but this is practically nonexistent in a gradually typed language. Do you get the same degree of rigor? Perhaps not, but you do benefit from some of the guarantees made by the compiler.

The underlying trend here is toward more pragmatic language design, and I think this is definitely a trend that will continue in the near future (5-10 years).
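A minimal sketch of what that pragmatism looks like in TypeScript itself: inference where possible, with `any` and `unknown` as graduated escape hatches (the example values are made up):

```typescript
// Inference: no annotations, yet `total` is known to be a number.
const prices = [9, 4.5, 12.5];
const total = prices.reduce((sum, p) => sum + p, 0);

// Gradual typing: `any` opts a value out of checking entirely, so
// existing JavaScript can be migrated one annotation at a time.
function legacy(data: any) {
  return data.whatever; // allowed, checked only at runtime
}

// `unknown` is the stricter escape hatch: it forces narrowing first.
function parsePort(raw: unknown): number {
  if (typeof raw === "string") return parseInt(raw, 10);
  if (typeof raw === "number") return raw;
  throw new TypeError("port must be a string or number");
}

console.log(total); // 26
console.log(parsePort("8080")); // 8080
```

Note that no cast appears anywhere: the compiler either infers the type or makes you prove it with a `typeof` check.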


Agreed. It seems typed languages are adding a lot of features that made dynamic languages interesting. You can see the same in databases where SQL engines are adding features from NoSQL.

Overall I feel there is much more language development going on now compared to, say, 2000-2010. I am not a big fan of JavaScript, but I feel its rise has put pressure on other languages to keep up.


The gains from every developer having really good go-to-definition and find-references for all languages and code bases would be immense. The Language Server Protocol (LSP) from Microsoft has standardized a way for languages to provide these features, and more and more editor and language implementations are being built.
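For context, LSP is plain JSON-RPC under the hood; a go-to-definition request from the editor looks roughly like this (the URI and position are made up):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "textDocument/definition",
  "params": {
    "textDocument": { "uri": "file:///project/src/app.ts" },
    "position": { "line": 41, "character": 12 }
  }
}
```

Because every editor speaks this one protocol, each new language server immediately lights up go-to-definition in all of them.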


WebAssembly as the prevailing cross-platform VM and used as the main distribution mechanism for most applications.

We're gonna need a lot of quality abstractions between now and then and pray we don't end in feature-flag fragmentation hell where those applications end up with specific, non-cross-platform requirements.

I'm picturing a situation like the old Java app that handles remote console on some HP ILOs where you have to have a certain version of Java installed using a certain browser with a whole bunch of security exceptions manually entered. I know it's being planned specifically to avoid these situations, but that was the goal of Java, too!

This would really be fascinating to witness. I hadn't considered that anything would knock the JVM off its throne, but that has the potential to happen.

I don't think programming productivity gains are coming.

I see webassembly terribly fragmenting the frontend skill distribution, not in a good way.

I see the next generation of programmers more populated by tech debt creators and less accepting of mentorship.

I see non-volatile storage gains dramatically changing how OSs think about RAM and storage, even to the point that we might see RAM mostly vanish (unless you need volatility, like crypto keys), and machine shutdown/startup becoming instant. As non-volatile storage continues to approach RAM latency, we'll need a new generation of operating systems and probably programming languages too.

I see people continuing to overpay for PaaS systems that can't scale, creating software that's ever more difficult to change and tightly coupled to proprietary cloud systems.


> I see the next generation of programmers more populated by tech debt creators and less accepting of mentorship

Yep - even as a youngin' I'm seeing this happen rapidly, with tech being the new "hotness" that everyone wants to jump on career-wise. With 6-month bootcamps, Udemy, etc., people think you can jump RIGHT into the industry. And they're right: with the wrong interview panel, it's easy for a near zero-experience person to get in the door, because so many companies are focused on increasing engineer headcount vs focusing on the quality of their teams.

Honestly I've made a _lucrative_ career out of dealing with tech debt that other engineers/teams have left behind due to bludgeoning their way through the process with tools they've only used once.


TypeScript revolutionized the act of writing JavaScript

React and JSX revolutionized the act of writing HTML

GraphQL revolutionized the act of connecting front end and back end

But for CSS... SASS, SCSS, LESS, and such are by comparison pretty pedestrian tweaks on vanilla CSS


I think React is an interesting thing to think about here. There were a bunch of libraries that got ever more fancy about doing DOM manipulation. Those libraries helped manage the complexity of the model of doing in-place DOM manipulation. React chose something different – much more like the templates of yore – and instead of managing the complexity, it introduced a simpler model that eliminated the complexity. That made it really different.

SASS/etc help manage the complexity of CSS, but they don't simplify anything at all. I think there's a scope problem here: the styling and layout of an application isn't just a bunch of selectors and styles in CSS. It's the class names and structure in the code, it's the implied connections between the expected rendered layout, it's a private language of expected elements and their semantics. You can't simplify that just using CSS. We're still waiting on someone to come up with a combined approach that actually works. I personally eagerly await someone's genius idea!
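A toy sketch of that model shift, with a plain object standing in for the DOM and no actual React involved (all names illustrative):

```typescript
// The old model: mutate the UI in place, with one hand-written
// mutation per possible state change (a plain object stands in
// for the DOM here).
const dom = { text: "" };
function onCountChanged(count: number) {
  dom.text = `Clicked ${count} times`;
}

// The React-style model: the view is a pure function of state.
// The framework diffs successive outputs and applies the DOM
// mutations for you, so you never write them by hand.
type VNode = { tag: string; text: string };
function view(count: number): VNode {
  return { tag: "button", text: `Clicked ${count} times` };
}

onCountChanged(2);
console.log(dom.text);     // "Clicked 2 times"
console.log(view(3).text); // "Clicked 3 times"
```

In the first model the number of mutation paths grows with the number of state transitions; in the second there is exactly one render path regardless of how state changes.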


I think that CSS-in-JS is the new jam, especially if you're utilizing a state-driven UI view layer like React or Vue.

Right, but with CSS-in-JS I'm still fundamentally dealing with all the complexity and subtleties of raw CSS, just with better access to variables and such. Bootstrap is a much more disruptive rethink of CSS (ditto flexbox), but they only disrupt a small portion of the CSS surface area.
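To make the CSS-in-JS idea concrete without committing to any particular library, here is a toy sketch (all names illustrative): styles become plain data, so variables, functions and composition come for free.

```typescript
// Styles as plain data: variables, functions and composition come
// free, instead of being bolted on by a preprocessor.
const palette = { primary: "#0366d6", danger: "#d73a49" };

function buttonStyle(variant: keyof typeof palette) {
  return {
    color: "#fff",
    background: palette[variant],
    padding: "8px 16px",
  };
}

// Serialize to a CSS declaration string; a real library would
// generate a class name and inject a stylesheet instead.
function toCss(style: Record<string, string>): string {
  return Object.entries(style)
    .map(([k, v]) => `${k.replace(/[A-Z]/g, (c) => "-" + c.toLowerCase())}: ${v}`)
    .join("; ");
}

console.log(toCss(buttonStyle("danger")));
// color: #fff; background: #d73a49; padding: 8px 16px
```

As the parent comment notes, this co-locates styles with components but still leaves you writing raw CSS properties underneath.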

> TypeScript revolutionized the act of writing JavaScript

I agree with this point.

> React and JSX revolutionized the act of writing HTML

I see nothing revolutionary in React.

> GraphQL revolutionized the act of connecting front end and back end

OData existed before GraphQL and still does.


The line between programmer and dev ops will be moved with more responsibilities on the developer. Cloud services will continue to decrease the barrier of entry while entering a price war, and every team or individual will have a set budget inside the cloud. Integration with IDEs and improved online UI and CLI tools will make these tasks very easy for everyone. We can also expect better/easier/cheaper support infrastructure for these applications. Imagine one click upload of a web app from IDEs with load balancing, monitoring, analytics, and profiling.

I'd also like to see some improvements on programming paradigms to take advantage of multicore CPUs. There are lots of ideas in this space, but no clear front runner ideology. I expect we'll continue to see improvements in this space, and hopefully a winner will emerge.


> more responsibilities on the developer

Adding more responsibilities to a developer will not have a positive impact on the developer's productivity. Abstractions fall apart, and time has to be put into learning/keeping fresh with those technologies when it occurs.


As long as developers are graded on their ability to crank out features most will continue to not care about reliability, alerting, etc., and there will be a need for dev ops/sre/etc.

Javascript performance on mobile continues to get better every year.

For many of the informational based apps, it's totally overkill to create a native experience. One app, responsive views, and your small team is suddenly incredibly productive.


Disappearing middle layers:

The OP mentions "backendless applications", and using Javascript and the browser as your platform feels like the same kind of trend.

Sure, all those layers are still there — OS, TCP/IP, raster — but we've settled on a standard simplified foundation that's good enough and that everybody implements, whether it's Safari-iOS-touchscreen-ARM-LTE or Edge-Windows-mouse-x86-Ethernet.

"Static web applications" likewise — maybe instead of your own backend you call a few SAAS's that you've configured in a GUI rather than written code for. The point is you've reduced the number of layers you have to maintain, while still targeting a huge number of users.

In general, the trick is settling on a foundation. The web has evolved rather than being handed down from on high. What consensus will the next 5-10 years of evolution bring? Or will new platforms appear? Will Fuchsia show up and look a lot like the web?

Maybe the quality of that foundation can be measured by programmer productivity. A web standard that's consistent from device to device is a boon to the whole world. Thank you MDN for telling me what works, and where! https://developer.mozilla.org/en-US/


Native apps have benefits other than performance…

> what do you think will be a breakthrough innovation in programming which will enable programmers to deliver ambitious web/mobile apps at a very high speed

I'm not sure that speed-to-market will be where the breakthroughs are. But constraining myself to that conditional...

The best one I can think of is a library of curated, easily embeddable snippets. Like Helm charts for code. We already have package managers, but as the industry grows more and more with a larger number (not necessarily percentage) of developers churning out sub-par code, companies are going to want to leverage reuse sans fear. You already see this curation with some languages/ecosystems having sets of blessed libraries, but a library is too high of an abstraction and it is language specific. Some group one day will offer embeddable, curated snippets (or entire libraries) that follow a certain set of rules to give confidence to their users and they will apply those rules (with language specifics) across runtimes. In both good and bad ways, some systems may end up being a walled garden of code where they only accept this curated code which, as walled gardens can be, are good for the users (i.e. devs referencing the code) and bad for the devs (i.e. devs writing new components being forced into others' rules).

Would you feel better with a "Guaranteed by StringentCodeTrust(tm)" seal on a library/repository you reference? A non-pragmatic middle manager at Big Corp might.


Web application deployment got way easier than it was 10 years ago. Now there are plenty of alternatives for CI and testing tools that can automate the process of releasing and verifying code at scale. By scale I mean lots of developers collaborating on the same code base while handling massive loads at the same time.

It's also the field that I think will see lots of improvements and maybe some fundamental disruptions. Maybe it will be GitHub actions and it's right around the corner :)


> Considering this background, what do you think will be a breakthrough innovation in programming which will enable programmers to deliver ambitious web/mobile apps at a very high speed.

Quality, Speed, Price. Pick two.

What is the driving force to "...deliver ... at a very high speed?" It's easy to assume that from the list above quality would be the one to suffer. Doesn't anyone here on HN ever get tired of being everyone's beta bitch?


+1

Kubernetes and other container systems are so widely adopted and accepted that most common system services are solved and require only minutes to integrate. As a result, people move back to their monoliths because it is easy and productive.

AWS has lots of tools which have made me feel like a super developer. For example, using Lambdas and S3 to build massively parallel processing systems in just a few days.

I think just finally getting rid of unsafe languages, particularly at the bottom. This will I think increase "productivity" of the world at large, by reducing the number of crashes and security holes they have to contend with every day.

I feel like we have finally arrived at the time when more people accept no-one (or at least, no-one outside a tiny set of people) should be using languages with undefined behaviour, or unsafe memory accesses by default (looking at you C and C++).

If "we" want to be taken seriously, and as more of the world relies on computing, we can't be building on a foundation of sand and saying "Oh, as long as every programmer is clever enough, all the time, we'll probably be OK".


But then the next generation of languages needs to make sure that you can actually do what you need to do. With C/C++ I am pretty sure that whatever problem I have can be solved. It may be ugly, but you can go very deep, down to the last bit. So a safe language should allow you to do that if needed.

Rust is quite good for that. Also, you can always get a very clever person to write a very small piece of C code.

Or perhaps, we just need our programs to be 20% slower? Would you buy a house which was 20% cheaper, but if a plumber was having a bad day there is a significant chance your house will explode if you turn all the taps on at once?


It's not only about speed. I was more thinking about programming close to hardware. There you need all the tricks.

I think C and C++ have too much buy-in from too many people to be replaced anytime soon.

I agree, but I think things are moving. Increasingly few projects are started in C and C++, and people are making serious efforts to convert some existing projects to (for example) Rust.

Also, looking at things like the Linux kernel, they are moving to add as many safety checks to C as is reasonable. Clang is adding an "all stack variables are initialised" mode.


As languages with more interesting type systems come along, more powerful IDE tooling will be possible (e.g. idris 2/blodwen code completion https://twitter.com/edwinbrady/status/1050528305743622147)

More hopefully, maybe new language developers will put more work into easier and easier FFIs? How often have you thought "I wish language X could catch on, but all the useful libraries are in language Y?"


I expect to have a visual dataflow programming environment similar to node-red, Yahoo! Pipes etc. that is touchscreen-friendly, has plenty of community-maintained building blocks and converts to a variety of low-level languages for easy deployment to many platforms. Hopefully, we'll get rid of clunky IDEs/text editors with lots of boilerplate, whole file hierarchies to maintain and all that other ancient stuff invented for 1960's teletypes.

I think flow-based programming in general has a lot of untapped utility. I have had good success putting complex data-processing workflows into Python with Luigi. TensorFlow and ROS are also dataflow(ish) systems that are gaining popularity.
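The core of the flow-based model is small enough to sketch independently of any particular tool (names are illustrative): nodes are typed transformations, and a flow is just their wiring.

```typescript
// A node is just a typed transformation; a flow wires them together.
type FlowNode<I, O> = (input: I) => O;

// Connecting two nodes only type-checks when the output type of the
// first matches the input type of the second, which is exactly the
// guarantee a visual wire between ports should give you.
function connect<A, B, C>(first: FlowNode<A, B>, second: FlowNode<B, C>): FlowNode<A, C> {
  return (input) => second(first(input));
}

// Three illustrative nodes: parse, filter, aggregate.
const parse: FlowNode<string, number[]> = (csv) => csv.split(",").map(Number);
const keepPositive: FlowNode<number[], number[]> = (xs) => xs.filter((x) => x > 0);
const sum: FlowNode<number[], number> = (xs) => xs.reduce((a, b) => a + b, 0);

const flow = connect(connect(parse, keepPositive), sum);
console.log(flow("3,-1,4,-1,5")); // 12
```

Tools like node-red or Luigi add scheduling, persistence and a UI on top, but the graph-of-typed-stages idea is the part that transfers everywhere.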

The cost of using and teaching formal methods will continue to come down and program synthesis will become the practical means of automating a lot of lower-level code.

Have there been any major productivity gains in the past 5-10 years? Not really. So I see no reason to expect any in the next 5-10 years.

Agreed. Productivity is just another word for control. The trend is to transfer control away from the developer and to the identity provider (OAuth2), app store (Apple/Google), search engine (AMP), cloud provider (AWS serverless), content author (Wordpress), ad exchange (DoubleClick), CTO (microservices) etc. Not necessarily good or bad, but no reason to expect that trend to reverse.

I could definitely see some type of declarative language + constraints being driven by a voice AI system.

This would let you rapidly prototype ideas


AI powered intellisense and code generation.

Intellicode[0] is an extension that adds AI-powered IntelliSense to Visual Studio. I have used it for the last couple months and it works pretty well.

[0] https://visualstudio.microsoft.com/services/intellicode/


TabNine is the all-language autocompleter. It uses machine learning to provide responsive, reliable, and relevant suggestions.

I will probably try out "Kite" soon.

"Kite" seemed to be a really cool tech, but I read about the dark patterns[1] used by them here on HackerNews and never gave them a try and most likely will not do it anytime soon.

[1]: https://news.ycombinator.com/item?id=14836653


JavaScript precompiler techniques (like https://svelte.technology/) should find their way down to everyday use cases, like abstracting away Canvas DOM programming, WebGL and other programming difficulties.

> Frameworks like React & Angular give you decent building blocks to create SPAs, but the learning curve & effort required to create ambitious applications is very high.

Any ambitious application is going to require a high learning curve and lots of effort. Otherwise it wouldn't be ambitious.


- Context-aware IDEs with significantly more powerful versions of today's intellisense tools.

- Compilers with even more powerful implicit behavior- forget automatically creating Get() and Set(), we'll be creating entire inheritance structures with one function definition.
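Compiler-derived boilerplate already exists in milder forms today; a hedged Python analogue of "forget writing Get() and Set()" is `@dataclass`, which derives the constructor, representation, and equality from the field list alone:

```python
from dataclasses import dataclass

# @dataclass derives __init__, __repr__, and __eq__ from the
# annotated fields, so none of that boilerplate is hand-written.
@dataclass
class Point:
    x: float
    y: float
    label: str = "origin"

p = Point(1.0, 2.0)
print(p)                      # Point(x=1.0, y=2.0, label='origin')
print(p == Point(1.0, 2.0))   # True
```

The comment above imagines pushing this much further, to whole inheritance structures; the trade-off (as the reply below notes) is how much implicit machinery a reader can still follow.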


> we'll be creating entire inheritance structures with one function definition.

Hooray, it's just what I always wanted. A deep nested inheritance chain that not even the original implementer really knows well.


Seconded and I'd even like to extend it to say that we'll be moving closer to automatically written code via machine learning.

So we'll see a shift away from "developer specific boilerplate" in the same way libraries have abstracted a lot of basic setup boilerplate.

You'll say "initialize a web service with these models" and the IDE will build out the majority of the project.


As long as the built parts are source code which follows human-readable formatting standards, so developers can fine-tune from the prototype, I agree.

Have you used Haskell? It can automatically generate a lot of things for you if you allow it. Sometimes I really find it amazing.

I think people will see that type safety and functional programming don't impact productivity much in a business setting, and we'll see a flight from these languages as examples of horrible legacy code emerge and show that the problem is behavioral and sociological, with nothing at all to do with enforcement in the code or compilation vs. testing.

I also think more job candidates will reject interviews that waste time with unrealistic homework assignments, marathon on-site interviews, etc.

Finally, I think productivity losses and health issues related to open-plan offices will soar, but companies will continue to be disingenuous and act like they improve collaboration or save money, when that is uncontroversially false.


Continued increase in network speeds.

Runtime code analysis: instead of analyzing code at rest, we need to have better tools to tell us what the code actually does. Static typing doesn't help you trace back from an interface to the code that generated that interface. Runtime analysis can potentially be applied across systems, so we can analyze not just one program, but how a whole myriad of systems work together, which is the way most complex systems work.
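A small taste of what runtime analysis means in practice, sketched with Python's `sys.settrace` (the `tracer`/`helper`/`main` names are illustrative): instead of reading the code at rest, we record which functions actually execute.

```python
import sys

# Record every function call that actually happens at runtime,
# rather than inferring behavior from static analysis of the source.
calls = []

def tracer(frame, event, arg):
    if event == "call":
        calls.append(frame.f_code.co_name)
    return tracer

def helper(x):
    return x * 2

def main():
    return helper(3) + helper(4)

sys.settrace(tracer)
main()
sys.settrace(None)

print(calls)  # prints ['main', 'helper', 'helper']
```

Scaling the same idea across process and network boundaries (distributed tracing, e.g. OpenTracing-style systems) is what makes it applicable to "a whole myriad of systems working together."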

The decline of JavaScript. WebAssembly will soon have DOM access. It will be game on for other languages in the browser.

Let's be honest: is JavaScript the winner because it is amazing, or because it was the only choice in the browser? I know my opinion. The other ecosystems, with different strengths, will soon be able to achieve single-language stacks when the playing field is leveled. This will be a boon to productivity.


A quick scan through the comments and I was surprised that no one referenced the No Silver Bullet essay[0].

From where I sit I find that the Cambrian explosion of frameworks, systems and methodologies is causing us to regress in terms of productivity.

[0] http://worrydream.com/refs/Brooks-NoSilverBullet.pdf


Maybe the software industry will discover constraint programming the way that it discovered functional programming over the past decade. If it did, that would almost immediately increase programmer productivity for non-performance-critical tasks (since the compiler could supply the implementation), while making formal methods more easily applicable to production code & supplying the prerequisites for more clever automatic code generation methods.
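The "compiler supplies the implementation" point can be made concrete with a tiny constraint problem (the regions, adjacency list, and naive solver here are all invented for illustration): you state the constraints, and a solver, however crude, finds a satisfying assignment.

```python
from itertools import product

# Declarative problem statement: color a small map so that no two
# adjacent regions share a color. We only state the constraints.
regions = ["A", "B", "C", "D"]
adjacent = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "D")]
colors = ["red", "green", "blue"]

def solve():
    # Naive solver: enumerate all assignments and keep the first one
    # satisfying every constraint. A real solver (e.g. MiniZinc, a CP
    # or SMT engine) would propagate constraints instead.
    for assignment in product(colors, repeat=len(regions)):
        coloring = dict(zip(regions, assignment))
        if all(coloring[a] != coloring[b] for a, b in adjacent):
            return coloring
    return None

print(solve())
```

The productivity claim is exactly this division of labor: the programmer writes the `adjacent` list, not the search.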

(In general, software engineers becoming vaguely aware of tech invented in the 1960s and 1970s is a huge productivity boon. If you, as a developer, want to impress your peers, you should take a couple hours to read what software engineers thought was obvious prior to 1980, when software engineers had a strong background in software engineering.)


Any suggestions for sources on what was obvious prior to 1980? Happy to spend hours reading... but hours searching + reading adds up rapidly in a time-scarce environment.

> you should take a couple hours to read what software engineers thought was obvious prior to 1980

If you care about the subject this much and you think that everything was better before the 80s, then why don't you link to or name some sources you value?


An infinite loop of software trends :-)

To answer the parent: I actually think a better understanding of personality types, and of how to manage people better, will bring far better productivity.

The jungle of technology alternatives for solving business problems causes a lot of unnecessary fights. It doesn't matter whether you choose React, Angular or vanilla JS: if your team is fighting itself, it is not productive.

With this in mind, don't hire people that always agree with you, you always need someone to challenge you.


I think you've hit the nail on the head. The magic bullets are human factors from the interpersonal domain.

As someone who has used (and suffered at the hands of) NSLayoutConstraint, I'd actually love to see a more accessible and generally understandable version of this come into existence. Constraint programming is an underrated area of research at the moment, and I think it can be made so much more powerful by using what we've learned from building functional and declarative languages over the last decade.

These approaches did not fall out of favor because programmers were so much smarter, as you are implying. They fell out of favor because they were impractical at the time: either there was no language with critical adoption, the computers were not fast enough, the solutions had problems the designers just assumed would be solved later, or the concepts did not survive their impact with reality.

Time is a flat circle.

The industry oscillates. Each oscillation is a mirage that presents itself as a productivity gain but is in actuality an echo of the past.

Are the techniques we use today really more productive?


I’ve been playing with CHR lately and it’s really dope. I’m not a big fan of prolog

https://en.m.wikipedia.org/wiki/Constraint_Handling_Rules


Slimmer frameworks. I went from jQuery to React to Angular to Vue and they keep getting simpler and more user-friendly. I still think the templating system in Vue is a bit clunky but better than alternatives. Expect responsive frameworks to become even more scalable.

Search, i.e. lower activation threshold for accessing intelligence from strangers elsewhere in the world.

Smarter people, perhaps?



I read this and had to check my browser tab to confirm I wasn't on the main page and this was an article title.



