Reflections of an “Old” Programmer (bennorthrop.com)
493 points by speckz on Oct 6, 2016 | 325 comments



I'm a bit older than the author. Every time I feel like I'm "out of touch" with the hip new thing, I take a weekend to look into it. I tend to discover that the core principles are the same: this time someone has added another C to MVC; or they've put their spin on an API for doing X; or you can tell they didn't learn from the previous solution and this new one misses the mark, but it'll be three years before anyone notices (because those with experience probably aren't touching it yet, and those without experience will discover the shortcomings in time.)

When learning something new, I find that this group implemented $NEW_THING in a completely different way than that group did with the exact same $NEW_THING. I have a harder time understanding how the project is organized than I do grokking $NEW_THING. And when I ask "why not $THAT_THING instead?" I get blank stares, and amazement that someone solved the problem a decade ago.

Sure, I've seen a few paradigm shifts, but I don't think I've seen anything Truly New in Quite Some Time. Lots of NIH; lots of not knowing the existing landscape.

All that said, I hope we find tools that work for people. Remix however you need to for your own edification. Share your work. Let others contribute. Maybe one day we'll stumble on some Holy Grail that helps us understand sooner, be more productive, and generally make the world a better place.

But nothing's gonna leave me behind until I'm dead.


A little past 50 here. I can relate.

Yet some things have barely changed. More than 25 years ago it was Solaris or HPUX, using C, talking TCP/IP and mucking about with SQL. Some of us were sad at SVR4 as we still preferred the BSD view of things. Running Unix and 10 terminals on some 680x0, learning to get efficient in vi. Some preferred emacs. I was excited for the future, especially of hardware and the OS, as I'd seen it on the Amiga. I'd seen some of the crazy ideas being thought about, like we might have this mad thing called VR. That looks interesting. It was insanely expensive and made me feel a bit seasick, but maybe it'll be the cool fad of 1990. We'll see.

Fast forward to today. Oh. Well there's a lot more switches, and security has changed some things, desktops have pushed ideas because they think they're phones, but I'd never have conceived how similar so much would be. BSD isn't forgotten, mysql grew up and then some. Emacs and vi are still being discussed. Why aren't old programmers more popular?

I'm still interested in hip new things. I'm losing some enthusiasm though. There's an increasing number I haven't got to, as there's just too many of them. Too much is just fashion. So often I'm struck by how $NEW_THING gives something, but takes something or adds needless complexity, yet manages to be a variation on an aging theme. In frameworks the best $NEW_THING fashion changes hourly.

Leaves me rather disappointed, but I'm still looking. And hoping.


> Why aren't old programmers more popular?

Cost, and a reduced willingness to work the death-march hours that many places want new hires to work (and that new hires are willing to work) to "prove" their worth.


What about applying concurrency to more problems? So many of the basic ideas were discovered and tried a long time ago, but widespread application of concurrency still hasn't caught on.


Concurrency is still as hard today as it was 40 years ago. Prudent developers avoid it whenever possible.


> Every time I feel like I'm "out of touch" with the hip new thing, I take a weekend to look into it. I tend to discover that the core principles are the same: this time someone has added another C to MVC; or they've put their spin on an API for doing X;

But that is only for the 'abstractions' they implement; the details are another thing.

I feel the only thing that really changed is that now big companies with PR / marketing departments are on the case while before it was more a geek thing. There is nothing fundamentally different in new frameworks (heck, most of them are just rehashes of others that are many years old with very small improvements), but suddenly the echo chamber makes it out to be vital to your career.

Like React. It's nice, but come on... Every frontend programmer I know is fretting that they are not into it enough, because you will die or something if you are not. That seems like good marketing by Facebook, to get such a solid drive behind it so quickly. But it's not needed for anything; you can still just use what you used before, often faster/better (because you are good at what you've done for many years, right?), and you don't have to bother with learning the latest thing all the time. With React, because you are drinking the Kool-Aid, you have to update/refactor/redo stuff often because of changing libraries and new insights. It would make you stressed if you feel you have to keep up with all of that.

Also, some tools seem to have been made just to look smart. Really, something like Webpack doesn't have to be that obfuscated. It really looks like it has been made that way just to say: 'ah, you just do not get it'. I see people (20-somethings included) really sweating at their keyboards when trying to figure out more than the basics of that thing; so why are people using it? Why is it actually popular?


> React

Similarly on Flux: back in 2005 I implemented an immutable store where only incoming messages change state and changes are distributed to widgets.
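A minimal sketch of that pattern (not the 2005 implementation itself; all names are illustrative), written in Go since it comes up later in the thread: state lives behind a single message loop, only incoming actions change it, and every change is pushed out to subscribers:

    package main

    import "fmt"

    // Action is the only way to request a state change.
    type Action struct {
        Type    string
        Payload int
    }

    // Store owns its state; only the loop goroutine touches it.
    type Store struct {
        state       int
        inbox       chan Action
        subscribers []chan int
    }

    func NewStore() *Store {
        s := &Store{inbox: make(chan Action)}
        go s.loop()
        return s
    }

    // loop applies incoming messages to the state and broadcasts each change.
    func (s *Store) loop() {
        for a := range s.inbox {
            if a.Type == "add" {
                s.state += a.Payload
            }
            for _, sub := range s.subscribers {
                sub <- s.state // distribute the new state to every widget
            }
        }
    }

    func (s *Store) Dispatch(a Action) { s.inbox <- a }

    // Subscribe must be called before dispatching in this toy version.
    func (s *Store) Subscribe() <-chan int {
        ch := make(chan int, 1)
        s.subscribers = append(s.subscribers, ch)
        return ch
    }

    func main() {
        store := NewStore()
        widget := store.Subscribe()
        store.Dispatch(Action{Type: "add", Payload: 2})
        fmt.Println("widget sees:", <-widget)
    }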

Around the same time we started using feature flags in our application.

We also did CI around 2000. Didn't call it that of course.

Never did it myself but heard of TDD long before it became a thing. Again under a different name.

Erlang. Superior technology that forms the heart of new kid on the block Elixir. Built long before the Bay Area CS grads re-invented ageism and claimed young people are 'just smarter'.

Sure some things are new, but there's just no respect for experience any more.


What about Go? I like Go a lot because I feel like they did hit the mark and they did learn from their predecessors. Strong standard library. Clean syntax. Easily compiles to every major platform and architecture (all you need to do is set an environment variable). Everything statically compiled (as a software distributor, this is super nice!). Phenomenal testing framework.

It's not all that innovative, it's just very easy to use, and very well put together.
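For example (a toy file with made-up names): the whole test harness is the standard library's testing package, and targeting another OS/architecture is just a matter of setting the GOOS and GOARCH environment variables before go build:

    // sum_test.go -- run with `go test`; no third-party framework needed.
    // Cross-compile the same package with e.g.: GOOS=linux GOARCH=arm64 go build
    package sum

    import "testing"

    // Sum is defined here only so the test has something to check.
    func Sum(xs []int) int {
        total := 0
        for _, x := range xs {
            total += x
        }
        return total
    }

    func TestSum(t *testing.T) {
        if got := Sum([]int{1, 2, 3}); got != 6 {
            t.Errorf("Sum([1 2 3]) = %d, want 6", got)
        }
    }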


I tried it and was immediately put off by the monumentally dumb approach to package management (or rather: its non-existence), lack of generics and approach to exception handling.

IMHO, if Go had been invented by anybody other than Google it wouldn't have enjoyed anywhere near the success that it did.

I'm usually extra suspicious of "hot" open source technologies with a marketing budget and/or tech behemoth behind them. It's not that they can't be fundamental steps ahead; it's that the marketing can end up causing undeserved popularity.


I'm in the camp that actually prefers -- strongly prefers -- Go's style of error handling over exceptions. It's not a perfect language of course, but when you say it's successful only because Google marketed it, I think you're off by quite a lot. Go is popular in part because of the things you hate that other people like. Also, Google hasn't really done any real "marketing" of Go that I can see. They released it with what amounted to enough fanfare to say "Hey, here's a thing. We think it's pretty cool. Here you go."

And the point of Go is that it isn't trying to be "fundamental steps ahead". If anything, Go is an opinionated statement that programming was better in 1992 than it is today, so let's just fix the annoying shit from 1992 and pretend the word "enterprise" had continued to just mean the ship from Star Trek.
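For what it's worth, a minimal sketch of that style (illustrative names only): errors are ordinary values returned next to results, and every call site decides what to do with them rather than throwing or catching anything:

    package main

    import (
        "fmt"
        "os"
        "strconv"
    )

    // parsePort returns a value or an error; there is nothing to throw or catch.
    func parsePort(s string) (int, error) {
        p, err := strconv.Atoi(s)
        if err != nil {
            return 0, fmt.Errorf("invalid port %q: %w", s, err)
        }
        if p < 1 || p > 65535 {
            return 0, fmt.Errorf("port %d out of range", p)
        }
        return p, nil
    }

    func main() {
        port, err := parsePort("8080x")
        if err != nil {
            fmt.Fprintln(os.Stderr, "config error:", err)
            os.Exit(1)
        }
        fmt.Println("listening on port", port)
    }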


I have similar gripes with Go. I really love programming with CSP techniques, and Go does a good job of implementing a lot of this at a lower level. Clojure also draws from CSP and from Go's implementation of CSP, but lacks some of the nice things Go has going on here at lower levels; however, I end up preferring Clojure for so many other reasons. I still would not shy away from Go, but it's far from a perfect language and doesn't really improve on a lot of things that similar languages don't do well.
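To make the CSP point concrete, here's a small sketch of what it looks like in Go (names are made up): independent goroutines that share no memory and communicate only over channels:

    package main

    import "fmt"

    // square reads numbers from in and writes their squares to out.
    func square(in <-chan int, out chan<- int) {
        for n := range in {
            out <- n * n
        }
        close(out)
    }

    func main() {
        in := make(chan int)
        out := make(chan int)

        go square(in, out) // the worker process, in CSP terms

        go func() { // a producer process feeding the worker
            for i := 1; i <= 5; i++ {
                in <- i
            }
            close(in)
        }()

        for sq := range out { // main acts as the consumer
            fmt.Println(sq)
        }
    }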

I think it's not just Google, but the cult of Rob Pike, Ken Thompson, etc. I appreciate their work on so many things, so I don't mean any ill-will. I was a user of the Acme editor and Plan 9, but just like those projects, Go has so many polish issues in critical areas. The package management alone is ridiculous. The common theme in all these projects is that for every good idea, there are an order of magnitude more critical mistakes or divisive ideas that cripple the end result in some way.

Go is alright, but it's not really that interesting to me or a monumental leap. There are a few ideas that others could cherry-pick and put into much better designed languages to take it all a step further in my opinion.


Bingo! :)

I'm a 50+ developer who has increasingly felt that "life is too short" to spend all my time trying to keep up with the latest fads.

That being said, I have found myself doing more "fun" programming in Go lately, precisely because, as you said, they hit the mark with how simple and clean the language is, with little of the baggage that comes along with a lot of languages and frameworks (I'm looking at you Rails and React) today.


I am a language freak (hard to admit, as who isn't these days?) but I have to say I haven't tried Go yet... so yes, I must.


As a language geek myself, I've been working a lot with Elixir lately at $WORK, which has been an absolute dream. There are pioneer taxes and bug fixes we have to push upstream occasionally to some of our dependencies, but it's been extremely pleasant so far. Clean syntax, well thought out internals, a 30 year old host platform designed for server programming, Lisp style macros (it really feels like a Lisp), an "explicit over implicit" mentality, immutable everything, and a wonderful "Rails without magic" web framework to go along with it.


That almost sounds exactly like the Java argument from the late 90s:

clean syntax, standard libraries, write once, run anywhere


React is a lot more than it seems. The essential problem it tries to solve is componentization: how can you have a large team of varying skill work on separate parts of a page in a web application without one part breaking the others in hard to detect ways?

Many people focus on JSX or virtual DOM and muse about how that just duplicates the browser-- what's the point? But that isn't the point. It's componentization and encapsulation for large teams at scale, and for Facebook it works, it makes them more successful at pulling together disparate modules on a single page than their competitors, so there is some meat to this $NEWSHINY.

Now, if the browser itself had been properly designed years ago to be componentized and encapsulated, we'd have those features without inventing a new DOM or creating a new tagging language or isolating CSS. There is a web components standard that offers much of the same thing... It came out around the same time as React, but still isn't widely used outside of Google Polymer. Truth is these were developed in parallel before they knew of each other.

Anyway, React makes a lot more sense if you think of it as rewriting the browser from outside the browser. In the old days that stack would have been hard to impossible -- you would have simply started your own browser. And people would have shamed you for not following existing standards, so you would have petitioned the W3C or served on committees to bring your real-world cases for years, being mostly ignored.

Now, devs have the ability to change the browser quickly and find things that work (and also create chaos and suffering).

Meh. Kids. ;)


Honestly mate, you're just talking about the same old, same old.

Every framework is about componentization and encapsulation.

You could take React out of your post and replace it with any framework name in the last 40 years and it would have made 'sense' at the time.

Like the author said, right now a veteran looks at React and sees the mistakes of 15 years ago when we mixed code and presentation.


The Webpack story is interesting because its author isn't (well, wasn't) even a JS developer in his day job, nor part of the JS community. I don't hate Webpack, but I really have no idea where it came from. Browserify was made by one of the Node ecosystem's top contributors, so at least that makes sense.


And webpack's documentation is the spawn of Satan. Going outside the typical use cases is downright impossible. While it has the widest feature set out there, it's incredibly difficult to believe in its reliability. That said, since you can always QA your bundle, reliability isn't as much of an issue I guess...


> And webpack's documentation is the spawn of Satan.

An insult to our Master, Satan would never create something so awful :P


FWIW webpack's docs are getting much better for v2


Late forties developer here.

> Every time I feel like I'm "out of touch" with the hip new thing, I take a weekend to look into it.

There's my problem right away. I can't just "take a weekend" to learn some new shiny thing. I have a partner and children who I want to be with at the weekend. And I'd rather go climbing or hiking or even just go out on my bike, than learn another damn API. Twenty years ago I had evenings and weekends to burn. Now I don't.


I'm 26, and modulo the kids, this is how I feel as well. I've been programming professionally for almost ten years, and I feel less and less pressure to keep up with the latest packages on NPM or whatever the flavor of the month is. It's much more interesting to know what has been tried before, and why that didn't stick. Like this newLISP thing, why should we suddenly start writing Lisp? The language is older than C, for god's sake.


> why should we suddenly start writing Lisp? The language is older than C, for god's sake.

If so many "modern" languages still copy features from Lisp (hello C++) then why not use the real thing?

All those nice "new" features which Python and C++ are praised for (lambdas, closures, list comprehensions) have been available for almost 50 years. Lisp was way ahead of its time. It just lacked the hardware power which we have today. It is still ahead of our time. Consider Lisp macros, Genera and MCCLIM. MCCLIM is an interactive GUI for shells which has been neglected for decades. It is being revived right now to make it available for modern Lisp distributions. Modern Lisp just lacks one thing to be the real deal: a native Lisp Machine.

https://common-lisp.net/project/mcclim/


"If so many "modern" languages still copy features from Lisp (hello C++) then why not use the real thing?"

Because if it hasn't "succeeded", for suitable definitions of "succeeded", in 50 years, then as an old fart approaching 40 myself my assumption is that there is a good reason. I was much more willing to believe the "everybody else is just stupid and can't see the obvious brilliance" reason when I was younger, but as I've learned more, I've learned just how many factors go into a language being successful. Lisp may have all the features, but there are also good reasons why it has never really quite taken off.

That said, it is also true that using languages that bodge Lisp-like features on to the side after several decades of usage is generally inferior to languages that start with them. It's one of the reasons I'm still a big proponent of people making new languages even though we've got a lot of good ones. (I just want them to be new languages somehow; a lot of people just reskin an existing language with a slightly different syntax and then wonder why nobody uses it.)

But those reasons have all been listed before, and I don't expect that listing them again will change anything, so I'll skip it.

(I did a quick check on Google Trends; Clojure seems to not be growing. It's not shrinking, but it's not growing. That was Lisp's best chance for growth I've seen in a long time, and that window has probably now closed.)


The reason Lisp is not mainstream is its power. It makes writing DSLs extremely easy, so that everyone can write his own DSL to solve a certain problem. That style of writing, however, makes Lisp code unsuitable for group work, and hence unmaintainable. It explains (imho) why there are so many unmaintained Lisp projects.

Clojure has a different problem. It is based on the JVM infrastructure, which was (imho) an unfortunate decision. Access to Java features is nice, but dealing with Java stack traces is not fun. Also, the usual startup time of Clojure apps, in the range of seconds, is not acceptable (Tcl/Tk apps start in a fraction of a second). AFAIK Clojure is also not suitable for mobile app development. The Clojure developers should have used their own VM, or they should have provided a Clojure/Lisp compiler for native compilation. LuaJIT has demonstrated how incredibly fast a suitable VM can be.


There's a lot of truth to this. One thing I've noticed is that there are many different styles of writing Common Lisp; many of the older styles (heavy use of lists as data structures or very heavy use of macros) tend to not work well (at all!) in team software engineering environments. OTOH, if you agree on some coding standards with your team, team software engineering with Common Lisp can be a real pleasure. Of course that's true in any language, but if you're a lone ranger, CL makes it much easier to shoot yourself and your team in the foot than most other languages.


I have felt some of the pain and gripes you have with Clojure, and while I am not denying that some of them are very real, I am not sure you understand the mentality and philosophy behind Clojure, at least as most of us who use it seem to have settled on it. Put succinctly, it is to balance practicality with features to get real work done.

Clojure is quite wisely based on the JVM because the idea was not to create a Lisp replacement or a new Lisp, but rather a practical Lisp. Some of the historical problems with Lisp and many other languages like Smalltalk were related to specialized hardware, development environments, and/or ecosystems. Clojure did away with most of these concerns by attaching itself to one of the largest existing environments.

Using the JVM was a wise decision that has/had many advantages not limited to:

- Ability to leverage an already huge ecosystem of libraries, tools, servers, etc. that are well-tested

- Already battle-tested and working package system - this is severely underestimated by some popular languages

- Justifying its existence as a tool alongside other JVM languages within an organization, not a full replacement

- Existing, highly optimized JIT

- Reasonably fast, nearly for free

Originally there were some plans to expand more to other environments, for example the CLR, but the JVM got the most attention and in the end this was practical.

As for some of the real drawbacks:

- Clojure alienates Lisp zealots and people who could never grasp Lisp. IMO, this is a stupid person/psycho filter so I don't see it as a drawback, but it's worth noting.

- Startup time, as you note. Startup time sucks, but it has gotten better and there are workarounds, the same ones you could traditionally use for Java apps. The mobile dev issue you note is somewhat wrong; without going into a huge explanation, one option is to more or less enjoy a huge part of Clojure via ClojureScript, or in other words, targeting JavaScript and using things like React Native.

- Garbage. This is an issue of a lot of things on the JVM and Clojure is no exception. You can work around this somewhat with specific coding practices if you need to, but yeah, Clojure isn't going to be good if you can't fathom an eventual garbage collection cycle.

- No TCO, which is mainly a JVM issue if I remember right. This would be great, but it's a tradeoff and Clojure has negotiated this somewhat by providing what I feel is more readable code than when I worked in certain Lisps.

- Some legacy baggage in the standard lib, for example clojure.zip. Recently they've been more brutal about what can and cannot go into standard libs. Every language though suffers from this a bit.

Regarding developing its own VM, I think you again miss the point about practicality. If Clojure did this, it would have been even more years before its release. Moreover, comparing to Lua is a bad example as it is a very different language (yes, I used Lua professionally). Lua achieves a lot by really keeping it simple, and while there is merit to that, Lua leaves a ton to be desired which I won't get off-topic about here.

So Clojure could work better in its own VM, but then you'd lose the JVM ecosystem along with many other things. I personally would rather have the ability from the beginning to reach for a huge number of libraries rather than have nothing but what other people writing the language provide, or via crude things like calling back into C. There are many talks about all of this, many from Rich Hickey himself. I think you really missed the point of Clojure, and I am more of the mindset that I am glad it exists and is not in a perpetual state of flux, so that I can use it today, get things done, and not have it relegated to some research language I could never justify in a workplace. And no, I am not a Clojure zealot; I use about a dozen languages in any given year depending on my project and interests. There's a lot I prefer in Lisp over Clojure, but I see Clojure as taking some lessons from various Lisps rather than trying to be the one true Lisp.


I understand why Clojure deliberately focuses on the Java ecosystem. If I were still in Java development today I would likely use Clojure. If I were in web development I would likely use ClojureScript. Currently I prefer the Emacs/SLIME/SBCL toolbox, which is more responsive (and Nim, by the way).

My humble two cents to the Clojure team:

1) You should implement a cache mechanism ("save-image") for native code so that at least the annoying startup time of Clojure apps is gone. I wonder why Java doesn't support native caches to this day.

2) The weird Java stack trace problem could be solved by providing an individual stack tracer which is close to the source code. I know that this is not possible for Java libraries, but at least Clojure stack traces should be presented in a more convenient manner.


Hence Pixie [1], which builds a Clojure-inspired language on a JIT VM. I'm not sure what the status is, but it appears to be active.

[1]: https://github.com/pixie-lang/pixie


> Clojure seems to not be growing … that was Lisp's best chance for growth I've seen in a long time

I wouldn't call Clojure a Lisp; it's a Lisp-like language with some interesting ideas, but not really Lisp at all.

As to why Lisp has failed to take hold where other languages have succeeded, I'll use a G.K. Chesterton quote: 'Christianity has not been tried and found wanting; it has been found difficult and not tried.' It's not that Lisp itself is difficult; it's that Lisp is different from lesser languages, and people have difficulty learning something different from what they're used to. Lisp has not been tried and found wanting; it has been found different and not tried.

So … try it! You may be surprised.


https://www.theguardian.com/society/2016/apr/07/the-sugar-co...

"In 1972, a British scientist sounded the alarm that sugar – and not fat – was the greatest danger to our health. But his findings were ridiculed and his reputation ruined. How did the world’s top nutrition scientists get it so wrong for so long?"

Without advancing an argument for Lisp, I'd just note that it's possible - in any discipline - for something to be decades (or more) ahead of its time.

Of course, that wouldn't mean that everything old is avant-garde just because it said it was. But, if people are still or suddenly looking at it, and looking at it hard, I would at least take that as a signal to give it some due consideration. It's apparently fighting age, and winning to some degree.


I agree with the parent about "new features" and I'll add a few things.

I can't stand the mentality of old = bad or not popular = bad. Most of the time, like anything in life, things don't "win" for being the "best" technically or for the most merit. There are usually many factors at play, and marketing, misinformation, stupid people, timing, and more have a huge role. While some of these factors may be good reasons, for programming, many of them are rubbish. We go down the bad paths more than the good ones in computer science it seems.

There are countless technologies that were way ahead of their time and for various reasons didn't end up market leaders today. Lisp is one, Smalltalk is another. Even Object Databases and various permutations of "NoSQL" databases existed for a long time. CSP is yet another that is being "rediscovered" via Go, Clojure, and some other languages. The list goes on for hours. Whether it was/is hardware or many other reasons, these things didn't "succeed." Despite that, it doesn't make a technology or idea useless if it's a good fit for the task, nor does it make it worth ignoring, not learning, or improving.

If there's one thing I've noticed from my considerable years in software dev, it is that most people are wrong most of the time about most things. Look at science - the field has historically been full of naysayers, people who cling to the past for their own agendas, saboteurs, fools, politically or religiously motivated morons, and so on. If we just always went with what the masses say or for that matter, even the so-called "experts," we wouldn't have any scientific advancement at all. Looking back in science, we can also see that many people had discovered or nearly discovered quite a lot. Although somehow we eventually unearthed some of these discoveries, the advancements they created never came in their lifetime or even century or eon.

There's a lot wrong with modern Lisp today, but most of the foundations are solid and it gets pretty tiring of people using Lisp like a pejorative, especially if it's because they lack knowledge about it or do not understand it, or perhaps worse, have never used it.


I'm still amazed that Python is popular. It is in every way a dumbed-down Common Lisp. Common Lisp is also compiled to native machine code, so it runs dramatically faster than Python. (The fact that Python isn't compiled has nothing to do with its dynamism and everything to do with how good the compiler writers of Common Lisp are.)


> I'm still amazed that Python is popular. It is in every way a dumbed-down Common Lisp.

The smaller and more focused feature set is probably one of the reasons it's popular. There's also ergonomic reasons, which have a big effect on whether people who aren't initially fully committed tune out or blow up early on in their encounter with the language.


I guess you're right. I get annoyed hearing people say "all the parentheses drive me crazy" but I'm driven crazy by the indentation-bracketing of Python. I'm sure once I got used to it in a good editor it wouldn't be an issue.


I'm 26 and I'm just starting my career. You're making me feel old.


I'm 28 and am still useless. Don't feel bad.


NewLisp is over 25 years old.


> It's much more interesting to know what has been tried before, and why that didn't stick. Like this newLISP thing, why should we suddenly start writing Lisp? The language is older than C, for god's sake.

I wouldn't say Lisp "didn't stick"; it's been in continuous use for half a century. Writing Lisp code would be anything but "sudden".

Of course we can say the same thing about COBOL, but that seems mostly due to the inertia of legacy applications; its target demographic of business applications now favours languages like Java.

On the other hand, new projects are being written in Lisps, and it's still spawning new languages (e.g. Common Lisp, Clojure, Kernel, all the Schemes and their descendants, etc.). This seems to indicate that people want to use/extend Lisp, rather than having to use it (although there are certainly legacy Lisp applications out there, which may or may not be horrendous to maintain).

Also, as Alan Kay points out, Lisps frequently "eat their children": someone invents a "better language", then someone else figures out a way to do the same thing as a library in an existing Lisp. This means that very old dialects/implementations may be perfectly capable of using paradigms/fads/etc. which were only invented/popularised much later, e.g CLOS for OOP or call/cc, shift/reset, etc. for coroutines/async/promises/etc.

In contrast, those languages which truly "didn't stick" are seldom heard of, outside the "inspired by" sections on Wikipedia. Many of them are uninteresting, such as machine- or company-specific dialects, which died along with their hardware/sponsor. Others can be very informative, especially regarding "flavours of the month" and "paradigm shifts":

- Simula (arguably the origin of OOP as found in C++, Java, C#, etc.)

- ALGOL (the archetype of lexically-scoped procedural languages, like Pascal, C, Go, etc.). Actually, I still see "idealised Algol" discussed in cutting-edge programming language research, so maybe it still has some life!

- SNOBOL, which enjoyed great success in the world of string manipulation, and now seems to be completely replaced by awk/sed/perl/tcl/etc.

- MUMPS, which integrated a key/value database into the language. Still used, but seems to be for legacy reasons like COBOL (e.g. see http://thedailywtf.com/articles/A_Case_of_the_MUMPS )

- Refal, which relies on pattern-matching for evaluation (now widespread in the MLs (Standard ML, OCaml, F#, Coq, etc.) and Haskell-likes (Haskell, Clean, Miranda, Curry, Agda, Idris, etc.)). Also notable for using supercompilation.

- ABC, the prototypical 'scripting language', and a direct influence on Python.

- FP, which emphasised higher-order programming.

- Joy, a pure functional language based on stacks and composition, rather than lambda calculus.

- REXX, widely used as a "glue" language; I think AppleScript has comparable use-cases these days (I don't know, I've never used an Apple OS). Seems to be supplanted by shells and embedded scripting languages (e.g. Python, Lua, JS)

- Self, famous for prototypical inheritance and the origin of the "morphic" GUI.

- Dylan, effectively a Lisp without the s-expressions. Created by Apple, but quickly abandoned.

- Prolog, a logic language based on unification. Still has some users, but didn't take over the world as some thought it would (e.g. the whole "fifth generation" hype in the 80s).


The ICON language is another good one that never took off. It was the successor to SNOBOL and was built around generators and goal-directed programming: two things that are again prominent in stream-based metaphors.

More about Prolog: Prolog, like Refal, had pattern-matching long before the modern functional languages. It's an extremely useful feature. Prolog is also the best database query language ever invented, which is why systems like Datalog and Datomic borrow from it heavily.


Another (very) late forties dev here too. Your use of the word "burn" concerns me. I'd prefer the word "invest".

As I've said before on this subject, if you're careful what shiny new thing you bother to learn, and restrict yourself to doing it every couple of years rather than every weekend, I find the family and kids will never notice.

And anyway, at our age time flies so damn fast the kids'll be grown up by next weekend anyway :)


Early fifties here.

> Your use of the word "burn" concerns me. I'd prefer the word "invest".

I agree, but... sometimes it really is "burn". I don't keep up with front-end tech these days. If/when I need to work with it I'll learn it, and not before. If I spent time getting up to a real working knowledge now, there's every chance that in, say, 4 years when I needed to work in that space that I'd have to start learning from scratch again rather than brush up.

I do spend time learning new (to me) things, but I'm very selective. I devote most of my time to learning things that will give me alternate ways of doing things, rather than learning every nut and bolt of the latest framework.


Good point. Appreciate your concern, but I wasn't being clear.

By time to "burn" I was referring to all the time I had in my twenties: when spending an evening or a day or a weekend doing nothing in particular didn't seem like much of a waste.

Whereas now, "me-time" comes in much smaller units (maybe an hour after the kids are in bed and before I get tired). And I'm very careful to invest it: I recently spent eight months studying UX on Coursera, and am getting back into electronics and Arduino after a break, for example. I just don't have big blocks of time any more. Certainly not whole weekends.


I'm 39. There's no denying tech requires you to constantly keep up. But in my opinion, it's not as bad as most make it out to be. For the most part I read blogs while lying in bed at night (about 15 minutes or so), and I always have a side project going that I get to here and there. I still maintain a healthy life balance and get to enjoy things outside of tech just fine.

To be fair, I don't have kids. I'm sure that's a huge factor.


The frequency with which "hot new things" appear isn't every week. I'd venture a guess and say one weekend every two months. I don't know where you live, but here the weather isn't perfect for 52 weekends of the year. Some weekends it's not even worth sitting in front of the computer - total laziness with the family is ideal.


Soon in my forties. When I need to take a few days to look into something, I do it as part of my work. That's part of the maintenance that comes with having a senior and experienced programmer.

Sometimes my boss asks me about a new fancy tech. "I'll look into it" means I'll take a few hours of my time to give a good appraisal of it.


While I'm not old, I have been a professional developer for 17 years and feel that first downwards curve in the author's graph (and I get frustrated at younger programmers reinventing the wheel from time to time).

However, I never find myself short of things that are genuinely new and interesting to learn and work on. A lot of these things are not new at all, but they are new to me: statistics, linear algebra, machine learning, compiler construction, PL research, model verification, graph algorithms, calculus, engineering modeling, vectorized programming, GPU programming, geometrical computation, and it just goes on and on. In each of those, you will have the fads of the day, the current hot framework, the second current hot framework, the old framework that works better. At the end of the day, I get the textbook, look up university courses on youtube, pick whatever framework shows up first in google, and spend time on the fundamentals. As a crude example, I may have to look up how to do a dot product in numpy / matlab / mathematica / c++ / R every second day, and when I learn something most of my programming is SO-driven, but I can also perfectly well write a dot product in clojure/factor/elixir/arm assembly if you asked me to, and then do a vectorized dot product in CUDA/Neon SIMD/VHDL, because I spent time on the fundamentals. The best thing that happens is when you start to see how one technique appears in so many different fields (for example SVD).
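As a concrete illustration of that "fundamentals" point, the operation behind all of those library calls is the same handful of lines in any language; a hedged sketch, written out by hand in Go only because that language appears elsewhere in the thread:

    package main

    import "fmt"

    // dot assumes a and b have the same length.
    func dot(a, b []float64) float64 {
        sum := 0.0
        for i := range a {
            sum += a[i] * b[i]
        }
        return sum
    }

    func main() {
        fmt.Println(dot([]float64{1, 2, 3}, []float64{4, 5, 6})) // prints 32
    }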

Nothing is new, but most of it is new to me.

After that I do spend a significant amount of time researching my tools (IDE, supporting apps, build systems, frameworks, compilers, programming languages), but that's the craftsmanship part of it, and is kind of like doing the dishes and going to the farmer's market to have a nice kitchen to cook in and great ingredients to cook with.


There's an eternal war between "avoid reinventing the wheel" and ahistorical "not invented here", isn't there?

Because the profession is so heavily skewed towards the young and self-taught, people don't seem to know about the solutions of a decade ago and their merits and demerits. This is partly why software componentisation as "parts catalog" has never really taken off. It's easier to reimplement or independently reinvent something than it is to find the right existing solution and learn it.

It's as if it were easier to turn every bolt you needed on a lathe than to go to the shop for them.

(The closest might be npm, but then we see what an engineer would call an excessively large bill-of-materials, as trivial projects pull in thousands of dependencies)


Software componentization never took off because every attempt at it either leaks abstractions like a sieve or is so purely functional as to be impractical for real world use.

Shall I list the componentization tech of yore? CORBA, OpenDoc, Java Server Faces, COM, DCOM (oh god, CORBA again!), SGML (the original component framework), XML zoo (oh god, CORBA a third time)... JSON zoo (a flippin' fourth time, are you kidding?!)

componentization is something we're still figuring out. Alan Kay and Smalltalk came the closest to it (see previous comment about practicality though), and the mainstream is just now starting to think of JS and Ruby as "novel". NPM? Please.

We have a long way to go before componentization actually works. So yes, I guess I agree that it's simply easier to reinvent things to a specific context than solve the problem of sharing code context-free.


You forgot microservices. I'd forgotten Java Server Faces, was that related to Java Beans?

I'll join you in a glass of "oh god, CORBA!" At least one good thing about the web is that people have given up hoping that RPC could be transparent.



CORBA and DCOM were really not great but I don't think you're qualified to have a go at these things if you think that SGML is related to DCOM or that XML and JSON are "componentization tech".

I'd also note that despite how unfashionable it is and was, COM was a remarkably successful component framework. A lot of Windows apps use COM heavily and not because they were required to do so - they componentized themselves using COM because they wanted to and it delivered real value to them.

In addition, I'm not sure how you are defining "component", but the term is rather similar to library, and modern apps frequently pull in enormous quantities of libraries. It worked out OK, actually.


That's the constant battle. "Oh, this library kind of does what I want, but this behavior kind of sucks, and this bug hasn't been fixed... Do I shoehorn it in and write all the glue and adapter code? Fuck it, I'll just start over, and build yet another grid component, or datepicker, or rich-text editor."


"This is partly why software componentisation as 'parts catalog' has never really taken off."

Well, that and the fact that OO turned out to not be a very good mechanism for building the parts catalog on. In the end I'd judge it as only slightly more successful than procedural programming on that front.

For instance, Haskell's "parts catalog" is somewhat smaller than other languages'. But the parts do what they say they will, and generally go together pretty well once the community is done chewing on them and fixing them up. (Here I mean fundamental tools like parsers or text template systems, not merely "libraries to access this API" or "bindings to this particular library".) All those restrictions that go into Haskell are there for a reason.


>There's an eternal war between "avoid reinventing the wheel" and ahistorical "not invented here", isn't there?

There is an eternal iteration between reading disgusting documentation (or nonexistent documentation) and finding the hidden shortcomings of existing solutions - and just doing the only "open source" that is accepted in every company: rewrite it yourself.


   eternal war between "avoid reinventing the wheel" and ahistorical "not invented here"
Did you intend to say something else? Because that's the same thing twice: you reinvent a wheel because the other wheel was 'not invented here', so if you avoid reinventing a wheel, you are suffering from NIH.

It's hard to avoid reinventing the wheel if all you know is what was invented 'here' and 'recently'.


The more I learn about fundamentals (recently non-determinism + predicate logic), the more I realize the trends are shallow (or not so shallow) obfuscations of the same basic blocks. And pardon the following hint: I found FP a pretty good vehicle to express these blocks in an abstract manner. Binary operations, composition, accumulation, iteration, induction, state transitions.


What's the last truly new thing you can think of? I'm interested because I am young (22) but have studied programming language paradigms and history and I also agree a lot of "new" stuff is old.


New stuff: Machine learning that works. Rust's borrow checker. 3D SLAM that works. Voice input that works. Lots of image processing stuff. Machines with large numbers of non-shared-memory CPUs that are actually useful. Doing non-graphics things in GPUs.

The webcrap world is mostly churn, not improvement. Each "framework" puts developers on a treadmill keeping up with the changes. This provides steady employment for many people, but hasn't improved web sites much.

An incredible amount of effort seems to go into packaging, build, and container systems, yet most of them suck. They're complex because they contain so many parts, but what they do isn't that interesting.

Stuff we should have had by now but don't: a secure microkernel OS in wide use. Program verification that's usable by non-PhDs. An end to buffer overflows.


Old timer rant:

IMO Machine learning mostly doesn't work (yet) with a couple exceptions where tremendous amounts of energy and talent have made that happen. For example, image processing with conv nets is really cool, but the data sets have been "dogs all the way down" until very recently. And for the past few years, just getting new data and tuning AlexNet on a bunch more categories was an instant $30-$50M acqui-hire. Beyond a few categories, its output amuses and annoys me roughly equally.

But the real problem with ML algorithms IMO is that they cannot be deployed effectively as black boxes yet. The algorithms still require insanely finicky human tuning and parameter optimization to get a useful result out of any de novo data set. And such results frequently don't reproduce when the underlying code isn't given away on github. Finally, since the talent that can do that is literally worth more than its weight in gold in acqui-hire lucky bucks, it doesn't seem like there's a solution anytime soon.

Voice input? You gotta be kidding me. IMO it works just well enough to enter the uncanny valley level of deceiving the user into trusting it and then fails sufficiently often to trigger unending rage. Baidu's TypeTalk is a bit better than the godawful default Google Keyboard though so maybe there's hope.

GPUs? Yep, NVIDIA was a decade ahead of everyone by optimizing strong-scaling over weak-scaling (Sorry Intel, you suck here. AMD? Get in the ring, you'll do better than you think). Chance favored the prepared processor here when Deep Learning exploded. But now NVIDIA is betting the entire farm on it, and betting the entire farm on anything IMO is a bad idea. A $40B+ market is more than enough to summon a competent competitor into existence (But seriously Intel, you need an intervention at this point IMO).

Machines with lots of CPUs: Well, um, I really really wish they had better single-core CPU performance because that ties in with working with GPUs. Sadly, I've seen sub-$500 consumer CPUs destroy $5000+ Xeon CPUs as GPU managers because of this, sigh.

Container systems? Oh god make it stop. IMO they mostly (try to) solve a wacky dependency problem that should never have been allowed to exist in the first place.

The web: getting crappier and slower by the day. IMO because the frameworks are increasingly abstracting the underlying dataflow which just gets more and more inefficient. Also, down with autoplay anything. Just make it stop.


"The web: getting crappier and slower by the day. IMO because the frameworks are increasingly abstracting the underlying dataflow which just gets more and more inefficient. Also, down with autoplay anything. Just make it stop."

One of my favorite features now on my iPhone is "Reader View". Have a new iPhone 7, which is very fast, but some pages still take too long to load, and when it finally does, the content I want to read is obscured with something I have to click to go away, and then a good percentage of the screen is still taken up by headers and footers that don't go away. The Reader View loads faster, and generally has much better font and layout for actually reading the content I'm interested in.

All of which is to say, a lot of what web developers are working on today seems to serve no purpose other than to annoy people.


> This provides steady employment for many people, but hasn't improved web sites much.

Just a week ago I made the startling discovery that FB's mobile web app is actually worse than a lot of websites I used to visit at the end of the 90s - early 2000s on Netscape 4.

Case in point, their textarea thingie for when you're writing a message to someone: after each key press there is an actual, very discernible lag until said letter shows up in the textarea field. So much so that there are cases when I'd finished typing an entire word before it shows up on my mobile phone's screen. I suspect it's something related to the JS framework they're using (a plain HTML textarea field with no JS attached works just fine on other websites, like on HN), maybe they're doing an AJAX call after each key-press (?!), I wouldn't know. Whatever it is, it makes their web messenger almost unusable. (If it matters, I'm using an iPhone 4.)


FB does auto-complete for names, groups, places, and so on. So for each char it does a callback to see if it should display the dropdown. Using a Swype-style keyboard is a bit nicer because you're only adding full words at a time.


AFAIK they log everything you type, even if you don't submit it. So maybe that has something to do with it?


Hasn't improved websites much? I remember the days of iframes and jQuery monstrosities masquerading as web "applications". The idea of a web-based office suite would have been laughable 20 years ago.

My guess is you haven't actually built a real web application. The progress we've made in 20 years is astounding.


Another way to look at it is that, even after 20 years and huge investment from serious companies, we can still only build poor substitutes for desktop applications.

Don't get me wrong, it is amazing progress given the technology you have to fight. But in absolute terms it's not that great.


This is my view exactly. In the past 10 years I've developed both a web app [1] and a cloud-based native app [2]. Developing the native app was by far the more enjoyable and productive experience.

The great thing about native development is the long-term stability of all the components. I have access to a broad range of good-looking UI components with simple layout mechanisms, a small but robust SQL database, a rock-solid IDE and build tools - all of which haven't changed much in the past decade. Plus super-fast performance and a great range of libraries.

To put it in terms of the article: the half-life of native desktop knowledge is much longer than 10 years. Almost everything I learnt about native programming 10 years ago is relevant now.

Unfortunately, the atrocious deployment situation for native apps is also unchanged in 10 years (ie. "This program may harm your computer - are you really sure you want to run it?"). But on the other hand having a native app has allowed me to implement features like "offline mode" and "end-to-end encryption" that would be difficult or impossible in a web app. This has given my business a distinct advantage over web-based alternatives.

[1] https://whiteboardfox.com

[2] https://www.solaraccounts.co.uk


I really am very glad to hear someone writing in public what I've been mentioning to colleagues and all who would listen for the past few years.


I've never built a web application of any description. As a user, what are the improvements that I should be looking for that have been introduced over the past 10 years? It was longer ago than that that AJAX started getting big, and as far as I can tell, that was really the last key innovation for most apps: being able to asynchronously load data and modify the DOM when it becomes available. I'm aware of other things like video tags, WebGL, canvas, and such that allow us to replace Flash for a security win, but that seems like replacing a past technology just to get feature parity with what we had a decade ago.

Everything else seems like stuff that makes things better for the developers but not much visible benefit to the user. I can understand where a comment about the web not being much better would come from, on the scale of a decade.

Go back 20 years, and you're talking about a completely different world; frames, forms, webrings, and "Best viewed with IE 4.0". But if '96 to '06 was a series of monumental leaps, '06 to '16 looks like some tentative hops.


There's actually a fair number of new features in the web today that you couldn't do in 2006 - offline access, push notifications, real-time communications (without a hack that breaks on most firewalls), smooth transitions, background computation, the history API, OS clipboard support, accelerometer access, geolocation access, multi-touch, etc.

Few websites use them effectively yet, at least in a way that benefits the consumer (several are using them to benefit marketers). This could be because developers don't know about them, consumers don't care about them, or perhaps just not enough time has passed. XHR was introduced in 1999, after all, but it took until 2004 before anyone besides Microsoft noticed it.


My first install of Netscape Communicator had an offline mode, and one of my colleagues recently told me how they had fully working live cross-browser video conferencing in '98. Their biggest competitor was WebEx, which is still around.

I think many of us underestimate what was possible to do in browsers. What has happened is that these features have been democratised: what took them months to build I can now pull off using WebRTC in the space of a weekend.


My very first software internship ever was getting streaming video to work for the Air Force Center Daily at MITRE Corp, back in 1997. I did it with RealPlayer, Layers in Netscape (remember them?) and DHTML in IE.

The thing is - the polish matters. You can't do viable consumer apps until they actually work like the consumer wants, which is often decades after the technology preview. You could emulate websockets using IFRAMES, script tags, and long-polling back in the late 90s, but a.) you'd get cut off with the slightest network glitch and b.) you'd spend so much time setting up your transport layer that you go bankrupt before writing the app.


Thank you for the explanation. Some of those I've known about (but they didn't come to mind in my original comment), and some are definitely things that I would've taken for granted (coming from a native application programming background). Those are all features added to web standards and implemented in browsers though, right?


They're all in web standards. Browser support varies but is generally pretty good for most of them.

They're taken for granted in native application programming, yes, but the big advantage of browsers is the zero-cost, on-demand install. This is a bit less of an advantage than it was in 2003 (when new Windows & Flash security vulnerabilities were discovered almost every day, and nobody dared install software lest their computer be pwned), but there are still many applications where getting a user to install an app is a non-starter.


Most programs that existed before (let's be real... programs have been replaced with web apps now) defaulted to being offline only, or would sync. I miss those days.


> I miss those days.

Ditto. I like having a copy of a program that no one but me has access to modify, and I like that I don't have to rely on my ISP to use my computer. If I like a program, I don't want it to change until I choose to change it. I don't want to be A/B tested, marketed to, etc. I'd rather buy a license and be happy =)


> The progress we've made in 20 years is astounding.

And yet "open mail in new tab" in Gmail has been dead for at least a couple of years now. In fact, I'd say that "open link in new tab" is dead on most of the new web "applications", I'm actually surprised when it works. The same goes for the "go back with backspace" thingie, which Google just killed for no good reason.

Copy-paste is also starting to become a nuisance on lots of websites. Sometimes when I try to do it a shitty pop-up shows up with "post this text you've just copied to FB/Twitter", or the app just redirects me somewhere else. It reminds me of the Flash-based websites from around 2002-2003, when they were all the rage.


> And yet "open mail in new tab" in Gmail has been dead for at least a couple of years now.

Use the basic HTML version. It's worse in a few ways but better in most others. Including speed.


'Back with backspace' was changed to CMD+left-arrow presumably because a simple backspace can change the page unexpectedly for someone who thinks they are in a text field.


'Back with backspace' has been standard in the Windows file manager for as long as I can remember. Given that the file manager was the moral precursor to the browser of today it would have been nice to retain it.

Back with backspace!


Actually, it didn't go back on Windows XP. After using Windows 7 for 4 years, I am still not used to this 'new' behavior.


I sit corrected, thank you.


I know the reasons, I just think they're stupid. They've replaced one simple key-press with two non-intuitive ones. On my keyboard I have to move both my hands down in order to press the 2 keys, the backspace key was very easy to reach without moving my hands.

On top of that I actually have no "CMD" key on my keyboard, I have a "Ctrl" key which I assume is the same as "CMD" (I also have a key with the Windows logo which I had assumed it was the CMD key, I was wrong). KISS has gone out of the window long ago.


Alt + left arrow still goes back on ff and Alt + right arrow goes forward. It's been that way for quite a while.

The outlook Web app on the other hand sometimes blocks backspace from deleting a character, presumably to stop you inadvertently jumping back from inside a text field. This is only "on" when inside a text field in the first place, so if MS could do it, I don't see why Google's better engineers couldn't.


I've lost so many posts from hitting backspace without realizing I didn't have the input box focused I'm more than happy with this trade off.


Browsers have improved greatly, and new web development frameworks are necessary to make use of those improvements, but the actual process of building a usable web application doesn't seem that improved. It's certainly not any easier to achieve pretty much the same results.

> The idea of a web-based office suite would have been laughable 20 years ago.

What's laughable is how much effort has gone into rebuilding something in this platform with a result that is nearly the same (but worse) as what existed 20 years ago.


What kills me about a lot of the technology we use today is that few people, at least in positions of power are brave enough to pause every now and then and say, "WTF are we doing?" So many technologies live on because of so-called "critical mass," big investments, marketplace skills, and other things that are about anything other than the technology itself. These of course are mostly practical reasons and important ones at that, but at some point it becomes impractical to continue to cling to the practical reasons. IMO, the ability to do something truly different to make an advancement is often what separates innovators and intelligent people from the rest. When someone does try to act, the market essentially crushes them accordingly, making most efforts null and void, and dulling the senses and hope of everyone else watching, warning them not to try anything themselves.

x86 CPUs, web browsers, popular operating systems, and so on are all examples of this problem. At some point I really wish we could do something different, practical reasons be damned. It's sad that for as many cool, "new" things as we have, some of the core, basic ideas and goals are implemented so poorly and we are effectively stuck with them. This is one reason I hate that almost all software and hardware projects are so rushed, and that standards bodies are the opposite, but with only the bad things carried over. The cost of our bad decisions often weighs on us for much longer than anyone could imagine; just ask anyone who has designed a programming language or something major in software ecosystems.

As much as I enjoy all the new, shiny stuff, it makes me sad thinking about BBSs and old protocols like Gopher that represented the old guard, alternate routes, and the fact that we really haven't come that far. Overall things of course are a lot better, but in many ways I often feel like we're treating the symptoms and not the cause, or just going around in circles.

I could go on, but the rant would be novel length.


I find it super disappointing that Android, from an operating systems perspective, is so terrible. It's the newest popular operating system, but it's no better than Windows, iOS, or Linux. It's a mess of passable APIs, average security, etc. Rushed to market for, of course, practical reasons.

I don't see any opportunity in the future for any person or company to take all the lessons learned in the last 50 years and build something new that takes them into account.

Same with browsers; it's only now that we kinda know what a browser really needs to be, but there's no way to start from scratch with all those lessons and build a new kind of web browser. Any new one is always going to have to be what browsers currently are, building on what was already done.

I understand why, but it's still kind of sad.


20 years ago there was exactly one multiplatform office suite, and if you think StarOffice was better than the current incarnation of Google Apps I'm not sure how to respond to that outside of laughter.


I also kind of question whether or not a web-based office suite is truly multiplatform. For the most part, it doesn't interact with my desktop so it's really a single platform.

It's almost like saying Microsoft Word is cross-platform because I can RDP into a Windows machine from Linux. It's not really part of Linux, it needs a client to access an application running on a remote server. The only difference is how complex the client is.


Is the web browser the best technology to achieve "multiplatform" for an office suite? It makes sense from a purely practical sense but technologically it's pretty terrible.


Practicality wins pretty handily here. Technology will always improve, and web-based applications will become more and more feasible as a result.

The flip side of that equation is that poor practical choices never improve because there will only be more platforms to target.

If we made development decisions based on technological constraints alone, how is it supposed to improve?


I wrote pivot table functionality in XSLT and XML for IE6. Pretty much 10 years ago. It's not that you couldn't do it, it was that it wasn't worth it.

Your whole multiplatform thing is disingenuous because back then there really was only one platform: Windows. So you've conveniently forgotten about the Lotus suite, etc.

I also think you're vastly overestimating how far we've come in that timespan. V8 was pretty much 95% of the improvements, simply because you could do more than 1000 loops in JavaScript without killing the browser.

And yet today it is still harder to make a decent web app than it was in VB6 15 years ago.


I call BS. The progress that was made in the late eighties/early nineties was much more astounding. Going from character screens to fully event driven, windowed graphics mode was a much greater and more impressive change in a shorter time-frame.

At that time you had to learn a lot of new stuff in a short time too.


Well, in the early 2000s suddenly most of the computers in the world became connected to each other, and then after some time everybody got a powerful, networked computer in their pocket. I think this is pretty impressive too.


What exactly are you calling BS on? Nothing I said conflicts with your comments at all. It's not like progress can only happen once...


Have you tried to debug a Babelified React site with sourcemaps and whatever...?

I spent four hours just to find that the latest and greatest Express doesn't have the simple global site protection with a password (that it had in version 3), like with .htaccess - it is just not built in anymore. There were no elegant solutions.
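
The closest thing now seems to be rolling it yourself as an app-level middleware - a rough sketch (the credentials are placeholders, and a real version would want a constant-time comparison), which works but is hardly the one-liner it used to be:

    // .htaccess-style global password protection in Express 4+:
    // every request must carry HTTP Basic credentials or it gets a 401.
    var express = require('express');
    var app = express();

    app.use(function (req, res, next) {
      var header = req.headers.authorization || '';
      var token = header.split(' ')[1] || '';
      var decoded = Buffer.from(token, 'base64').toString(); // "user:pass"
      if (decoded === 'admin:secret') {                      // placeholder credentials
        return next();
      }
      res.set('WWW-Authenticate', 'Basic realm="site"');
      res.status(401).send('Authentication required');
    });

    app.get('/', function (req, res) { res.send('hello'); });
    app.listen(3000);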

There may be some marginal progress on doing the complex stuff, but doing the simple stuff gets harder and harder with each passing year.

Here is a simple question: is making a working UI now easier than with MFC circa 1999? If the answer is no, then that progress is imaginary.

Every new thing is strongly opinionated, doesn't work, and relies on magic. Debugging is a nightmare and we have layers upon layers of abstractions.

Please, for the love of Cthulhu - if any of you googlers, facebookers, twitterers read this - next time you start doing the next big thing, let these three be your guiding lights: the code and flow must be easy to understand, it should be easy to debug, and it should be easy to pinpoint where in the code something happens. All of a framework's benefits become marginal at best if I have to spend four hours finding the exact event chain, context, and place in the framework that fires that AJAX request.

/rant over


Do the web based office suites use these new libraries and frameworks? (React, Angular, Webpack etc) Genuine question as I thought that companies like Google use their own in house libraries like Google Closure which have been slowly built up over many years.


Google Apps use Closure, and after taking a peek at the source of Microsoft's online office apps, they appear to be using Script# or a similar C# to JS tool, as I see lots of references to namespaces, and lots of calls to dispose() methods.

iCloud's office apps use Sproutcore, which eventually forked into Ember (though the original Sproutcore project still exists).


Probably not Google Apps. But Angular is a Google developed framework.


In 2005, I worked on a CRUD app framework that used data binding driven from the server and did minimal partial updates of the web page, in a way that is much more efficient both in resources and development time than anything currently mainstream. That was the second version of something that used XML islands to do data fetching before XHR was introduced by Microsoft. IMO most mainstream JS dev is just barely catching up with what smart devs were doing in individual shops.
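
The core trick is tiny. A modern-syntax sketch of the general idea (not the original 2005 code; the endpoint and element id are made up):

    // Server renders an HTML fragment; the client just swaps it into place.
    // No client-side templating or model layer - the server stays in charge.
    function refresh(targetId, url) {
      return fetch(url)
        .then(function (res) { return res.text(); })
        .then(function (html) {
          document.getElementById(targetId).innerHTML = html;
        });
    }

    // e.g. refresh('order-table', '/orders?page=2');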


   Program verification
We really tried. It's hard, and verification of even simple programs easily hits the worst cases in automation. It will take another decade before program verification is digestible by mainstream programmers.


Spot on about machine learning. That's something that never worked in all my life, but it seems like some of these young Turks might be onto something there... I should probably sit up and pay attention to that one.


I think the fact that we're really starting to make native apps using Web technologies speaks to the progress made in webapps.

The whole HTML rendering pipeline with advanced scripting support is really an innovation in itself. The downside is speed, but that's where we innovated the most: VMs for JavaScript.

Hopefully Web Assembly will really show the improvement we've made.


I work on navigation software (http://project-osrm.org/).

Many of the algorithms we're implementing (or at least considering) only exist in recently published papers, or sit behind unpublished APIs. There have been huge improvements in graph route-finding algorithms in the last decade, so much of it is new, interesting and it's far from run-of-the-mill implementation.

I'm 38 - I spent the first many years of my career doing CRUD development, first in Perl (late 90's), then Java/PHP (2000's). I skipped the JS craze, and now I'm enjoying my work more than ever improving my C++ skills (last time I touched C++ was 98, modern C++14 is a huge improvement) and working on backend, specialized algorithm implementation. It's great!

Experience is the best teacher. Kids don't listen to their parents, new developers don't listen to the greybeards until it's too late. This is the way things are :-)


>> backend, specialized algorithm implementation

Sounds great - how does one go about finding that sort of work in the industry?


I went looking. After years of cruft work, I took some time off to pursue other interests. When I was ready to come back to software development, I took a good hard look at the things I actually enjoyed doing, then went looking for organizations that did that stuff. I was fortunate to have the breathing room to be deliberate in my search.


I dropped a few pins and I'm genuinely impressed how fast it is.


A history of new things geared toward web app developers, starting with relevant popular technologies:

Late 1970s: microcomputers, explosion of BASIC and ASM development

Early 1980s: proliferation of modems, BBS's become big, Compuserve becomes big- people able to read news online and chat in real-time (but not popular like much later). software stores, software pirating, computer clubs, widespread use of Apple II's in schools. Microsoft Flight Simulator released in 1982 is first super-popular 3D simulation software.

Mid-1980s: GUIs- Macintosh 1984 based on ideas from Xerox PARC.

Late 1980s: Graphics had more colors, more resolution, faster processors. So- cooler games. File servers. 1987 GIF format, 1989 GIF format supporting animation, transparency, metadata- not that popularly used though- was a compuserve thing.

Early 1990s: Internet, realistic quality pictures, webpages/browsing, global file servers. Mosaic web browser. Most pages involved horizontal rule dividers that might be rainbow animated GIFs. Bulleted lists. Under construction GIFs were popular. Linux. JPEG format. Netscape. Blink tags.

Mid 1990s: Windows 95 (with Winsock). IE vs Netscape. IE had marquees. VBScript. (Mocha->LiveScript->)JavaScript. Applets. Shockwave. WebCrawler search. Altavista search. OOP pretty solidly how you should program now with C++ having been around for a while and Java slow but write once/run anywhere and OOP. Apache webserver. CGI: can email from webpage.

Late 1990s: ActionScript. Google search. CSS. Extreme programming. Scrum. JSP. Some using ORM via Toplink. Java session vs. entity beans. IIS. Java multithreading. Amazon gets patent for 1-click ordering. AOL instant messenger. PHP.

Early 2000s: ASP. .Net/C#. Hibernate ORM (free). Choosing between different Java container servers.

Mid 2000s: Use CSS not tables. Rails.

Late 2000s: SPA and automatic updating of content in background via Ajax. Mobile apps. Mobile web. Scala. Cloud computing start. VMs. Streaming video mature. Configuration management via Chef/Puppet.

Early 2010s: Cloud computing standard. Container virtualization. Video conferencing is normal- not just big company office thing. Orchestration of VMs more normal.

Mid 2010s: Container Quantum computing starts at a basic level (not important yet).

Note how I can't really think of anything recent that has to do with new things in webdev.


Dejavu:

> Early 2010s: Cloud computing

1960s: Client/Server Architecture. Big servers and small clients.

> Mid 2010s: Quantum computing

before 1950s: Analog Computers

https://en.wikipedia.org/wiki/Analog_computer

There is nothing new under the sun. Analog computers passed away because they were not usable. OK, quantum computing may be different, but its practical use is also questionable.


> Dejavu: [...] 1960s: Client/Server Architecture. Big servers and small clients.

This is right and wrong at the same time. Right, because the Cloud reuses some basic concepts from the mainframe era (e.g., virtualization), which had been neglected for some time. Wrong, because writing your application to run efficiently on a mainframe is totally different from writing your application to run efficiently on Cloud infrastructure. Also, there is no such thing as small clients anymore; mobile apps and Web frontends are nowadays as complex as the usual 1980s fat-client software.

IMHO this is a very good example for technology not making circles, but evolving in spirals.


> there is no such thing as small clients anymore

This is also right and wrong :-) Right regarding your perception, wrong regarding relative power. 1960s clients were small compared to today's small clients. However, 60s servers were also small in relation to cloud servers. Today our small clients provide browsers and stuff like that, but they aren't useful without servers. They can't run top-notch 3D games without high-end servers. The third wave of C/S will be in the area of A.I., with (small) clients which will possibly be as powerful as today's cloud servers.


Trends:

Ajax, Long polling, WebSockets

jQuery/MooTools/Prototype, Bootstrap/CanMVC, Angular/React

Javascript debugging tools, profiling, 60fps, responsive pages, AMP

RSS, Web Push, WebRTC

HTTP Auth, Cookies, oAuth, new social protocols

Perl, Java, PHP, Node.js, Go


Thanks! You caught some ones I missed, so here are some edits and responses to your list:

1. I didn't mean to put "container" in front of quantum computing.

2. I didn't mention the history of certs or encryption, as I think that security is often a feeling rather than a reality. I'm not sure that the "HTTPS everywhere" plugin and then movement in the early 2010s was innovation more than it was tightening up security after Firesheep.

3. Yes, I should've included WebSockets over long polling in Early 2010s.

4. Yes, RSS mattered- 1999/Early 2000s.

5. I probably shouldn't have mentioned OOP, etc. as I didn't mean for methodology to matter, since it doesn't matter to users. Similarly debugging tools don't matter for innovations that users see.

6. Yes, fluid layout, grid layout, and responsive design in Late 2000s (though Audi had responsive in 2001).

7. jQuery/MooTools/Prototype, Bootstrap/CanMVC, Angular/React - none of the implementation details of these things matter. The only things that matter are how things appear to the user- like whether a page has a clunky refresh or smooth transition and whether things update automatically when they are changed elsewhere. Also, Applets, Flash, frames, and the move to JS all screwed the visually impaired.

8. Cookies mattered because they were used to track users in ways they didn't want to be tracked. People disabling JS for a while mattered. US announcing Java was insecure mattered. Flash and Flash being abandoned mattered.

9. Forgot to mention frames in Mid/Late 1990s.

10. As you mentioned oAuth, SSO becoming a big deal in the Late 2000s with Facebook, Google.

And I should have mentioned blogging, microblogging, move of much of the web to Facebook, Tor/private web, peer sharing and impact on music industry as well as impact on the value of well-created data and applications vs. the value of constantly creating data and making data available and clear.

Despite all of the things I missed, the point is that the things that really matter aren't new libraries and frameworks- they are technology and how the world uses it. If a user can't tell a positive difference between something you were doing 5 years ago and today, then you didn't really innovate.


Not exactly "new" but: being able to spend ~$30 on a system powerful enough to do everything faster than you could years ago for thousands of yesterday's dollars. The feeling of not being limited by computing power for everything is incredible. Vice versa, if you feel like you aren't imaginative enough to utilize the modern-day power available, it's a great time to push your mind harder to take advantage of it.


Anything to do with computer vision controlled navigation is a good source of new challenges and new discoveries / inventions.


It's not long before you start thinking "f*ck the current new thing, what's the next new thing going to be? I'll try and spot that, then jump onto the bandwagon earlier."

The new devs are basically doing this by default. They are early adopters on the hype cycle and so leverage it for remuneration, since it disrupts supply every time.

Play the game old bean, it hasn't changed :)


MVC is from the 70s.


I get where the guy is coming from, I'm right there as an old guy.

On the other hand, I think there is a bit too much fatalism in the article. Sometimes the kids are being stupid, and they need to be told so.

The vast majority of web apps could be built in 1/10th the code with server-side rendering and intercooler.js. All this client-side crap is wasted when you are trying to get text from computer A into data-store B and back again. It's the front-end equivalent of the J2EE debacle, but with better logos and more attractive people.

And people are starting to wake up[1][2]. It's up to us old guys to show the way back to the original simplicity of the web, incorporating the good ideas that have shown up along the way as well as all the good ideas[3] that have been forgotten. Yes, we'll be called dinosaurs, out of touch, and worse.

Well so what? We're 40 now. And one of the great, shocking at first, but great, things about that age is you begin to really, truly stop giving a fuck what other people think.

Besides, what else are we going to do?

[1] - https://medium.com/@ericclemmons/javascript-fatigue-48d4011b...

[2] - https://hackernoon.com/how-it-feels-to-learn-javascript-in-2...

[3] - http://intercoolerjs.org/2016/01/18/rescuing-rest.html


Old guy here too. When I was a younger guy and first saw the web, after making a few web pages I naturally wondered "how can I program this?" At the time, the old guy programmers were old mainframe / terminal architecture guys, where a dumb terminal (teletype or monochrome) sent text to and from a smart "central processing unit". It was natural for them to follow server-side patterns, and invent PHP, NetObjects, ColdFusion etc. But as I'd started out in the eighties with BASIC, Pascal, Delphi, VB etc. writing stuff that ran on these "personal computers" I tended to think more about writing UIs that ran where the user actually is. Although I dabbled with various server techs, my real interest was on the front end, with Applets, ActiveX, DHTML, WebClients ... all ultimately flops of course. Then Flex RIAs, which ruled for a couple of years but were killed by mobile.

I mainly work with Angular now; React didn't quite float my boat - a functional approach without a proper functional language. Elm is much more interesting, but the work they're doing on pairing React with OCaml / Reason could change that.

In short, I don't have a problem with keeping up with the new stuff - I'm glad it's finally catching up with where I was years ago:)


I did a lot of UI client/server back in the day; VB, Powerbuilder, Delphi and I remember being able to bang out an app in days. Then the web came around and turned UI development into a chore. Recently though, I started playing with Elm and it makes UI development fun again.


I can vouch for Intercooler. We're rewriting large parts of our app (it used to be a complex Flux beast) and it is now way more maintainable and indeed around 1/10th the code. We now keep most of our app state on the server, instead of spread across client and server.

Of course, it is not the end-all-be-all: it solves simple interface problems, those that shouldn't require 200MB of JS dependencies to solve. Once the interface gets complex enough, you should use JS. We've still got a couple of JS components, though.


Excellent. Intercooler is simple enough that if it goes awry I will just write my own, but if I don't have to then great.

But to further your point, intercooler is just a tool rather than an ideological shift in how we execute web applications.

The reason I see the whole front end JS infrastructure mess as unintuitive is precisely because my needs are served with back end code and a sprinkle of Ajax.

If I were building a complex SPA like an in browser photoshop, I might see more use in the complex ecosystem and try and tackle it. But, from a not-so-outside view it still looks like a mess.


Holy shit I can't tell you how happy that makes me to hear.


Posts like this are why I love HN. Intercooler looks cool and simple - simple enough that I could just re-write it. Angular is big enough that my eyes glaze over thinking about reading into it.


History repeats itself. I'm also "old", and when I saw the first presentation on Angular many years ago, the google engineer started by instantiating the controller from the view. So blatantly wrong, yet no one reacted. I saw the same sh*t go down with JSP, and thought to myself that I couldn't be bothered with going through the same thing twice. So I left the presentation and never looked back. That decision served me well; very happy that I'm off that bandwagon. Still love new tech though, so I'm not backwards in that sense.


> the google engineer started by instantiating the controller from the view

Sort of playing devil's advocate here, but what did you not like about that approach? I'm guessing he should have used dependency injection? Am I missing something else?


The controller is supposed to control the view, that's why it's called a controller. The controller sends data to the view for presentation, reacts to events from the view, and updates the model accordingly. This way the view becomes quite passive; it only knows how to present data and where to send events. You separate the display from the driving of the display.


Speaking generally (i.e. not about Angular), your main app should instantiate the controller, view, and model, and (constructor-)inject the model and view into your controller.
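
In code, the wiring looks roughly like this (a minimal sketch with made-up class names, not Angular or any particular framework):

    // The app composes the pieces; the controller receives its
    // collaborators through its constructor.
    class Model {
      constructor() { this.items = []; }
      add(item) { this.items.push(item); }
    }

    class View {
      render(items) { /* draw the list */ }
      onAdd(handler) { this.addHandler = handler; /* wire up a button, etc. */ }
    }

    class Controller {
      constructor(model, view) {
        view.onAdd(function (item) {   // the view only reports events...
          model.add(item);             // ...the controller updates the model...
          view.render(model.items);    // ...and tells the view what to show
        });
      }
    }

    // The main app, not the view, does the instantiating:
    new Controller(new Model(), new View());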


I think you misunderstand me. I just wanted to see how it worked... intercooler is simple enough to dig into. Angular is 4+ layers of abstraction deep now. I'd only dig into Angular's code for work. I can dig into intercooler for fun. Then very likely just write it myself.

But smaller code like this that is well factored is great for learning and picking up new tricks and style.


This is the first time I've heard of it and it's elegant enough that I want to start working with it.

Here's the flipside. This is way easier for me to grok than Angular but which one has way more of a presence at companies and job reqs out there? I'd probably get screened out of a lot of resume filters.


I'm a pretty senior backend guy so I'm not worried about my resume. This helps me whip up quick tooling for ops and to understand discussions and nuances from the JS devs I work with.


Thank you for mentioning intercooler.js. After skimming through the docs for five minutes I am sure I will use this regularly from now on.


After skimming through the docs, my mind went all the way back to ASP.NET Web Forms days :) This model, unsurprisingly, brings its own problems when things get complex.


I don't see the obfuscation of the client and server separation that .NET Web Forms has.

It is declarative, just like .NET, but I don't think it will bring the same kind of problems. What will probably happen is that when things get complex, you'll have to manually write the complex parts - that's a very sensible "problem" in my opinion.


Bingo.


Yeah, there definitely isn't a silver bullet. However, intercooler stays a bit closer to the metal than the old ASP.NET stuff, so it doesn't have the opacity problems that came with it.

You can think of it as a declarative syntax and a few extras layered on top of the standard AJAX function call.
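
Something like this, roughly (a sketch based on its docs; the endpoint here is made up) - the server returns an HTML fragment and intercooler swaps it into the target:

    <!-- Clicking the button POSTs to the server; the returned HTML
         fragment replaces the contents of #messages. Requires jQuery
         plus the intercooler script. -->
    <button ic-post-to="/messages" ic-target="#messages">Send</button>
    <div id="messages"></div>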


I didn't know intercooler existed, so thanks for the link. For anyone else: it's worth reading about, and it actually has a very well-written introduction, guide, and examples.


Yep, the docs are very good, and I've been enjoying catching up on the blog archives, too, for interesting project background and philosophy.


Great to hear!


Yes! As a still-young-ish guy who's just old enough to remember when things were slightly saner, please let me encourage you to get out there and make it happen. Don't let the Javascript fatigue apologists make you feel like you're too old to have an opinion.

So Intercooler is one of the sane ones, eh? Apply to talk at a JS conference and tell us about it. If it's somewhere near me, I'll do my best to be there.

The only reason why things have gotten so out of hand is because the people without the experience and wisdom are doing all the talking.


I've applied to so many JS conferences it makes me ill to think about it. The only slot I found (outside of my local Sacramento groups) was at UberConf in Denver, and the turnout was small.

Turns out the "You are all doing it completely wrong, stop writing so much Javascript" message is a tough sell at Javascript conferences. ;)


Hey, Rich Hickey got to tell a bunch of Rubyists at RailsConf to stop using objects.


Ruby folks are more open minded than JavaScript folks.

_ducks_


+1 for intercooler.js

So easy and quick to get going with it.


It is plain and simple, kids. I'm 52 - been programming professionally since the 70's, when I started writing C code and getting paid for it in 5th grade. Our profession is writing glue code, and how it is done and what hoops are jumped through simply do not matter: all that matters is that the final shipping product, widget, or logical doodad works for the immediate marketing moment.

I speak from enviable experience: game studio owner at 17, member of original 3D graphics research community during 80's, operating system team for 3DO and original PlayStation, team or lead on 36+ entertainment software titles (games), digital artist, developer & analyst for 9 VFX heavy major release feature films, developer/owner of the neural net driven 3D Avatar Store, and currently working in machine intelligence and facial recognition.

Our profession is purposefully amateur night every single day, as practically no one does their computational homework to know the landscape of computational solutions to whatever they are trying to solve today. Calling us "computer scientists" is a painful joke. "Code Monkeys" is much more accurate. The profession is building stuff, and that stuff is disposable crap 99% of the time. That does not make it any less valuable, but it does render quite ridiculous the mental attitude of 90% of our profession.

Drop the attitude, write code freely knowing that it is disposable crap, and write tons of it. You'll get lazy and before you know it, you'll have boiled down whatever logic you do into a nice compact swiss army knife.

And the best part? Because you've stepped off the hype train, you'll have more confidence and you'll land that job anyway. If they insist or require you to learn and know some new framework: so what? You're getting paid to do the same simple crap over again, just more slowly with their required doodad. Get paid. Go home and do what you enjoy. This is all a huge joke anyway.


I can identify with this sentiment, as completely and fully as someone with close to 15 years in this industry can.

It's all just the same as Mary in accounting making spreadsheets, or your boss making powerpoint decks. The fact that our product requires compilers and runtimes and virtual machines is just a footnote. It's just as disposable, most of the time. Why not enjoy it and get rid of the rampant NIH/New-and-Young-Is-Best attitude?

Just build cool shit, and ship it.


> "Get paid. Go home and do what you enjoy. This is all a huge joke anyway."

Would look great on your employer's (if you have any) "Our team testimonials" page :)


You're totally right but then people are faced with the possibility of an increasing number of code monkeys graduating from college every year.

Then the question becomes: how do you keep your job year after year when the number of code monkeys just keeps on increasing? Some of them are shitty, but a lot aren't.


It seems programmers are one of the most in-demand types of worker right now. I understand the fear of being pressured out of a job... but what about every other sector in the economy?

I think before our profession's sector becomes threatened, a whole lot of other sectors are going to blow up in a mushroom cloud of automation/obsoleteness (caused perhaps by us!) and force the economy to rethink jobs/food/shelter/basic necessities for everyone.


Baby boom's done and gone sir, now everyone's up to changing the world with an ad supported smartphone app or some grammes of silicone.


Do you think it's about this, and the "scalability" of software confuses one part of the manufacturing (copy-paste by distribution, through people installing from the internet) with another part of it (copy-paste by programmers copy-pasting code to "manufacture" more apps)?

It seems like you have introduced a new notion of manufacturing as part of coding which I like, and is a way to make yourself disciplined; while at the same time given that coding has potential for "research" by individuals with not much means the manufacturing process can be itself improved greatly (new libraries or techniques or things like when WWW happened but needs externalities like ARPA hardware prior work), and that's what I think makes it exciting. We are basically teaching machines to understand language that is intuitive to us so we can make it do things people want, and new things people want that weren't possible earlier or were only done by people earlier can often only be gotten to by Kuhn-esque revolutions in the philosophy. We started with code, and either more people code or computers understand people more, or both and we meet in the middle.

To go with the trend: this feeling of "nothing new" in CS and engr-using-CS is nothing new; it's how science and engr-using-science has seemed to work if you look past the programmable machines. [1] "Normal science" cycles with paradigm shifts have happened in all sorts of engineering-influencing research processes (often called "science," at least part of it). It's just that in CS the period-length of revolution-normal cycles is quick because of, among many things, increased empowerment of individuals. And to the extent that this empowers individuals who can afford programmable devices and learn somehow (see the many kids in many parts of the world who don't have much means), or have their friends teach them in social settings if unempowered (visit that rich friend's house, play video games, and accidentally learn from them to code), that is a normative-ethics motivation to "contribute to society," and often even empathy-ethics works out, because kids coding can remind you of you coding as a kid or something.

The hope is that kids socialize across, and often in spite of, traditional barriers and spread the knowledge / empowerment; and while teaching them (fact-leaning) we learn from them about our own personal biases (which we can still teach, but why not program in perspectivism). In the end some may have personal ethics or motivation structures that cause them to weigh that sort of thing vs. personal allegiance to a status quo, or through chance encounters happen to do it more.

Sorry about the philosophy rambling, I just sort of find it fun and similar to coding.

[1] https://en.wikipedia.org/wiki/Normal_science


I started programming when I was 7, I'm 45 next month :)

The one thing in the programming world that is almost 100% applicable to almost every article like this ( and many other topics ) is..... it depends.

I'm fortunate in that for most of my career I have spanned many technologies, from embedded systems to the latest crazes on the web. Mostly what becomes redundant is language syntax and framework. If your programming career is largely centered around these then you become redundant pretty quickly (or super valuable, when critical systems are built with them and then need maintenance forever).

Frameworks come and go so if you spend a lot of time creating solutions that shuffle data from a DB to a screen then shuffle data back into a DB.... then a majority of your programming skills will become redundant relatively quickly. ( maybe a half life of 4 years? ). But often when you are doing this, the real skill is translating what people want into software solutions, which is a timeless skill that has to be built over a number of projects.

If you work in highly algorithmic areas, then not a lot of your skills become redundant. Though you may find libraries evolve that solve problems that you had to painfully do from scratch. However that deep knowledge is important.

Design: the more complex a system is to engineer (beyond what is provided to you via a framework), the more likely you will have skills that won't go redundant. Design knowledge is semi-timeless. My books on CGI programming circa the mid nineties are next to useless, but my GOF Design Patterns book is still full of knowledge that anyone should still know. OOSC by Bertrand Meyer is still full of relevant good ideas. My books on functional programming from the 80s are great. The Actor model, which has its history in the 70s, is getting appreciated by the cool kids using Elixir/Erlang.

Skills in debugging are often timeless; I'm not sure there's any technique I'd not use anymore. (Though putting logic probes on all the data and address lines of a CPU to find that the CPU has a bug in its interrupt handling is not often needed now.)


spend a lot of time creating solutions that shuffle data from a DB to a screen then shuffle data back into a DB.... then a majority of your programming skills will become redundant relatively quickly

This is kind of astonishing, isn't it? When this kind of "CRUD" data bureaucracy has been going on for decades. There's no fully general solution yet? We're doomed to keep reinventing it regularly?

Debugging is really one of the core skills of programming that should be explicitly taught.


The tech industry has certain weird persistent prejudices. One of them is a prejudicial attitude to "CRUD" despite its importance and relative complexity compared to how it is perceived. Another is a fetish for unnecessary code optimization and scalability.

The obsession for newness is another one, obviously.


> There's no fully general solution yet?

The power of software is that you can create specific solutions. There are plenty of fully general solutions out there but they're not tuned to exact needs of the user. For that you need programmers and programming.

Things have actually gotten a lot better but, as the technology improves, so do the demands of users. I do a lot of CRUD but it's all highly specialized. An hour of my time coding a very customized way of shuffling data from database to the UI will save hundreds of people hundreds of hours.


Good point about "super valuable" maintenance skills. Some well paid COBOL programmers still, I hear.


Hm. The author works for a web/mobile development agency and uses React Native and GWT as examples of the new and the old, respectively. I hope it isn't news to anybody here that this sort of work is a race to the bottom and has such turnover precisely because it's mostly being done by junior developers. Linux systems programming arcana, for instance, doesn't disintegrate so quickly as the ten years the author cites. That's why, after getting into the industry as a frontend web dev, I will only do that sort of work now as a last resort to pay the bills (the other reason is because it's easy/boring as hell, apart from the greater opportunity for mentoring). Doing that sort of work now feels like I am sabotaging my career.


Yes. I'd love to hear some people's journeys to advance their career and move from web development into more specialization. Off the top of my head, there are three types of specialization: for money, for domain knowledge, and for technical skills.

For Money: Not necessarily any more satisfying or churn-proof than web development, but definitely more lucrative. Salesforce/SAP/Oracle consultant, mobile app developer, SEO, becoming a web dev consultant, remote work arbitrage, or just becoming a manager.

For Domain Knowledge, you are becoming a computational {Biologist, Geologist, Financial Quant} rather than a pure coder. Fields like modeling fluid dynamics or finite element analysis for GE or GM; or modeling petroleum reserves for Shell; or bioinformatics for BigPharma; or modeling risk for banks. You probably have to go back to school for another degree in that subject or get the equivalent on-the-job training.

For Technical Skills, you are specializing in a sub-field of practical coding/IT, working at a software company. Like Data Science/Machine Learning (distributed computing tools and knowledge of basic learning models), Information Security (e.g., MITRE), Embedded Systems (C++/Assembly), or high-performance systems (e.g., Akamai; C++/networking).

Curious about anyone who took the step to specialize, or just started out in a more specialized field, and their thoughts; and whether I am missing any potential specialization options!


I started dabbling in compilers early on and just got increasingly into them and their associated tools. It's still fun watching a new optimization rumble into life:

https://github.com/dlang/dmd/pull/6176

(Yes I know some other compilers do that already.)

It's fun knowing how it all works from the source code to the executable.


Well, I learned to code in order to do bioinformatics. I didn't start out as a programmer looking to specialize.

I agree that it seems that programming + X is often a much more powerful combination than programming or X alone. But I would say to anyone starting out that it is better to head towards X, get the formal qualifications for it, and learn programming on the side (or if in college, get the major in X and a minor in CS).

The reasons are that programming is relatively easy to learn outside of classes, and programming itself really doesn't require formal qualifications.

For me, biology has been orders of magnitude harder to learn than coding. With coding, you can learn by doing, and you get immediate feedback from the compiler/interpreter. Not so with biology. You can be mistaken for weeks/months/years and never realize it until you read just the right article. And often, domain-specific knowledge like biology seems to be composed of thousands of tiny details without too many general principles, whereas if you learn a few basic principles for programming, you can learn pretty much any new language or framework easily.

I don't know if this is generally true of all specializations, but if so, it would behoove someone to get started on learning the specialization-related information ASAP, because that will take much longer to reach proficiency than for programming.


Thanks xaa for your thoughtful comments. I've taken a few MOOCs on bioinformatics. Biology certainly seems much harder to get feedback on, esp. if you work in the wet lab, but I'm curious what particulars of bioinformatics are, in your opinion, harder to learn (assuming that a computer person has taken Biology 101, is aware of DNA to RNA, transcription, translation, mutations, variants, alleles, genotypes, haplotypes, basic cell biology; and maybe, let's say even a step up, is familiar with the basics of processing NGS files, processing assembly, variant calling, RNA-Seq and ChIP-Seq differential analysis).

Is it the statistics (finding the right statistical inference test and knowing the pitfalls of your models)? Or is it understanding the underpinnings of the biology behind a particular pathway you're studying (e.g., knowing how to perform or iterate on the wet lab experiments, doing in-vivo or in-vitro experiments)? Thanks again for your comments.


> is it understanding the underpinnings of the biology behind a particular pathway you're studying

Yes, this is it. You are right that the really general basic concepts (what is DNA? how does transcription work?) are not all that hard to learn. And it is certainly part of the core job skills to know, e.g., how to process sequencing data and how to use statistics. But in the real world, bioinformaticians work in collaboration with wet-lab biologists.

So, a typical project for me might look like this: collaborator comes in and tells me that his lab studies a particular protein X that operates at the presynaptic terminal in neurons. And they are collecting data about how some perturbation to X affects other cellular systems. The data will often be some combination of sequencing/array data and more specific wet-lab experiments (western blots, electrophysiology, etc).

So, in that case, to really do my job well, I have to go back and read in depth about the presynaptic interface, the major proteins and the mechanisms that work there. I may already know the generalities, but to really be able to interpret the data correctly, I need to know the details.

Now, imagine doing this process 10-20X a year for different collaborations and totally different biological systems. It's very hard to keep up. Now, it is certainly possible to be the kind of bioinformatician who is "give me your sequencing data and I'll give you the DE genes back", treating everything as a purely technical problem. But these kinds of bioinformaticians are not as sought after, because the wet-lab biologist doesn't want Excel spreadsheets full of lists; they really want to know "what do my results mean?". And to answer that really requires both the bioinformatician and the wet-lab biologist to understand what the other is doing at a more than superficial level.

In short, the hard part isn't learning the things that are used in every project, the hard part is learning the domain-specific information that is relevant to each individual project.


I'll add people doing COBOL, C, and C++ to that list. The rate of change is slower. People are more conservative about what they're doing. Reuse of proven components is preferred. Knowledge doesn't have to become worthless so fast if it was worth something in the first place.


> Doing that sort of work now feels like I am sabotaging my career.

Heh, I feel that way and I'm 22. Unfortunately the only language I know is JavaScript and the stuff I am interested in and learning isn't quite there yet in terms of jobs.


Twentysomething year old here, so you know, not like you don't have a point but:

>the other reason is because it's easy/boring as hell

I legitimately think front end development is not only very fun, it can have some really challenging aspects. I wouldn't think there's any programming challenge that is inherently easier because it's on web as opposed to something else.

Of course I bet there are domain-specific tasks (distributed programming and embedded hardware, to list some) that are likely harder than web development. But I guess what I'm trying to say is: I don't appreciate you calling what I make a living off of, and spend quite a few hours studying weekly, 'easy/boring as hell'.


35 here. I completely understand why you are on the defensive about something you are passionate about. Try to see our perspective though; web dev, we've been there. In my case, you name it, I've done it; nobody gave me a T-shirt, though they all asked for unpaid overtime.

At some point you realize that it's a race to write more lines of code with every iteration, and that employers will gladly abuse your passion in many ways to get cheaper output from you.

If I may take the liberty of giving you some advice here: go for the lower level/more specialized stuff as your career progresses. Not only does it change at a slower pace, but it is paid 2-4 times more here in Toronto. If you care that you studied this and make a living off of it, try to focus on technologies where your hard work will stay relevant longer.

As for me, I love what I'm doing. Currently it's a mashup of cross-platform C++14, a .NET Core REST API, Elasticsearch, Postgres, and a bunch of connecting glue. Our resident JS expert is busy for the next few weeks, so I'll probably learn TypeScript and go up my stack to provide an interface.

Smaller shops are tons of fun.


> go for the lower level/more specialized stuff

This is my own strategy, though domain specialised rather than low-level.

I don't want to do "business" or project manage.


I was only speaking for myself. The challenges got old and most of the work felt like framework jockeying; it was work I could do in my sleep apart from the sweat of learning new tools. I won't deny that there are some exquisite challenges in frontend development, but most of the paying work does not involve them and is pretty CRUD-oriented. Also I was referring to web development in general, not just frontend.


I can't get over the endless yak-shaving in front-end dev. Fighting with browsers that all do things slightly differently, a terrible programming language, an abused, crufty markup language, and a baroque styling language on top of that is not my idea of a good time. Not to even get into the node/bower/grunt/gulp/webpack/babel shitshow of a compile and build process that it seems to be trending towards.

Working on the backend building APIs, optimizing data flows and managing backend services is more fun, and wildly more productive, if less whizz-bang and shiny.

Speaking as a 20-something who frequently forgets he's not a forty-something...


> endless yak-shaving

I think this is the issue: it's not that the problems aren't hard; rather, you aren't solving new problems, but trying to figure out how the existing tangle of software can be made to do what you want...


Well, sorry, but... it is. Maybe you find it interesting - that's subjective - but there's no real technical challenge in front end stuff. You're not solving hard engineering problems; you're pasting together libraries other people wrote on top of libraries other people wrote (and on it goes) and searching Google to figure out why your opaque stack doesn't seem to be working.

Developing a good UI is difficult, no question about it, but not for technical reasons. Whether you resent that or not doesn't make it any less true.

>I wouldn't think there's any programming challenge that is inherently easier because it's on web as opposed to something else.

Not because it's on the web, because front end work doesn't require anything more than knowledge of your toolset and some design sense.

It's nearly all "hey, build a UI with some CRUD functionality which is essentially the same as the 100 you've built before, but for this special snowflake customer." Bleh.


Honestly when I see what my friends who design airplanes and particle accelerators are doing, I feel like we're all kind of fucking around in software engineering, frontend or not.



Heh, true. Compared to real engineers and scientists we're just a bunch of monkeys re-inventing the wheel every 10 years (with each iteration being worse than the prior one).

But from time to time one of us monkeys gets lucky and changes the world.

So I guess it's a risk/reward thing :)


I agree, most are.


"but there's no real technical challenge in front end stuff."

Oh boy, you are so wrong! As an old fart who has spent a number of years building UIs, I can tell you, this is hard! Information layout, controls, flow - it can be tangled into a total CF, or it can be seamless. You are not pasting libs on top of libs - that is the job of a monkey, front or back end alike. Normal devs start their day by talking to end users and listening to their pain. Then they spend the rest of the day trying to alleviate it. Monkeys spend their days thinking about how they can add to the user's pain by pasting on more layers of crap.

Engineering is an important part of back end, front end, or cleaning the toilets. It is not what you do, it is how you do it.


You're describing a task which is more frustrating than technically challenging. Nothing you described requires a high degree of intelligence or training, only time. Of course some are better than others at it, but that's true of anything.

Think: designing the CV systems for self-driving cars (since that's a hot topic at the moment) vs. designing a UI which works seamlessly across browsers. One is a real, honest-to-God engineering challenge; the other is an exercise in patience and perseverance. There are many more people in this world who could pull off the latter than could the former.

If you truly, honestly believe that UI dev work is technically difficult when measured against the rest of the engineering world, then I can't imagine you've been solving hard problems throughout your career. UI layout is not "hard" and it's certainly not engineering, nor is 'cleaning toilets'. Of course creating UIs comes with its own set of challenges, but I'm specifically talking about technical difficulty.


You can jump off the train and do bad UI at any moment. But UI always, always, gets hard when you start caring. That's not a measure of whether UI is harder than CV, it's a low-pass filter on what the market will value.

Most of the individuals making CV demos are not making unfathomable genius-level demos: they are taking an existing library and bolting on a simple application to it. And most such demos are flimsy and not commercially viable, because the technology needs more massaging at a deeper level than the author is capable of in order to fulfill the desired marketing promise.

Meanwhile, simple bolt-on UI tends to be good enough. Not because it's fine as-is, but because the user can accommodate for its brokenness without being able to express what exactly is wrong with it. The product can ship and create value, albeit not as much. Yet, a UI dev who wants the best possible experience has to have the same attention to detail up and down the stack as the CV dev who wants their algorithm to perform well along every metric. UI innovations are understated because they oft seem obvious or unsurprising in retrospect, but they do come along every so often, and often in tandem with the developments elsewhere - these days, AI algorithms are increasingly intertwined with the interface in a very direct fashion, when one considers voice processing, predictive text, or other such features.

And yet, if you should aim to achieve better UI, some naysayer will come along and proclaim that you have made something "overengineered" and should "just use a standard toolkit like the rest of us."


By UI design I do not mean cross-browser compatibility, but rather an understanding of the underlying data and operations on it, which involves familiarity with the business domain. This is hard. Putting stuff on the screen is not hard. I am OK with tables and gifs. With all due respect, your contempt for people who build UI (and clean the toilets) and misunderstanding of general principles of engineering are sad (pardon my microaggression).


> putting stuff on the screen is not hard

> your contempt for people who build UI

You've carved out your own definition of frontend dev; it doesn't always require business/domain knowledge - many just put stuff on the screen.

Incidentally, I found this hard - not in a math problem way, but in a costing a lot of time and patience way. I distinctly recall spending a couple of hours trying to align some div after being requested to do so. I strongly dislike CSS.


Oh don't even get me started on front end development. A few times I've actually had the opportunity to make some really brilliant JavaScript optimizations only to have any performance gain made totally irrelevant by business loading up the page with a ton of totally non-performant ads.


This hits home way too hard.


> You're not solving hard engineering problems

What, in your mind, makes an engineering problem hard? I've certainly had to dig out CS algorithms and 'clever' applications thereof to reach desirable performance out of some custom widgets in front end projects. I don't know if that can be considered hard – hindsight tends to make everything seem easy – but it is certainly beyond pasting in a library like you describe.

I have to agree that you can create some very useful projects that do not deviate from a framework/library's documentation, or what have you. But we don't really know what the parent in particular is working on.


>What, in your mind, makes an engineering problem hard?

A problem which requires a high degree of creativity, intelligence, and technical ability, likely one which hasn't been solved before. You're right; it's a bit difficult (at least for me) to define, but we know it when we see it. Sending men to the moon was a hard engineering problem; implementing the UI for gmail was not.

You speak of using 'CS algorithms' in your UI's. I assume you're talking about things like optimizing a search of a list by using a better data structure or sorting algorithm. C'mon. You didn't solve these problems, other people did, you just did a little research, and this is basic stuff most of us can pull off fairly easily.

The vast majority of web development is not engineering. It is in fact just pasting together code other people wrote to solve a particular, relatively trivial problem.


Lol, you think you are spending 40 hours a week at the forefront of problem solving new challenges?

Let's be real, for a moment. By your definition of a hard engineering problem, very few people are spending their time doing it. Virtually no one is doing it every day.

Why pick on front end dev specifically? You think the average backend dev installing Flask is facing a lot of unsolved problems? You think the average game dev is rendering 3D models in some new, magical way?


So, I didn't want to say what I do because I didn't want to make it personal, but yes, I do. I work in biotech and design control systems for an immunofluorescent imaging device and associated computer vision algorithms used to identify circulating tumor cells in the bloodstream.

We just released a prognostic test for colon cancer which is actually relatively groundbreaking as it provides a new treatment path for people who previously had no options.

So yes, I feel that I am. I have been a part of small engineering teams delivering brand new technologies to the market my entire career.

And you're right, very few people, especially in software, are solving hard problems. I wasn't picking on anyone, I was responding to a comment.


So you are using a fluorescent fiberscope to identify tumors.

So you are implementing some vision library. How is that a hard engineering problem?

> So yes, I feel that I am.

That's the crux of the issue. You are doing exactly what you find so distasteful about web development: implementing existing algorithms to solve a specific problem.

Unless of course, your team is actually inventing new vision algorithms that somehow changed the field, which I'd love to see the paper on.


Stringing together some OpenCV functions isn't hard engineering.


> You didn't solve these problems, other people did

That's a high bar for engineering. Integrating known solutions into a situation is engineering. Mechanical engineers don't discover the laws of motion themselves.


So you're right about that, and I thought the same right after I posted. There is of course a scale here. Taking academic scientific research and turning it into something real is a far cry from changing e.g. an array to a hash table to improve lookup speed.


> I assume you're talking about things like optimizing a search of a list by using a better data structure or sorting algorithm.

You assume wrong. They were not straight up textbook problems, but I agree that I used a wealth of existing knowledge to apply it to my specific problems. That's what engineering is.

> You didn't solve these problems, other people did

Naturally. If you are working on finding novel discoveries the world has never seen before, you would be appropriately labeled a scientist working on science. Hence, also why we consider the algorithms I mentioned before to come from computer science, not software engineering.


You are comparing the sexiest problems of one field with the ordinary problems of another. This is just unfair comparison.

By saying that "we know it when we see it", you are acknowledging that this is a value judgment (nothing wrong with that, just don't present it as an absolute truth). Also, by your definition, sending humans to the moon is not a hard engineering problem. It has been done before; it involves pasting together some relatively well-understood components like rocket engines and guidance systems.


I think you have confused engineering for research. Engineering is not anything like you have described. I went to school for aerospace engineering.


Ok, then backend folks just do API integrations and database guys just make tables, no real technical challenges anywhere. Anybody getting stuff done is pasting together libraries.


>Ok, then backend folks just do API integrations and database guys just make tables, no real technical challenges anywhere. Anybody getting stuff done is pasting together libraries.

The original poster said nothing about frontend vs. backend. Still, you're absolutely correct, backend work is mostly gluing together pre-built libraries and software too.


> there's no real technical challenge in front end stuff

You're looking at this backwards. It isn't about the task. It's about the person who approaches the task. Someone "creative, intelligent and technically able" (to quote your later comment) will bring vision to any task.

Like water they will find their level, find the new and shining thing they can bring to the arena in which they find themselves.

These invidious distinctions contribute nothing to mutual understanding amongst computing professionals, all of whom need to work with each other to achieve great things. That includes the documentation people - shout out to them for their intelligence and creativity too.


There's engineering challenges in frontend but they only apply to Facebook-tier scale. There's also cooler stuff like data visualization, advanced animations/design and WebGL but those seem to be done by designers. For most of us, it truly is gluing X framework to Y backend over and over again.


Sure, but even then I wonder how much of that complexity (the scaling bit) is actually implemented by the front end devs. I would assume that it's handled on the back end, but of course a UI like FB, used by 1.7 billion people, can't be an easy thing to design.


At least at Google, "front end dev" extended down to the webserver tier, written in C++/Java. You were expected to make any changes necessary inside the webserver to complete your feature. That's part of what made hiring so hard: the number of devs who know Javascript and C++ and have their algorithms down cold is relatively low.

I imagine it's fairly similar at Facebook.


Of course, that's not the definition most 'front end devs' fall under.


"[T]here's no real technical challenge in front end stuff"

I have the distinct sense that you might not have done much of this.

There might not be many esoteric algorithms to implement, but there is plenty - more than plenty - to do in terms of managing UI state and back end integration, especially if you have stakeholders who want to see a variety of complex UI manipulations happening in response to various business rules. Which can often be internally self-contradictory. In a DOM that delights in being uncooperative, pretty much . . . always.

Frankly, it's enough to make using Java or a C variant for typical back end work look like a walk in the park.


What option do people have?

We created a universal interface for computing, and it is web programming. It does not matter if you are creating a CRUD application or an improved Google search; you'll have to present it the same way. Thus, unless you are willing to give up control over the presentation of your work, you'd better learn it.


When it's easy / boring as hell, why not make the computer do it?


Yes, let's just make a framework to put an end to... well... all that mess of frameworks... ugh... :)


There are a lot of tasks with enough variation to resist being automated (or at least it takes much higher skill to pull it off), but they are boring as hell after you do a few instances.


What do you do instead? I want to get into embedded/systems development or marine (ship systems) development -- but the vast majority of that work seems to be DoD and/or to require more than a Public Trust clearance, which I refuse to get.

Web dev work is just so easy to get, and pays well; at the moment the temptation is hard to ignore.


Nothing as cool as embedded development, database development or anything like that. Just general backend positions, working on the guts of something that may have a web layer, mobile layer, etc. Web jobs are plentiful and paid fairly well early in my career, but pay on the upper end has a pretty hard cap, so it's been a worthwhile move as there's more opportunity for career growth. It also better lends itself to specialization (ML, natural language processing, etc.) so that is another opportunity.


But one of the downsides of doing Linux systems programming is that it is much more niche and really restricts your options for employers as well as where to live.


Is that meaningful? Even if they're hiring, it's not like I'm going to get a good senior web dev job (pay, challenges, respect, opportunity...) in Boise or Nashville or wherever. Yes there are jobs, but unless you're going to work remotely, almost all of the good ones are clustered in a handful of metro areas anyway.


Containers are an exploding field, and basically "Linux systems programming". Sure, it's more niche than the web, but there are many, many companies hiring, all over the world.


Maybe in 5-10 years I'll consider taking a look at containers. Currently, they are a fad, not unlike React.


Hmm, why do you think they're a fad?

Virtualization as a way of controlling specific application environments has been around for decades. The current movement (kicked off by the hypervisor work over a decade ago) is towards running more virtualized environments now that hardware and software support is much better (reduced overhead associated with it, greater portability).

Do you mean that they're overused or that their value is overstated?


When Fortune 500s start adopting a technology, it can be considered mature and stable. Big companies move slowly for a reason. You wouldn't want your banking site to pull in the newest JavaScript magick; you want something that does dull-looking business logic and is rock solid. You brought up virtualization. That started in the '60s. It took 40 years for the technology to get to a level where it is widespread and adopted by everyone and their cat.

We will see if Docker and other container technologies get a hold in industries other than startups. For now, all the people I see using it are the cool kids on the block in their hoodies. I think boring technology is good technology in that it provides value and stability for everyone. I have yet to hear of one Docker adoption where, at scale, the company saved more than a three-digit sum year-on-year. Remember that you also pay extra for the sysops folks who run these things, and they will ask a higher price because the tech is hip!

For me, I'll stick to VMware and KVM for now.


>as well as where to live

Those very same companies offer remote work increasingly, as they struggle to find talent in the aforementioned field.


If somebody had found himself in Edinburgh in 1986 and bumped into a tall gentleman called Robin, who was a bit familiar with this new-fangled thing called computers, and had asked Robin what kind of programming language one should learn to use these computer thingies, what would Robin have said? Not sure, but maybe something along the lines of "well ... there are many interesting languages, and different languages are suitable for different purposes. But if you are interested, I'm dabbling in programming language design myself. Together with my students I've been developing a language that we call ML; maybe you'll find it interesting. With my young colleagues Mads and Robert, I'm writing a little book on ML, do you want to have a look at the draft?"

Maybe such a person would have chosen to learn ML as first programming language. If this person had then gone on to work in programming for 3 decades, and if you'd asked this person 30 years later, i.e. today, what's new in programming languages since ML, what might have been his answer?

Maybe something along the lines of: "To a good first approximation, there are three core novelties in mainstream sequential languages that are not in ML:

- Higher kinded types (Scala, Haskell).

- Monadic control of effects (Haskell).

- Affine types for unique ownership (Rust)."

Could I be that somebody?
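
For the third of those (affine types for unique ownership), here's a minimal sketch in Rust; the `Buffer` type and the names in it are just illustrative, not any particular library:

    // A value that owns a resource; passing it by value transfers ownership.
    struct Buffer {
        data: Vec<u8>,
    }

    fn consume(buf: Buffer) -> usize {
        buf.data.len() // `buf` is dropped (freed) when this function returns
    }

    fn main() {
        let b = Buffer { data: vec![1, 2, 3] };
        let n = consume(b); // ownership of `b` moves into `consume`
        println!("{}", n);
        // println!("{}", consume(b)); // compile error: `b` was already moved
    }

The commented-out line is exactly the kind of double use the compiler rejects: each value may be consumed at most once, which is the "affine" part.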


Haskell isn't quite “mainstream”, so I'm taking the liberty to add innovations from other “not quite mainstream” languages:

- Hygienic macros as a scalable tool for extending and redefining languages, and furthermore, making the extensions interoperable with each other (Racket).

- Language support for building reliable massively distributed systems in spite of individual node failures (Erlang).

- General-purpose programming with growable arrays, hash tables and no other data structures (okay, these ones are very mainstream).
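
As a toy illustration of that last point, here is what the "growable array plus hash table and nothing else" idiom looks like, sketched in Rust (the word-count example and names are made up for illustration):

    use std::collections::HashMap;

    // Word counting with nothing but a growable array and a hash table.
    fn main() {
        let text = "the quick brown fox jumps over the lazy dog the end";
        let words: Vec<&str> = text.split_whitespace().collect(); // growable array
        let mut counts: HashMap<&str, u32> = HashMap::new();      // hash table
        for &w in &words {
            *counts.entry(w).or_insert(0) += 1;
        }
        println!("{:?}", counts.get("the")); // prints Some(3)
    }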


Good points.

ML originally had Lisp-like macros, not sure about hygiene. Note also that one doesn't always want hygiene in meta-programming, although it is nice to have the option of hygienic macro expansion.

I explicitly restricted the comparison to languages for sequential computing. There has been a lot of novelty in concurrent programming.

Arrays and hash tables are data-structures that you can implement as libraries in ML, so I'd say that's not a language issue. Progress in data structures and algorithms has been considerable.


> I explicitly restricted the comparison to languages for sequential computing. There has been a lot of novelty in concurrent programming.

Oops, yes, my bad!

> Arrays and hash tables are data-structures that you can implement as libraries in ML, so I'd say that's not a language issue.

Yes, but the point is that nowadays we have languages in which it's “convenient” to design entire large applications around nothing but arrays and data structures. Also, that one was snark.

> Progress in data structures and algorithms has been considerable.

In CS, yes. In everyday programming, regress in data structures and algorithms has also been considerable.


   regress [...]  has also been considerable.
Thanks to Moore's law, most programmers even get away with it. And if they don't ... they do big data.


Oops, arrays and hash tables.


This statement is false

> Half of what a programmer knows will be useless in 10 years.

and the rest of the article seems to be based on it, so it negates much of what is said.

Foundational knowledge does not decay. Knowing how to estimate the scalability of a given design never gets old. Knowing fundamental concurrency concepts never gets old. Knowing the fundamentals of logic programming and how backtracking works never gets old. Knowing how to set up an allocation problem as a mixed-integer program never gets old.

In short, there are many things that never get old. What does get old is the latest fad and trend. So ignore the fads and trends and learn the fundamentals.
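
To make the backtracking point concrete, here's a minimal sketch in Rust (N-queens, chosen purely for illustration; the names are mine). The choose/explore/undo shape is the part that never gets old, whatever the puzzle:

    // Count N-queens solutions by backtracking: place one queen per row and
    // undo (backtrack) whenever a partial placement cannot be extended.
    fn safe(cols: &[usize], row: usize, col: usize) -> bool {
        cols.iter().enumerate().all(|(r, &c)| {
            c != col && (row - r) != col.abs_diff(c)
        })
    }

    fn solve(n: usize, cols: &mut Vec<usize>) -> usize {
        if cols.len() == n {
            return 1; // every row has a queen: one complete solution
        }
        let row = cols.len();
        let mut count = 0;
        for col in 0..n {
            if safe(cols, row, col) {
                cols.push(col);          // choose
                count += solve(n, cols); // explore
                cols.pop();              // undo (backtrack)
            }
        }
        count
    }

    fn main() {
        println!("8-queens solutions: {}", solve(8, &mut Vec::new())); // prints 92
    }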


Foundational knowledge does not decay.

True in a certain intellectual sense. And in practical senses, also.

But unfortunately, the hiring market doesn't filter for "foundational knowledge"; in realistic terms, it largely hires for what might best be termed "keyword compliance" -- that is, baseline exposure to stuff that barely existed, or, if it did, had barely registered on most people's radar more than 5 years ago. And sometimes with even shorter cutoffs than that. You know, stuff like Docker, React, Spark, etc.

That is, everyone says they're not out to hire dummies. But if you look at a lot of job ads -- a lot of people apparently just won't talk to you if you don't have a good number of "boxes" checked, corresponding, in many cases, to stuff that wasn't even around 4 or more years ago. Very few (or so it appears) seem genuinely interested in hiring generalists with "strong foundational knowledge".

I know there are many counterexamples. But if you go by the stated proclamations companies make about who should bother sending a resume -- for a lot of them, it's not foundational knowledge that gets you to a screening call -- it's keyword soup.


Are you saying that there aren't any jobs using long-term established stable ecosystems in the hiring market? No J2EE CRUD apps, no standard LAMP websites, no .NET stack-based jobs?


I wouldn't say there aren't "any" such jobs. And there are a lot of languages I never look at ads for, so I can't say I have any sense of what their market is like. And of course there are whole domains (like C++) where the ecosystem is just intrinsically older.

But by and large (and this gets back to the original article), once your core "keyword set" has gotten long enough in the tooth (which I'm pegging at an age of 5 years or so), the number of ballpark-open offerings (as in: you have enough XYZ that they'll consider talking to you) does seem to drop quite precipitously. And the more "skinny pants" your domain is (like web stuff), the more this rule seems to hold.

So the JS ecosystem would seem to be a bellwether example -- nearly every role seems to require either AngularJS or React (5 and 3 years out, respectively), or something similarly recent; pretty much no one is interested in jQuery or any of the older frameworks these days (except, as the other commenter stated, for small maintenance projects).


Sure, if you like 3-month contracts.


In a different state than the one you live in.

No relocation assistance.


How exactly are you contradicting that sentence you're quoting? In fact, it seems to me you're confirming it. He didn't say that everything a programmer knows will be useless in 10 years, but that half will. You're only enumerating the half that won't.


This doesn't speak directly to the statement you're refuting, but he explicitly mentioned half-life.

The author does basically ignore the core knowledge aspects of a programmer's career, and he bases his entire argument on the half-life of usefulness of detailed knowledge about specific technologies.


I agree with the grandparent that the writer overstated how much will be useless in ten years. It was almost 20 years ago that I first learned HTML, and since then neither it nor much else I have learned has decayed: CSS, native JavaScript, jQuery, PHP, PostgreSQL, Apache, and Bash. But I guess it depends on what you try learning.


Things you have learned that have since decayed: IE6 CSS fixes, Apache1 configuration file syntaxes, PostgreSQL getOrCreate surrogates, jQuery, ...


  > IE6 CSS fixes
I didn't really ever learn these. I stuck to simple layouts or used tables, which is fine (https://www.combatentropy.com/coming_back_to_the_table).

  > Apache1 configuration file syntaxes
I didn't learn Apache until version 2.

  > PostgreSQL getOrCreate surrogates
I never learned these. I don't know what they are.

  > jQuery
This hasn't decayed.


I think you're talking about the other half (which isn't useless).


> Foundational knowledge does not decay. Knowing how to estimate the scalability of a given design never gets old. Knowing fundamental concurrency concepts never gets old. Knowing the fundamentals of logic programming and how backtracking works never gets old. Knowing how to set up an allocation problem as a mixed-integer program never gets old.

Tell that to all the people hiring programmers, they seem to be unclear.


While I agree with the rest, there is one nit I must pick regarding:

>This statement is false: "Half of what a programmer knows will be useless in 10 years."

The idea of a half-life of working knowledge has been around for decades; see https://en.wikipedia.org/wiki/Half-life_of_knowledge
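
For what it's worth, the arithmetic behind that framing, reading the article's figure as a half-life of T = 10 years (my reading, not something the article spells out):

  remaining(t) = (1/2)^(t / T)
  remaining(10) = 50%, remaining(20) = 25%, remaining(30) = 12.5%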


Apparently old is a matter of perspective ... To me not quite 40 is still a young'in.

I'm over fifty and just got back from presenting at a major conference. I've managed to stay current through 35 years of embedded systems design (hardware and software) as well as a stretch of software-only business. It's really not that hard if you understand your job is to continually be learning. I must be doing it right because often those I'm teaching are half my age.

As an aside, I've done the management track and moved back to the technical track when I found it unfulfilling.


People over 30 feel 'old' every 10 years. That's nothing new; it's not even a 'programmers are 20-something' thing; people turning 30, 40, 50, or 60 have all been saying 'now I am old', while we stand to live to 90-100 (at least in western EU), so 60 is not that old. 40 (I'm 41) is a spring chicken, and I look forward to many years of telling my younger colleagues that the latest thing, however interesting to learn about, is not always better.


I never said that I was old ... One good sign is that my wife keeps telling me to act my age!


Nobody has any experience of being any older than they currently are, but a lifetime's memory of being younger. Hence most people of all ages feel old.

One trick I like to play on myself is to imagine I come back from twenty years in the future. What advice would I give myself? First thing would have to be "shut up about being old! Your life is still ahead of you."

I like the other trick too, where I imagine being visited by a teenage me and thinking what he would say about where I'm at. It can be an awkward conversation. Where's the Ferrari?

As a less whacky version, pay really good attention to your parents and your kids.


I use that time-travel trick as well; I never thought I was old (I like getting older so far; so many doors just open that were closed before), but yeah, things to tell your younger self: do not hurry so much. Make 10-year-plus plans when doing things. I always hurried, thinking something would end; I have been running companies since I was 15, and, for instance, the first company I co-ran with my uncle made educational software for MS-DOS and later Windows 3.11 and then Win95, etc. I was in a hurry because I thought first that MS-DOS would go away and then that Windows apps would go away because of the web. The software I made then still sells well; it's now over 25 years old... Why did I hurry/worry?

Things I thought would end, like the CMS market 16 years ago (a market my company thrived in), didn't end. They became bigger. If I hadn't hurried, I would have had less stress at the time and would probably be running at a larger scale than that company is now. You cannot stand still, and in some areas there needs to be a sense of urgency, but things don't change that much in most markets. Currently I use that to tell my colleagues we need a 10-year plan, not just a 3-5 year plan.


You don't need to learn React or Angular or another framework. Spend your time getting really good with your stack of choice. That could be a framework or something of your own creation. Do not go to work for a company that only wants to hire people familiar with a specific framework. It's a huge red flag. The work will be boring and the team mediocre. More often than not there will also be culture issues.

Great companies who have interesting projects will want to see what you've built in the past; the technology is just a tool. They will trust you to use the right tools for the job, and will respect you enough to let you pick those which you prefer.

For legacy systems, it's helpful to have some experience but it's not like you won't be able to be effective if you're good given sufficient ramp up time.

In my experience it's far better to hire the smart, motivated engineer who can actually get stuff done and has created high quality software before than someone who is an expert in a specific framework.

Also I avoid going to tech conferences about web stuff, unless it's a legitimately new technology. A new way to organize your code and conventions are not new technology, it's just some guy's opinionated way of doing things. And most of the talks are less about conveying useful information that will help you and more about the speaker's ego and vanity.


> Do not go to work for a company that only wants to hire people familiar with a specific framework.

So, filter out 99% of the jobs that are out there? (And 100% of the ones outside of San Francisco)?


One of the consequences of this wide-spread ageism is the amount of unnecessary, ill-conceived, and often dangerous wheel-reinvention that 20-something hipster programmers get away with.

Exhibit A would be NoSQL. Little more than a rehash of the hierarchical and network (graph/pointer) databases popular in the 1960s and '70s before the ascent of relational databases, these systems enjoy increasing popularity despite few, if any, advantages over relational databases besides allowing 20-something hipster programmers to avoid learning SQL and the ins-and-outs of a particular relational database (like PostgreSQL) and allowing VC-backed tech companies to avoid paying senior developers who already possess that knowledge what they're actually worth.

If these new data stores were at least as reliable as the older relational databases they are supplanting, it wouldn't be so bad. But they aren't. Virtually all of them have been shown to be much less reliable and much more prone to data loss, with MongoDB, one of the trendiest, also being one of the worst[1].

And these systems aren't even really new. They only appear that way to young developers with no sense of history. IBM's IMS, for example, is now 50 years old, yet it has every bit as much a right to the label "NoSQL" as MongoDB does--and amusingly, it's even categorized as such on Wikipedia.[2]

1) https://aphyr.com/posts/322-call-me-maybe-mongodb-stale-read...

2) https://en.wikipedia.org/wiki/IBM_Information_Management_Sys...


To me, it seems a bit like JSPs of 15 years ago, with all the logic in the presentation code, but I'm "old", so I assume I just don't "get it".

No, you get it. It's the people who get excited about stuff we tried and abandoned two decades ago that don't get it.


So I'm young. What makes JSPs bad, why aren't you using them anymore, and what are you using instead?


I think it's the "all the logic in the presentation code" that is emphasised as bad, and that is what you saw in typical JSP example code 15 years ago.

"Mainstream Java" then tried to sell EJBs as the answer, which was another world of pain.


Our (relatively young) "Angular" UI guy didn't know what a JSP was. Never heard of it.


You know, if truth be told, we really haven't come very far. You'd probably be surprised at just how well a modern COBOL system can operate. (http://blog.hackerrank.com/the-inevitable-return-of-cobol/)

In fact, in many ways we've made things worse because not only does the sand keep shifting, there is now way too much sand. Young people come into the field and they want to make their mark. So we are constantly going through "next big thing" phases, some big like OOP, some smaller like React, only later to realize that what seemed so very interesting was really a lot of navel gazing and didn't really matter that much. It was just a choice, among many.

I can only hope one day some breakthrough in C.S. will get us past this "Cambrian Explosion" period and things will finally start to settle down. But I am not holding my breath. Instead I am learning Forth ;)


> I can only hope one day some breakthrough in C.S. will get us past this "Cambrian Explosion" period and things will finally start to settle down.

I genuinely do feel like we're in the stone age in this industry right now. I've thought about it a lot, but of course, it's hard to really get to the good ideas when you can't hop on a lot of the stepping stones that will be found later and taken for granted.

I think a few things will happen. 100 or 200 years from now (if it's even appropriate to think on such a timescale!), we'll have some very large scale, stable data storage systems that people can simply rely on. A few common development paradigms will have thoroughly been cemented in our collective consciousness, and the programmers of the day will be essentially what construction workers are right now, perhaps with a bit more creativity. They'll follow plans and be put into rigid confines when programming with the system, and it'll be scalable from a development perspective.

I haven't got much further than that. It's hard to step more than a few layers deep on stuff like this. A lot of the rest of it depends on how things like AI and VR and a bunch of stuff I can barely imagine will pan out. But from a software point of view, I think we're still waiting on a bunch of 100-monkey-style revelations.

