
I'm a bit older than the author. Every time I feel like I'm "out of touch" with the hip new thing, I take a weekend to look into it. I tend to discover that the core principles are the same, this time someone has added another C to MVC; or they put their spin on an API for doing X; or you can tell they didn't learn from the previous solution and this new one misses the mark, but it'll be three years before anyone notices (because those with experience probably aren't touching it yet, and those without experience will discover the shortcomings in time).

When learning something new, I find that this group implemented $NEW_THING in a completely different way than that group did with the exact same $NEW_THING. I have a harder time understanding how the project is organized than I do grokking $NEW_THING. And when I ask "why not $THAT_THING instead?" I get blank stares, and amazement that someone solved the problem a decade ago.

Sure, I've seen a few paradigm shifts, but I don't think I've seen anything Truly New in Quite Some Time. Lots of NIH; lots of not knowing the existing landscape.

All that said, I hope we find tools that work for people. Remix however you need to for your own edification. Share your work. Let others contribute. Maybe one day we'll stumble on some Holy Grail that helps us understand sooner, be more productive, and generally make the world a better place.

But nothing's gonna leave me behind until I'm dead.




A little past 50 here. I can relate.

Yet some things have barely changed. More than 25 years ago it was Solaris or HP-UX, using C, talking TCP/IP and mucking about with SQL. Some of us were sad at SVR4 as we still preferred the BSD view of things. Running Unix and 10 terminals on some 680x0, learning to get efficient in vi. Some preferred Emacs. I was excited for the future, especially of hardware and the OS, as I'd seen it on the Amiga. I'd seen some of the crazy ideas being thought about, like we might have this mad thing called VR. That looks interesting. It's insanely expensive and made me feel a bit seasick, but maybe it'll be the cool fad of 1990. We'll see.

Fast forward to today. Oh. Well, there are a lot more switches, and security has changed some things, desktops have pushed ideas because they think they're phones, but I'd never have conceived how similar so much would be. BSD isn't forgotten, MySQL grew up and then some. Emacs and vi are still being discussed. Why aren't old programmers more popular?

I'm still interested in hip new things. I'm losing some enthusiasm though. There's an increasing number I haven't got to, as there's just too many of them. Too much is just fashion. So often I'm struck by how $NEW_THING gives something, but takes something or adds needless complexity, yet manages to be a variation on an aging theme. In frameworks the best $NEW_THING fashion changes hourly.

Leaves me rather disappointed, but I'm still looking. And hoping.


> Why aren't old programmers more popular?

Cost, and a reduced willingness to work the death-march hours that many places expect new hires to work (and that new hires are willing to work) to "prove" their worth.


What about applying concurrency to more problems? So many of the basic ideas were discovered and tried a long time ago, but widespread application of concurrency still hasn't caught on.


Concurrency is still as hard today as it was 40 years ago. Prudent developers avoid it whenever possible.


> Every time I feel like I'm "out of touch" with the hip new thing, I take a weekend to look into it. I tend to discover that the core principles are the same, this time someone has added another C to MVC; or they put their spin on an API for doing X;

But that is only for the 'abstractions' they implement; the details are another thing.

I feel the only thing that really changed is that now big companies with PR / marketing departments are on the case while before it was more a geek thing. There is nothing fundamentally different in new frameworks (heck, most of them are just rehashes of others that are many years old, with very small improvements), but suddenly the echo chamber makes it out to be vital to your career.

Like React. It's nice but come on... Every frontend programmer I know is fretting that they are not into it enough, because you will die or something if you are not. That seems like good marketing by Facebook to get such a solid drive behind it so quickly. But it's not needed for anything; you can still just use what you used before, often faster/better (because you are good at what you did for many years, right?), and you don't have to bother with learning the latest thing all the time, whereas with React, because you are drinking the Kool-Aid, you have to update/refactor/redo stuff often because of changing libraries and new insights. It would make you stressed if you feel you have to keep up with all of that.

Also, some tools seem to have just been made to look smart. Really, something like Webpack doesn't have to be that obfuscated. It really looks like it has been made like that just to say: 'ah, you just do not get it'. I see people (20-somethings included) really sweating at their keyboards when trying to figure out more than the basics of that thing; so why are people using it? Why is it actually popular?


> React

Similarly on Flux: I implemented an immutable store where only incoming messages change state and changes are distributed to widgets in 2005.
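
The shape of it fits on a page. Here's a minimal sketch in Go rather than whatever we actually used back then (the Store/Msg/Dispatch/Subscribe names are purely illustrative): one owner of the state, incoming messages as the only way to change it, and a fresh immutable snapshot pushed out to every subscriber.

    package main

    import "fmt"

    // Msg is the only way to request a state change.
    type Msg struct {
        Key, Value string
    }

    // Store owns the state; nothing outside its loop goroutine touches it.
    type Store struct {
        msgs chan Msg
        subs []chan map[string]string
    }

    func NewStore() *Store {
        s := &Store{msgs: make(chan Msg)}
        go s.loop()
        return s
    }

    // Subscribe registers a widget; for simplicity, subscribe before dispatching.
    func (s *Store) Subscribe() <-chan map[string]string {
        ch := make(chan map[string]string, 1)
        s.subs = append(s.subs, ch)
        return ch
    }

    // Dispatch sends a message; it is the only entry point for changes.
    func (s *Store) Dispatch(m Msg) { s.msgs <- m }

    func (s *Store) loop() {
        state := map[string]string{}
        for m := range s.msgs {
            // Build a new snapshot instead of mutating the old one.
            next := make(map[string]string, len(state)+1)
            for k, v := range state {
                next[k] = v
            }
            next[m.Key] = m.Value
            state = next
            // Distribute the change to every subscriber.
            for _, ch := range s.subs {
                ch <- state
            }
        }
    }

    func main() {
        store := NewStore()
        updates := store.Subscribe()
        store.Dispatch(Msg{Key: "title", Value: "hello"})
        fmt.Println(<-updates) // map[title:hello]
    }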

Around the same time we started using feature flags in our application.

We also did CI around 2000. Didn't call it that of course.

Never did it myself but heard of TDD long before it became a thing. Again under a different name.

Erlang. Superior technology that forms the heart of new kid on the block Elixir. Built long before the Bay Area CS grads re-invented ageism and claimed that young people are 'just smarter'.

Sure some things are new, but there's just no respect for experience any more.


What about Go? I like Go a lot because I feel like they did hit the mark and they did learn from their predecessors. Strong standard library. Clean syntax. Easily compiles to every major platform and architecture (all you need to do is set an environment variable). Everything statically compiled (as a software distributor, this is super nice!). Phenomenal testing framework.
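
To make that concrete, here's roughly what it looks like (a throwaway greet package, not from any real project). The testing framework is just the standard library's testing package, and cross-compilation is driven by the GOOS and GOARCH environment variables, so "set an environment variable" is only a slight understatement: it's two.

    // greet_test.go: `go test` runs this with nothing but the standard
    // library's testing package. Cross-compiling the same code is just a
    // matter of environment variables, e.g. GOOS=linux GOARCH=arm64 go build
    // (two variables rather than one, strictly speaking).
    package greet

    import (
        "fmt"
        "testing"
    )

    // Hello is a stand-in function so there is something to test.
    func Hello(name string) string {
        return fmt.Sprintf("Hello, %s!", name)
    }

    func TestHello(t *testing.T) {
        cases := []struct{ in, want string }{
            {"Gopher", "Hello, Gopher!"},
            {"", "Hello, !"},
        }
        for _, c := range cases {
            if got := Hello(c.in); got != c.want {
                t.Errorf("Hello(%q) = %q, want %q", c.in, got, c.want)
            }
        }
    }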

It's not all that innovative, it's just very easy to use, and very well put together.


I tried it and was immediately put off by the monumentally dumb approach to package management (or rather: its non-existence), lack of generics and approach to exception handling.

IMHO, if Go were invented by anybody other than Google it wouldn't have enjoyed anywhere near the success that it did.

I'm usually extra suspicious of "hot" open source technologies with a marketing budget and/or tech behemoth behind them. It's not that they can't be fundamental steps ahead; it's that the marketing can end up causing undeserved popularity.


I'm in the camp that actually prefers -- strongly prefers -- Go's style of error handling over exceptions. It's not a perfect language of course, but when you say it's successful only because Google marketed it, I think you're off by quite a lot. Go is popular in part because of the things you hate that other people like. Also, Google hasn't really done any real "marketing" of Go that I can see. They released it with what amounted to enough fanfare to say "Hey, here's a thing. We think it's pretty cool. Here you go."
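
For anyone who hasn't written Go, the style being argued over is simply errors as ordinary return values that you check at the call site, roughly like this (readPort and the PORT variable are made up for the example):

    package main

    import (
        "fmt"
        "os"
        "strconv"
    )

    // readPort shows the errors-as-values style: no exceptions, no try/catch,
    // just a second return value you check (or wrap) right where it happens.
    func readPort() (int, error) {
        raw := os.Getenv("PORT")
        if raw == "" {
            return 0, fmt.Errorf("PORT is not set")
        }
        port, err := strconv.Atoi(raw)
        if err != nil {
            return 0, fmt.Errorf("parsing PORT %q: %w", raw, err)
        }
        return port, nil
    }

    func main() {
        port, err := readPort()
        if err != nil {
            fmt.Fprintln(os.Stderr, "falling back to 8080:", err)
            port = 8080
        }
        fmt.Println("listening on port", port)
    }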

And the point of Go is that it isn't trying to be "fundamental steps ahead". If anything, Go is an opinionated statement that programming was better in 1992 than it is today, so let's just fix the annoying shit from 1992 and pretend the word "enterprise" had continued to just mean the ship from Star Trek.


I have similar gripes with Go. I really love programming with CSP techniques and Go does a good job of implementing a lot of this at a lower level. Clojure also draws from CSP and from Go's implementation of CSP, but lacks some of the nice things Go has going on here at lower levels; however, I end up preferring Clojure for so many other reasons. I still would not shy away from Go, but it's far from a perfect language and doesn't really improve on a lot of things that similar languages don't do well.
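
To be concrete about what I mean by CSP at a lower level in Go: goroutines that share no state and talk only over the channels they are handed. A toy sketch, nothing more:

    package main

    import "fmt"

    // square is a CSP-style process: it owns no shared state and talks to the
    // rest of the program only through the channels it is given.
    func square(in <-chan int, out chan<- int) {
        for n := range in {
            out <- n * n
        }
        close(out)
    }

    func main() {
        in := make(chan int)
        out := make(chan int)
        go square(in, out)

        // Feed the pipeline from another goroutine so main can drain results.
        go func() {
            for i := 1; i <= 5; i++ {
                in <- i
            }
            close(in)
        }()

        for result := range out {
            fmt.Println(result) // 1 4 9 16 25
        }
    }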

I think it's not just Google, but the cult of Rob Pike, Ken Thompson, etc. I appreciate their work on so many things, so I don't mean any ill-will. I was a user of the Acme editor and Plan 9, but just like those projects, Go has so many polish issues in critical areas. The package management alone is ridiculous. The common theme in all these projects is that for every good idea, there are an order of magnitude more critical mistakes or divisive ideas that cripple the end result in some way.

Go is alright, but it's not really that interesting to me or a monumental leap. There are a few ideas that others could cherry-pick and put into much better designed languages to take it all a step further in my opinion.


Bingo! :)

I'm a 50+ developer who has increasingly felt that "life is too short" to spend all my time trying to keep up with the latest fads.

That being said, I have found myself doing more "fun" programming in Go lately, precisely because, as you said, they hit the mark with how simple and clean the language is, with little of the baggage that comes along with a lot of languages and frameworks (I'm looking at you Rails and React) today.


I am a language freak (hard to admit, as who isn't these days?) but I have to say I have not tried Go yet... so yes, I must.


As a language geek myself, I've been working a lot with Elixir lately at $WORK, which has been an absolute dream. There are pioneer taxes and bug fixes we have to push upstream occasionally to some of our dependencies, but it's been extremely pleasant so far. Clean syntax, well-thought-out internals, a 30-year-old host platform designed for server programming, Lisp-style macros (it really feels like a Lisp), an "explicit over implicit" mentality, immutable everything, and a wonderful "Rails without magic" web framework to go along with it.


That almost sounds exactly like the Java argument from the late 90s

clean syntax, standard libraries, write once, run anywhere


React is a lot more than it seems. The essential problem it tries to solve is componentization: how can you have a large team of varying skill work on separate parts of a page in a web application without one part breaking the others in hard to detect ways?

Many people focus on JSX or virtual DOM and muse about how that just duplicates the browser-- what's the point? But that isn't the point. It's componentization and encapsulation for large teams at scale, and for Facebook it works, it makes them more successful at pulling together disparate modules on a single page than their competitors, so there is some meat to this $NEWSHINY.

Now, if the browser itself had been properly designed years ago to be componentized and encapsulated, we'd have those features without inventing a new DOM or creating a new tagging language or isolating CSS. There is a web components standard that offers much of the same thing... It came out around the same time as React, but still isn't widely used outside of Google Polymer. Truth is these were developed in parallel before they knew of each other.

Anyway, React makes a lot more sense if you think of it as rewriting the browser from outside the browser. In the old days a stack like that would have been hard to impossible -- you would have simply started your own browser. And people would have shamed you for not following existing standards, so you would have petitioned the W3C or served on committees to bring your real-world cases for years, being mostly ignored.

Now, devs have the ability to change the browser quickly and find things that work (and also create chaos and suffering).

Meh. Kids. ;)


Honestly mate, you're just talking about the same old, same old.

Every framework is about componentization and encapsulation.

You could take React out of your post and replace it with any framework name in the last 40 years and it would have made 'sense' at the time.

Like the author said, right now a veteran looks at react and sees the mistakes of 15 years ago when we mixed code and presentation.


The Webpack story is interesting because its author isn't (well, wasn't) even a JS developer in his day job, nor part of the JS community. I don't hate Webpack but I really have no idea where it came from. Browserify was made by one of the Node ecosystem's top contributors, so at least that makes sense.


And webpack's documentation is the spawn of Satan. Going outside the typical use cases is downright impossible. While it has the widest feature set out there, it's incredibly difficult to believe in its reliability. That said, since you can always QA your bundle, reliability isn't as much of an issue I guess...


> And webpack's documentation is the spawn of Satan.

An insult to our Master, Satan would never create something so awful :P


FWIW webpack's docs are getting much better for v2


Late forties developer here.

> Every time I feel like I'm "out of touch" with the hip new thing, I take a weekend to look into it.

There's my problem right away. I can't just "take a weekend" to learn some new shiny thing. I have a partner and children who I want to be with at the weekend. And I'd rather go climbing or hiking or even just go out on my bike, than learn another damn api. Twenty years ago I had evenings and weekends to burn. Now I don't.


I'm 26, and modulo the kids, this is how I feel as well. I've been programming professionally for almost ten years, and I feel less and less pressure to keep up with the latest packages on NPM or whatever the flavor of the month is. It's much more interesting to know what has been tried before, and why that didn't stick. Like this newLISP thing, why should we suddenly start writing Lisp? The language is older than C, for god's sake.


> why should we suddenly start writing Lisp? The language is older than C, for god's sake.

If so many "modern" languages still copy features from Lisp (hello C++) then why not use the real thing?

All those nice "new" features which Python and C++ are praised for (lambdas, closures, list comprehensions) have been available for almost 50 years. Lisp was way ahead of its time. It just lacked the hardware power which we have today. It is still ahead of our time. Consider Lisp macros, Genera and MCCLIM. MCCLIM is an interactive GUI for shells which has been neglected for decades. It is being revived right now to make it available for modern Lisp distributions. Modern Lisp just lacks one thing to be the real deal: a native Lisp Machine.

https://common-lisp.net/project/mcclim/


"If so many "modern" languages still copy features from Lisp (hello C++) then why not use the real thing?"

Because if it hasn't "succeeded", for suitable definitions of "succeeded", in 50 years, then as an old fart approaching 40 myself my assumption is that there is a good reason. I was much more willing to believe the "everybody else is just stupid and can't see the obvious brilliance" reason when I was younger, but as I've learned more, I've learned just how many factors go into a language being successful. Lisp may have all the features, but there are also good reasons why it has never really quite taken off.

That said, it is also true that using languages that bodge Lisp-like features on to the side after several decades of usage is generally inferior to languages that start with them. It's one of the reasons I'm still a big proponent of people making new languages even though we've got a lot of good ones. (I just want them to be new languages somehow; a lot of people just reskin an existing language with a slightly different syntax and then wonder why nobody uses it.)

But those reasons have all been listed before, and I don't expect that listing them again will change anything, so I'll skip them here.

(I did a quick check on Google Trends; Clojure seems to not be growing. It's not shrinking, but it's not growing. That was Lisp's best chance for growth I've seen in a long time, and that window has probably now closed.)


The reason why Lisp is not mainstream is its power. It makes writing DSLs extremely easy, so that everyone can write his own DSL to solve a certain problem. Such a style of writing, however, makes Lisp code unsuitable for group work, and hence unmaintainable. It explains (imho) why there are so many unmaintained Lisp projects.

Clojure has a different problem. It is based on the JVM infrastructure, which was (imho) an unfortunate decision. Access to Java features is nice but dealing with Java stack traces is not fun. Also, the usual startup time of Clojure apps in the range of seconds is not acceptable (Tcl/Tk apps start in a fraction of a second). AFAIK Clojure is also not suitable for mobile app development. The Clojure developers should have used their own VM, or they should have provided a Clojure/Lisp compiler for native compilation. LuaJIT has demonstrated how incredibly fast a suitable VM can be.


There's a lot of truth to this. One thing I've noticed is that there are many different styles of writing Common Lisp; many of the older styles (heavy use of lists as data structures or very heavy use of macros) tend to not work well (at all!) in team software engineering environments. OTOH, if you agree on some coding standards with your team, team software engineering with Common Lisp can be a real pleasure. Of course that's true in any language, but if you're a lone ranger, CL makes it much easier to shoot yourself and your team in the foot than most other languages.


I have felt some of the pain and gripes you have with Clojure, and while I am not denying that some of them are very real, I am not sure you understand the mentality and philosophy behind Clojure, at least as most of us who use it have come to see it. Put succinctly, it is to balance practicality with features to get real work done.

Clojure is quite wisely based on the JVM because the idea was not to create a Lisp replacement or a new Lisp, but rather a practical Lisp. Some of the historical problems with Lisp and many other languages like Smalltalk were related to specialized hardware, development environments, and/or ecosystems. Clojure did away with most of these concerns by attaching itself to one of the largest existing environments.

Using the JVM was a wise decision that has/had many advantages not limited to:

- Ability to leverage an already huge ecosystem of libraries, tools, servers, etc. that are well-tested

- Already battle-tested and working package system - this is severely underestimated by some popular languages

- Justifying its existence as a tool alongside other JVM languages within an organization, not a full replacement

- Existing, highly optimized JIT

- Reasonably fast, nearly for free

Originally there were some plans to expand more to other environments, for example the CLR, but the JVM got the most attention and in the end this was practical.

As for some of the real drawbacks:

- Clojure alienates Lisp zealots and people who could never grasp Lisp. IMO, this is a stupid person/psycho filter so I don't see it as a drawback but it's worth noting.

- Startup time, as you note. Startup time sucks, but it has gotten better and there are workarounds that are the same ones you could use for Java apps traditionally. The mobile dev issue you note is somewhat wrong; without going into a huge explanation, one option is to more or less enjoy a huge part of Clojure via ClojureScript, or in other words, targeting JavaScript and using things like React Native.

- Garbage. This is an issue of a lot of things on the JVM and Clojure is no exception. You can work around this somewhat with specific coding practices if you need to, but yeah, Clojure isn't going to be good if you can't fathom an eventual garbage collection cycle.

- No TCO, which is mainly a JVM issue if I remember right. This would be great, but it's a tradeoff and Clojure has negotiated this somewhat by providing what I feel is more readable code than when I worked in certain Lisps.

- Some legacy baggage in the standard lib, for example Clojure.zip. Recently they've been more brutal about what can and cannot go into standard libs. Every language though suffers from this a bit.

Regarding developing its own VM, I think you again miss the point about practicality. If Clojure did this, it would have been even more years before its release. Moreover, comparing to Lua is a bad example as it is a very different language (yes, I used Lua professionally). Lua achieves a lot by really keeping it simple, and while there is merit to that, Lua leaves a ton to be desired which I won't get off-topic about here.

So Clojure could work better in its own VM, but then you'd lose the JVM ecosystem along with many other things. I personally would rather have the ability from the beginning to reach for a huge number of libraries rather than have nothing but what other people writing the language provide, or crude things like calling back into C. There are many talks about all of this, many from Rich Hickey himself. I think you really missed the point of Clojure, and I am more of the mindset that I am glad it exists and is not in an endless state of flux, so that I can use it today, get things done, and not have it relegated to some research language I could never justify in a workplace. And no, I am not a Clojure zealot; I use about a dozen languages in any given year depending on my project and interests. There's a lot I prefer in Lisp over Clojure, but I see Clojure as taking some lessons from various Lisps rather than trying to be the one true Lisp.


I understand why Clojure deliberately focuses on the Java ecosystem. If I were still in Java development today I would likely use Clojure. If I were in web development I would likely use ClojureScript. Currently I prefer the Emacs/SLIME/SBCL toolbox, which is more responsive (and Nim, by the way).

My humble two cents to the Clojure team:

1) You should implement a cache mechanism ("save-image") for native code so that at least the annoying startup time of clojure apps is gone. I wonder why Java doesn't support native caches to this day.

2) The weird Java stack trace problem could be solved by providing an individual stack tracer which is close to the source code. I know that this is not possible for Java libraries, but at least Clojure stack traces should be presented in a more convenient manner.


Hence Pixie [1], which builds a Clojure-inspired language on a JIT VM. I'm not sure what the status is, but it appears to be active.

[1]: https://github.com/pixie-lang/pixie


> Clojure seems to not be growing … that was Lisp's best chance for growth I've seen in a long time

I wouldn't call Clojure a Lisp; it's a Lisp-like language with some interesting ideas, but not really Lisp at all.

As to why Lisp has failed to take hold where other languages have succeeded, I'll use a G.K. Chesterton quote: 'Christianity has not been tried and found wanting; it has been found difficult and not tried.' It's not that Lisp itself is difficult; it's that Lisp is different from lesser languages, and people have difficulty learning something different from what they're used to. Lisp has not been tried and found wanting; it has been found different and not tried.

So … try it! You may be surprised.


https://www.theguardian.com/society/2016/apr/07/the-sugar-co...

"In 1972, a British scientist sounded the alarm that sugar – and not fat – was the greatest danger to our health. But his findings were ridiculed and his reputation ruined. How did the world’s top nutrition scientists get it so wrong for so long?"

Without advancing an argument for Lisp, I'd just note that it's possible - in any discipline - for something to be decades (or more) ahead of its time.

Of course, that wouldn't mean that everything old is avant-garde just because it said it was. But, if people are still or suddenly looking at it, and looking at it hard, I would at least take that as a signal to give it some due consideration. It's apparently fighting age, and winning to some degree.


I agree with the parent about "new features" and I'll add a few things.

I can't stand the mentality of old = bad or not popular = bad. Most of the time, like anything in life, things don't "win" for being the "best" technically or for the most merit. There are usually many factors at play, and marketing, misinformation, stupid people, timing, and more have a huge role. While some of these factors may be good reasons, for programming, many of them are rubbish. We go down the bad paths more than the good ones in computer science it seems.

There are countless technologies that were way ahead of their time and for various reasons didn't end up market leaders today. Lisp is one, Smalltalk is another. Even Object Databases and various permutations of "NoSQL" databases existed for a long time. CSP is yet another that is being "rediscovered" via Go, Clojure, and some other languages. The list goes on for hours. Whether it was/is hardware or many other reasons, these things didn't "succeed." Despite that, it doesn't make a technology or idea useless if it's a good fit for the task, nor does it make it worth ignoring, not learning, or improving.

If there's one thing I've noticed from my considerable years in software dev, it is that most people are wrong most of the time about most things. Look at science - the field has historically been full of naysayers, people who cling to the past for their own agendas, saboteurs, fools, politically or religiously motivated morons, and so on. If we just always went with what the masses say or for that matter, even the so-called "experts," we wouldn't have any scientific advancement at all. Looking back in science, we can also see that many people had discovered or nearly discovered quite a lot. Although somehow we eventually unearthed some of these discoveries, the advancements they created never came in their lifetime or even century or eon.

There's a lot wrong with modern Lisp today, but most of the foundations are solid, and it gets pretty tiring seeing people use Lisp as a pejorative, especially if it's because they lack knowledge about it or do not understand it, or perhaps worse, have never used it.


I'm still amazed that Python is popular. It is in every way a dumbed-down Common Lisp. Common Lisp is also compiled to native machine code, so it runs dramatically faster than Python. (The fact that Python isn't compiled has nothing to do with its dynamism and everything to do with how good the compiler writers of Common Lisp are.)


> I'm still amazed that Python is popular. It is in every way a dumbed-down Common Lisp.

The smaller and more focused feature set is probably one of the reasons it's popular. There are also ergonomic reasons, which have a big effect on whether people who aren't initially fully committed tune out or blow up early on in their encounter with the language.


I guess you're right. I get annoyed hearing people say "all the parentheses drive me crazy" but I'm driven crazy by the indentation-bracketing of Python. I'm sure once I got used to it in a good editor it wouldn't be an issue.


I'm 26 and I'm just starting my career. You're making me feel old.


i'm 28 and am still useless. don't feel bad.


NewLisp is over 25 years old.


> It's much more interesting to know what has been tried before, and why that didn't stick. Like this newLISP thing, why should we suddenly start writing Lisp? The language is older than C, for god's sake.

I wouldn't say Lisp "didn't stick"; it's been in continuous use for half a century. Writing Lisp code would be anything but "sudden".

Of course we can say the same thing about COBOL, but that seems mostly due to the inertia of legacy applications; its target demographic of business applications now favours languages like Java.

On the other hand, new projects are being written in Lisps, and it's still spawning new languages (e.g. Common Lisp, Clojure, Kernel, all the Schemes and their descendants, etc.). This seems to indicate that people want to use/extend Lisp, rather than having to use it (although there are certainly legacy Lisp applications out there, which may or may not be horrendous to maintain).

Also, as Alan Kay points out, Lisps frequently "eat their children": someone invents a "better language", then someone else figures out a way to do the same thing as a library in an existing Lisp. This means that very old dialects/implementations may be perfectly capable of using paradigms/fads/etc. which were only invented/popularised much later, e.g. CLOS for OOP or call/cc, shift/reset, etc. for coroutines/async/promises/etc.

In contrast, those languages which truly "didn't stick" are seldom heard of, outside the "inspired by" sections on Wikipedia. Many of them are uninteresting, such as machine- or company-specific dialects, which died along with their hardware/sponsor. Others can be very informative, especially regarding "flavours of the month" and "paradigm shifts":

- Simula (arguably the origin of OOP as found in C++, Java, C#, etc.)

- ALGOL (the archetype of lexically-scoped procedural languages, like Pascal, C, Go, etc.). Actually, I still see "idealised Algol" discussed in cutting-edge programming language research, so maybe it still has some life!

- SNOBOL, which enjoyed great success in the world of string manipulation, and now seems to be completely replaced by awk/sed/perl/tcl/etc.

- MUMPS, which integrated a key/value database into the language. Still used, but seems to be for legacy reasons like COBOL (e.g. see http://thedailywtf.com/articles/A_Case_of_the_MUMPS )

- Refal, which relies on pattern-matching for evaluation (now widespread in the MLs (Standard ML, OCaml, F#, Coq, etc.) and Haskell-likes (Haskell, Clean, Miranda, Curry, Agda, Idris, etc.)). Also notable for using supercompilation.

- ABC, the prototypical 'scripting language', and a direct influence on Python.

- FP, which emphasised higher-order programming.

- Joy, a pure functional language based on stacks and composition, rather than lambda calculus.

- REXX, widely used as a "glue" language; I think AppleScript has comparable use-cases these days (I don't know, I've never used an Apple OS). Seems to be supplanted by shells and embedded scripting languages (e.g. Python, Lua, JS)

- Self, famous for prototypical inheritance and the origin of the "morphic" GUI.

- Dylan, effectively a Lisp without the s-expressions. Created by Apple, but quickly abandoned.

- Prolog, a logic language based on unification. Still has some users, but didn't take over the world as some thought it would (e.g. the whole "fifth generation" hype in the 80s).


The ICON language is another good one that never took off. It was the successor to SNOBOL and was built around generators and goal-directed programming; two things that are again prominent in stream-based metaphors.

More about Prolog: Prolog, like Refal, had pattern-matching long before the modern functional languages. It's an extremely useful feature. Prolog is also the best database query language ever invented, which is why systems like Datalog and Datomic borrow from it heavily.


Another (very) late forties dev here too. Your use of the word "burn" concerns me. I'd prefer the word "invest".

As I've said before on this subject, if you're careful what shiny new thing you bother to learn, and restrict yourself to doing it every couple of years rather than every weekend, I find the family and kids will never notice.

And anyway, at our age time flies so damn fast the kids'll be grown up by next weekend anyway :)


Early fifties here.

> Your use of the word "burn" concerns me. I'd prefer the word "invest".

I agree, but... sometimes it really is "burn". I don't keep up with front-end tech these days. If/when I need to work with it I'll learn it, and not before. If I spent time getting up to a real working knowledge now, there's every chance that in, say, 4 years when I needed to work in that space that I'd have to start learning from scratch again rather than brush up.

I do spend time learning new (to me) things, but I'm very selective. I devote most of my time to learning things that will give me alternate ways of doing things, rather than learning every nut and bolt of the latest framework.


Good point. Appreciate your concern, but I wasn't being clear.

By time to "burn" I was referring to all the time I had in my twenties: when spending an evening or a day or a weekend doing nothing in particular didn't seem like much of a waste.

Whereas now, "me-time" comes in much smaller units (maybe an hour after the kids are in bed and before I get tired). And I'm very careful to invest it: I recently spent eight months studying UX on Coursera, and am getting back into electronics and Arduino after a break, for example. I just don't have big blocks of time any more. Certainly not whole weekends.


I'm 39. There's no denying tech requires you to constantly keep up. But in my opinion, it's not as bad as most make it out to be. For the most part I read blogs while lying in bed at night (about 15 minutes or so), and I always have a side project going that I get to here and there. I still maintain a healthy life balance and get to enjoy things outside of tech just fine.

To be fair, I don't have kids. I'm sure that's a huge factor.


The frequency with which "hot new things" appear isn't every week. I'd venture a guess and say one weekend every two months. I don't know where you live, but here the weather isn't perfect for 52 weekends of the year. Some weekends it's not even worth sitting in front of the computer - total laziness with the family is ideal.


Soon to be in my forties. When I need to take a few days to look into something, I do it as part of my work. That's part of the upkeep of having a senior and experienced programmer.

Sometimes, my boss asks me about a new fancy tech. "I'll look into it" means I'll take a few hours of my time to give a good appraisal of it.


While I'm not old, I have been a professional developer for 17 years and feel that first downward curve in the author's graph (and I get frustrated at younger programmers reinventing the wheel from time to time).

However, I never find myself confronted with things that are not genuinely new and interesting to learn and work on. A lot of these things are not new at all, but they are new to me: statistics, linear algebra, machine learning, compiler construction, PL research, model verification, graph algorithms, calculus, engineering modeling, vectorized programming, GPU programming, geometrical computation, and it just goes on and on. In each of those, you will have the fads of the day, the current hot framework, the second current hot framework, the old framework that works better. At the end of the day, I get the textbook, look up university courses on youtube, pick whatever framework shows up first in google, and spend time on the fundamentals. As a crude example, I may have to look up how to do a dot product in numpy / matlab / mathematica / c++ / R every second day, and when I'm learning something most of my programming is SO-driven, but I also can perfectly write a dot-product in clojure/factor/elixir/arm assembly if you asked me to, and then do a vectorized dot-product in CUDA/Neon SIMD/VHDL because I spent time on the fundamentals. The best thing that happens is when you start to see how one technique appears in so many different fields (for example SVD).

Nothing is new, but most of it is new to me.

After that I do spend a significant amount of time researching my tools (IDE, supporting apps, build systems, frameworks, compilers, programming languages), but that's the craftsmanship part of it, and is kind of like doing the dishes and going to the farmer's market to have a nice kitchen to cook in and great ingredients to cook with.


There's an eternal war between "avoid reinventing the wheel" and ahistorical "not invented here", isn't there?

Because the profession is so heavily skewed towards the young and self-taught, people don't seem to know about the solutions of a decade ago and their merits and demerits. This is partly why software componentisation as "parts catalog" has never really taken off. It's easier to reimplement or independently reinvent something than it is to find the right existing solution and learn it.

It's as if it were easier to turn every bolt you needed on a lathe than to go to the shop for them.

(The closest might be npm, but then we see what an engineer would call an excessively large bill-of-materials, as trivial projects pull in thousands of dependencies)


Software componentization never took off because every attempt at it either leaks abstractions like a sieve or is so purely functional as to be impractical for real world use.

Shall I list the componentization tech of yore? CORBA, OpenDoc, Java Server Faces, COM, DCOM (oh god, CORBA again!), SGML (the original component framework), XML zoo (oh god, CORBA a third time)... JSON zoo (a flippin' fourth time, are you kidding?!)

componentization is something we're still figuring out. Alan Kay and SmallTalk were the closest to get to it (see previous comment about practicality though) and the mainstream just now is starting to think of JS and Ruby as "novel". NPM? Please.

We have a long way to go before componentization actually works. So yes, I guess I agree that it's simply easier to reinvent things to a specific context than solve the problem of sharing code context-free.


You forgot microservices. I'd forgotten Java Server Faces, was that related to Java Beans?

I'll join you in a glass of "oh god, CORBA!" At least one good thing about the web is that people have given up hoping that RPC could be transparent.



CORBA and DCOM were really not great but I don't think you're qualified to have a go at these things if you think that SGML is related to DCOM or that XML and JSON are "componentization tech".

I'd also note that despite how unfashionable it is and was, COM was a remarkably successful component framework. A lot of Windows apps use COM heavily and not because they were required to do so - they componentized themselves using COM because they wanted to and it delivered real value to them.

In addition, I'm not sure how you are defining "component", but the term is rather similar to library, and modern apps frequently pull in enormous quantities of libraries. It worked out OK, actually.


That's the constant battle. "Oh, this library kind of does what I want, but this behavior kind of sucks, and this bug hasn't been fixed... Do I shoehorn it in and write all the glue and adapter code? Fuck it, I'll just start over, and build yet another grid component, or datepicker, or rich-text editor."


"This is partly why software componentisation as 'parts catalog' has never really taken off."

Well, that and the fact that OO turned out to not be a very good mechanism for building the parts catalog on. In the end I'd judge it as only slightly more successful than procedural programming on that front.

For instance, Haskell's "parts catalog" is somewhat smaller than other languages'. But the parts do what they say they will, and generally go together pretty well once the community is done chewing on them and fixing them up. (Here I mean fundamental tools like parsers or text template systems, not merely "libraries to access this API" or "bindings to this particular library".) All those restrictions that go into Haskell are there for a reason.


>There's an eternal war between "avoid reinventing the wheel" and ahistorical "not invented here", isn't there?

There is an eternal iteration between reading disgusting documentation (or nonexistent documentation) and finding the hidden shortcomings of existing solutions - and just doing the only "open source" that is accepted in every company: rewrite it yourself.


   eternal war between "avoid reinventing the wheel" and ahistorical "not invented here"
Did you intend to say something else? Because that's the same thing twice: you reinvent a wheel because the other wheel was 'not invented here', so if you avoid reinventing a wheel, you are suffering from NIH.

It's hard to avoid reinventing the wheel if all you know is what was invented 'here' and 'recently'.


The more I learn about fundamentals (recently non-determinism + predicate logic) the more I realize the trends are shallow (or not so shallow) obfuscations of the same basic blocks. And pardon the following hint, but I found FP a pretty good vehicle for expressing these blocks in an abstract manner: binary operations, composition, accumulation, iteration, induction, state transitions.


What's the last truly new thing you can think of? I'm interested because I am young (22) but have studied programming language paradigms and history and I also agree a lot of "new" stuff is old.


New stuff: Machine learning that works. Rust's borrow checker. 3D SLAM that works. Voice input that works. Lots of image processing stuff. Machines with large numbers of non-shared-memory CPUs that are actually useful. Doing non-graphics things in GPUs.

The webcrap world is mostly churn, not improvement. Each "framework" puts developers on a treadmill keeping up with the changes. This provides steady employment for many people, but hasn't improved web sites much.

An incredible amount of effort seems to go into packaging, build, and container systems, yet most of them suck. They're complex because they contain so many parts, but what they do isn't that interesting.

Stuff we should have had by now but don't: a secure microkernel OS in wide use. Program verification that's usable by non-PhDs. An end to buffer overflows.


Old timer rant:

IMO Machine learning mostly doesn't work (yet) with a couple exceptions where tremendous amounts of energy and talent have made that happen. For example, image processing with conv nets is really cool, but the data sets have been "dogs all the way down" until very recently. And for the past few years, just getting new data and tuning AlexNet on a bunch more categories was an instant $30-$50M acqui-hire. Beyond a few categories, its output amuses and annoys me roughly equally.

But the real problem with ML algorithms IMO is that they cannot be deployed effectively as black boxes yet. The algorithms still require insanely finicky human tuning and parameter optimization to get a useful result out of any de novo data set. And such results frequently don't reproduce when the underlying code isn't given away on github. Finally, since the talent that can do that is literally worth more than its weight in gold in acqui-hire lucky bucks, it doesn't seem like there's a solution anytime soon.

Voice input? You gotta be kidding me. IMO it works just well enough to enter the uncanny valley level of deceiving the user into trusting it and then fails sufficiently often to trigger unending rage. Baidu's TypeTalk is a bit better than the godawful default Google Keyboard though so maybe there's hope.

GPUs? Yep, NVIDIA was a decade ahead of everyone by optimizing strong-scaling over weak-scaling (Sorry Intel, you suck here. AMD? Get in the ring, you'll do better than you think). Chance favored the prepared processor here when Deep Learning exploded. But now NVIDIA is betting the entire farm on it, and betting the entire farm on anything IMO is a bad idea. A $40B+ market is more than enough to summon a competent competitor into existence (But seriously Intel, you need an intervention at this point IMO).

Machines with lots of CPUs: Well, um, I really really wish they had better single-core CPU performance because that ties in with working with GPUs. Sadly, I've seen sub-$500 consumer CPUs destroy $5000+ Xeon CPUs as GPU managers because of this, sigh.

Container systems? Oh god make it stop. IMO they mostly (try to) solve a wacky dependency problem that should never have been allowed to exist in the first place.

The web: getting crappier and slower by the day. IMO because the frameworks are increasingly abstracting the underlying dataflow which just gets more and more inefficient. Also, down with autoplay anything. Just make it stop.


"The web: getting crappier and slower by the day. IMO because the frameworks are increasingly abstracting the underlying dataflow which just gets more and more inefficient. Also, down with autoplay anything. Just make it stop."

One of my favorite features now on my iPhone is "Reader View". Have a new iPhone 7, which is very fast, but some pages still take too long to load, and when it finally does, the content I want to read is obscured with something I have to click to go away, and then a good percentage of the screen is still taken up by headers and footers that don't go away. The Reader View loads faster, and generally has much better font and layout for actually reading the content I'm interested in.

All of which is to say, a lot of what web developers are working on today seems to serve no purpose other than to annoy people.


> This provides steady employment for many people, but hasn't improved web sites much.

Just a week ago I made the startling discovery that FB's mobile web app is actually worse than a lot of websites I used to visit at the end of the 90s - early 2000s on Netscape 4.

Case in point, their textarea thingie for when you're writing a message to someone: after each key press there is an actual, very discernible lag until said letter shows up in the textarea field. So much so that there are cases when I'd finished typing an entire word before it shows up on my mobile phone's screen. I suspect it's something related to the JS framework they're using (a plain HTML textarea field with no JS attached works just fine on other websites, like on HN); maybe they're doing an AJAX call after each key press (?!), I wouldn't know. Whatever it is, it makes their web messenger almost unusable. (If it matters, I'm using an iPhone 4.)


FB does auto-complete for names, groups, places, and so on. So for each char it does a callback to see if it should display the dropdown. Using a Swype-style keyboard is a bit nicer because you're only adding full words at a time.


AFAIK they log everything you type, even if you don't submit it. So maybe that has something to do with it?


Hasn't improved websites much? I remember the days of iframes and jQuery monstrosities masquerading as web "applications". The idea of a web-based office suite would have been laughable 20 years ago.

My guess is you haven't actually built a real web application. The progress we've made in 20 years is astounding.


Another way to look at it is that, even after 20 years and huge investment from serious companies, we can still only build poor substitutes for desktop applications.

Don't get me wrong, it is amazing progress given the technology you have to fight. But in absolute terms it's not that great.


This is my view exactly. In the past 10 years I've developed both a web app [1] and a cloud-based native app [2]. Developing the native app was by far the more enjoyable and productive experience.

The great thing about native development is the long-term stability of all the components. I have access to a broad range of good-looking UI components with simple layout mechanisms, a small but robust SQL database, a rock-solid IDE and build tools - all of which haven't changed much in the past decade. Plus super-fast performance and a great range of libraries.

To put it in terms of the article: the half-life of native desktop knowledge is much longer than 10 years. Almost everything I learnt about native programming 10 years ago is relevant now.

Unfortunately, the atrocious deployment situation for native apps is also unchanged in 10 years (ie. "This program may harm your computer - are you really sure you want to run it?"). But on the other hand having a native app has allowed me to implement features like "offline mode" and "end-to-end encryption" that would be difficult or impossible in a web app. This has given my business a distinct advantage over web-based alternatives.

[1] https://whiteboardfox.com

[2] https://www.solaraccounts.co.uk


I really am very glad to hear someone writing in public what I've been mentioning to colleagues and all who would listen for the past few years.


I've never built a web application of any description. As a user, what are the improvements that I should be looking for that have been introduced over the past 10 years? It was longer ago than that that AJAX started getting big, and as far as I can tell, that was really the last key innovation for most apps: Being able to asynchronously load data and modify the DOM when it becomes available. I'm aware of other things like video tags, webgl, canvas, and such that allow us to replace Flash for a security win, but that seems replacing a past technology just to get feature parity with what we had a decade ago.

Everything else seems like stuff that makes things better for the developers but not much visible benefit to the user. I can understand where a comment about the web not being much better would come from, on the scale of a decade.

Go back 20 years, and you're talking about a completely different world; frames, forms, webrings, and "Best viewed with IE 4.0". But if '96 to '06 was a series of monumental leaps, '06 to '16 looks like some tentative hops.


There's actually a fair number of new features in the web today that you couldn't do in 2006 - offline access, push notifications, real-time communications (without a hack that breaks on most firewalls), smooth transitions, background computation, the history API, OS clipboard support, accelerometer access, geolocation access, multi-touch, etc.

Few websites use them effectively yet, at least in a way that benefits the consumer (several are using them to benefit marketers). This could be because developers don't know about them, consumers don't care about them, or perhaps just not enough time has passed. XHR was introduced in 1999, after all, but it took until 2004 before anyone besides Microsoft noticed it.


My first install of Netscape Communicator had an offline mode, and one of my colleagues recently told me how they had fully working live cross-browser video conferencing in '98. Their biggest competitor was WebEx, which is still around.

I think many of us underestimate what was possible to do in browsers. What has happened is that these features have been democratised: what took them months to build I can now pull of using WebRTC in the space of a weekend.


My very first software internship ever was getting streaming video to work for the Air Force Center Daily at MITRE Corp, back in 1997. I did it with RealPlayer, Layers in Netscape (remember them?) and DHTML in IE.

The thing is - the polish matters. You can't do viable consumer apps until they actually work like the consumer wants, which is often decades after the technology preview. You could emulate websockets using IFRAMES, script tags, and long-polling back in the late 90s, but a.) you'd get cut off with the slightest network glitch and b.) you'd spend so much time setting up your transport layer that you go bankrupt before writing the app.


Thank you for the explanation. Some of those I've known about (but they didn't come to mind in my original comment), and some are definitely things that I would've taken for granted (coming from a native application programming background). Those are all features added to web standards and implemented in browsers though, right?


They're all in web standards. Browser support varies but is generally pretty good for most of them.

They're taken for granted in native application programming, yes, but the big advantage of browsers is the zero-cost, on-demand install. This is a bit less of an advantage than it was in 2003 (when new Windows & Flash security vulnerabilities were discovered almost every day, and nobody dared install software lest their computer be pwned), but there are still many applications where getting a user to install an app is a non-starter.


Most programs that existed before (let's be real... programs have been replaced with web apps now) defaulted to being offline only, or would sync. I miss those days.


> I miss those days.

Ditto. I like having a copy of a program that no one but me has access to modify, and I like that I don't have to rely on my ISP to use my computer. If I like a program, I don't want it to change until I choose to change it. I don't want to be A/B tested, marketed to, etc. I'd rather buy a license and be happy =)


> The progress we've made in 20 years is astounding.

And yet "open mail in new tab" in Gmail has been dead for at least a couple of years now. In fact, I'd say that "open link in new tab" is dead on most of the new web "applications", I'm actually surprised when it works. The same goes for the "go back with backspace" thingie, which Google just killed for no good reason.

Copy-paste is also starting to become a nuisance on lots of websites. Sometimes when I try to do it, a shitty pop-up shows up with "post this text you've just copied to FB/Twitter" or the app just redirects me somewhere else. It reminds me of the Flash-based websites from around 2002-2003, when they were all the rage.


> And yet "open mail in new tab" in Gmail has been dead for at least a couple of years now.

Use the basic HTML version. It's worse in a few ways but better in most others. Including speed.


'Back with backspace' was changed to CMD+left-arrow presumably because a simple backspace can change the page unexpectedly for someone who thinks they are in a text field.


'Back with backspace' has been standard in the Windows file manager for as long as I can remember. Given that the file manager was the moral precursor to the browser of today it would have been nice to retain it.

Back with backspace!


Actually, it didn't go back on Windows XP. After using Windows 7 for 4 years, I am still not used to this 'new' behavior.


I sit corrected, thank you.


I know the reasons, I just think they're stupid. They've replaced one simple key-press with two non-intuitive ones. On my keyboard I have to move both my hands down in order to press the 2 keys, the backspace key was very easy to reach without moving my hands.

On top of that I actually have no "CMD" key on my keyboard, I have a "Ctrl" key which I assume is the same as "CMD" (I also have a key with the Windows logo which I had assumed it was the CMD key, I was wrong). KISS has gone out of the window long ago.


Alt + left arrow still goes back on ff and Alt + right arrow goes forward. It's been that way for quite a while.

The Outlook web app, on the other hand, sometimes blocks backspace from deleting a character, presumably to stop you inadvertently jumping back from inside a text field. This is only "on" when inside a text field in the first place, so if MS could do it, I don't see why Google's better engineers couldn't.


I've lost so many posts from hitting backspace without realizing I didn't have the input box focused that I'm more than happy with this trade-off.


Browsers have improved greatly, and new web development frameworks are necessary to make use of those improvements, but the actual process of building usable web applications doesn't seem that improved. It's certainly not any easier to achieve pretty much the same results.

> The idea of a web-based office suite would have been laughable 20 years ago.

What's laughable is how much effort has gone into rebuilding something in this platform with a result that is nearly the same (but worse) as what existed 20 years ago.


What kills me about a lot of the technology we use today is that few people, at least in positions of power, are brave enough to pause every now and then and say, "WTF are we doing?" So many technologies live on because of so-called "critical mass," big investments, marketplace skills, and other things that are about anything other than the technology itself. These of course are mostly practical reasons and important ones at that, but at some point it becomes impractical to continue to cling to the practical reasons. IMO, the ability to do something truly different to make an advancement is often what separates innovators and intelligent people from the rest. When someone does try to act, the market essentially crushes them accordingly, making most efforts null and void, and dulling the senses and hope of everyone else watching, warning them not to try anything themselves.

x86 CPUs, web browsers, popular operating systems, and so on are all examples of this problem. At some point I really wish we could do something different, practical reasons be damned. It's sad that for as many cool, "new" things as we have, some of the core, basic ideas and goals are implemented so poorly and we are effectively stuck with them. This is one reason I hate that almost all software and hardware projects are so rushed, and that standards bodies are the opposite, but with only the bad things carried over. The cost of our bad decisions often weighs on us for much longer than anyone could imagine; just ask anyone who has designed a programming language or something else major in a software ecosystem.

As much as I enjoy all the new, shiny stuff, it makes me sad thinking about BBSs and old protocols like Gopher that represented the old guard and alternate routes, and the fact that we really haven't come that far. Overall things are of course a lot better, but in many ways I often feel like we're treating the symptoms and not the cause, or just going around in circles.

I could go on, but the rant would be novel length.


I find it super disappointing that Android, from an operating systems perspective, is so terrible. It's the newest popular operating system, but it's no better than Windows, iOS, or Linux. It's a mess of passable APIs, average security, etc. Rushed to market for, of course, practical reasons.

I don't see any opportunity in the future for any person or company to take all the lessons learned in the last 50 years and build something new that takes them into account.

Same with browsers: it's only now that we kinda know what a browser really needs to be, but there's no way to start from scratch with all those lessons and build a new kind of web browser. Browsers are always going to need to be what they currently are and build on what was already done.

I understand why, but it's still kind of sad.


20 years ago there was exactly one multiplatform office suite, and if you think StarOffice was better than the current incarnation of Google Apps I'm not sure how to respond to that outside of laughter.


I also kind of question whether or not a web-based office suite is truly multiplatform. For the most part, it doesn't interact with my desktop so it's really a single platform.

It's almost like saying Microsoft Word is cross-platform because I can RDP into a Windows machine from Linux. It's not really part of Linux, it needs a client to access an application running on a remote server. The only difference is how complex the client is.


Is the web browser the best technology to achieve "multiplatform" for an office suite? It makes sense from a purely practical standpoint, but technologically it's pretty terrible.


Practicality wins pretty handily here. Technology will always improve, and web-based applications will become more and more feasible as a result.

The flip side of that equation is that poor practical choices never improve because there will only be more platforms to target.

If we made development decisions based on technological constraints alone, how is it supposed to improve?


I wrote pivot table functionality in XSLT and XML for IE6, pretty much 10 years ago. It's not that you couldn't do it; it's that it wasn't worth it.

Your whole multiplatform thing is disingenuous because back then there really was only one platform: Windows. And you've conveniently forgotten about the Lotus suite, etc.

I also think you're vastly overestimating how far we've come in that timespan. V8 was pretty much 95% of the improvement, simply because you could do more than 1000 loop iterations in JavaScript without killing the browser.

And yet today it is still harder to make a decent web app than it was in VB6 15 years ago.


I call BS. The progress that was made in the late eighties/early nineties was much more astounding. Going from character screens to fully event-driven, windowed graphics mode was a much greater and more impressive change in a shorter time frame.

At that time you had to learn a lot of new stuff in a short time, too.


Well, in the early 2000s most of the computers in the world suddenly became connected to each other, and then after some time everybody got a powerful, networked computer in their pocket. I think that's pretty impressive too.


What exactly are you calling BS on? Nothing I said conflicts with your comments at all. It's not like progress can only happen once...


Have you tried to debug a Babelified React site with sourcemaps and whatever...?

I spent four hours just to find that the latest and greatest Express doesn't have the simple global site protection with a password (which it had in version 3), like with .htaccess - it is just not possible anymore. There were no elegant solutions.
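The closest I got was hand-rolling the middleware myself; a rough sketch of what you end up writing, assuming Express 4 and hard-coded credentials purely for illustration:

    // Rough stand-in for the old express.basicAuth() from 3.x.
    // Real code should load credentials from config and compare in constant time.
    const express = require('express');
    const app = express();

    app.use(function (req, res, next) {
      const header = req.headers.authorization || '';
      const token = header.split(' ')[1] || '';
      const decoded = Buffer.from(token, 'base64').toString();
      const user = decoded.split(':')[0];
      const pass = decoded.split(':').slice(1).join(':');
      if (user === 'admin' && pass === 'secret') return next();
      res.set('WWW-Authenticate', 'Basic realm="site"');
      res.status(401).send('Authentication required.');
    });

    app.get('/', function (req, res) { res.send('hello'); });
    app.listen(3000);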

There may be some marginal progress when doing complex stuff, but doing the simple stuff gets harder and harder with each passing year.

Here is a simple question: is making a working UI now easier than it was with MFC circa 1999? If the answer is no, then that progress is imaginary.

Every new thing is strongly opinionated, doesn't work, and relies on magic. Debugging is a nightmare, and we have layers upon layers of abstractions.

Please, for the love of Cthulhu - if any of you Googlers, Facebookers, or Twitterers read this - next time you start building the next big thing, let these three be your guiding lights: the code and flow must be easy to understand, it should be easy to debug, and it should be easy to pinpoint where in the code something happens. All of a framework's benefits become marginal at best if I have to spend four hours finding the exact event chain, context, and place in the framework that fires that Ajax request.

/rant over


Do the web-based office suites use these new libraries and frameworks (React, Angular, Webpack, etc.)? Genuine question, as I thought that companies like Google use their own in-house libraries like Google Closure, which have been slowly built up over many years.


Google Apps use Closure, and after taking a peek at the source of Microsoft's online office apps, they appear to be using Script# or a similar C# to JS tool, as I see lots of references to namespaces, and lots of calls to dispose() methods.

iCloud's office apps use Sproutcore, which eventually forked into Ember (though the original Sproutcore project still exists).


Probably not Google Apps. But Angular is a Google-developed framework.


In 2005, I worked on a CRUD app framework that used server-driven data binding and did minimal partial updates of the web page, in a way that was much more efficient in both resources and development time than anything currently mainstream. That was the second version of something that used XML data islands to do data fetching before XHR was introduced by Microsoft. IMO, most mainstream JS dev is just barely catching up with what smart devs were doing in individual shops.
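Translated into today's terms, the pattern looks roughly like this (the data-bind attribute and /api/changes endpoint are illustrative names, not from any real framework):

    // Server-driven binding with minimal partial updates: the server returns
    // only the fields that changed, keyed by binding name, and the client
    // patches just those DOM nodes instead of re-rendering the page.
    async function refresh() {
      const resp = await fetch('/api/changes');      // hypothetical endpoint
      const changes = await resp.json();             // e.g. { "order.total": "42.00" }
      for (const name of Object.keys(changes)) {
        const nodes = document.querySelectorAll('[data-bind="' + name + '"]');
        for (const el of nodes) {
          if (el.textContent !== changes[name]) el.textContent = changes[name];
        }
      }
    }
    setInterval(refresh, 2000);  // or trigger from user actions instead of polling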


   Program verification
We really tried. It's hard, and verification of even simple programs easily hits the worst cases in automation. It will take another decade before program verification is digestible by mainstream programmers.
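To make "worst cases" concrete, here's the sort of obligation automation chokes on even for a toy loop (plain JavaScript, with the verification conditions written as comments):

    // Pre:  n >= 0
    function sumBelow(n) {
      let i = 0, s = 0;
      // Loop invariant (the part a human or tool must invent):
      //   2*s === i*(i-1) && i <= n
      while (i < n) {
        s += i;
        i += 1;
      }
      // Post: 2*s === n*(n-1)
      return s;
    }
    // The invariant is nonlinear, and nonlinear integer arithmetic is exactly
    // where automated provers tend to hit their worst cases.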


Spot on about machine learning. That's something that never worked in all my life, but it seems like some of these young Turks might be onto something there... I should probably sit up and pay attention to that one.


I think the fact that we're really starting to make native apps using Web technologies speaks to the progress webapps have made.

The whole HTML rendering pipeline with advanced scripting support is really an innovation in itself. The downside is speed, but that's where we've innovated the most: VMs for JavaScript.

Hopefully WebAssembly will really show the improvements we've made.


I work on navigation software (http://project-osrm.org/).

Many of the algorithms we're implementing (or at least considering) only exist in recently published papers, or sit behind unpublished APIs. There have been huge improvements in graph route-finding algorithms in the last decade, so much of it is new and interesting, and it's far from run-of-the-mill implementation work.
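For anyone curious, the textbook baseline these newer techniques improve on is plain Dijkstra; a minimal JavaScript sketch (real engines layer heavy preprocessing, e.g. contraction hierarchies, on top of this to answer queries orders of magnitude faster):

    // Minimal Dijkstra over an adjacency list: graph[u] = [[v, weight], ...].
    // Naive O(V^2) minimum extraction; a binary heap is the usual next step.
    function dijkstra(graph, source) {
      const dist = new Map([[source, 0]]);
      const visited = new Set();
      while (true) {
        let u = null;
        for (const [node, d] of dist) {
          if (!visited.has(node) && (u === null || d < dist.get(u))) u = node;
        }
        if (u === null) break;
        visited.add(u);
        for (const [v, w] of (graph[u] || [])) {
          const alt = dist.get(u) + w;
          if (!dist.has(v) || alt < dist.get(v)) dist.set(v, alt);
        }
      }
      return dist;  // shortest distance from source to every reachable node
    }

    const g = { a: [['b', 1], ['c', 4]], b: [['c', 2]], c: [] };
    console.log(dijkstra(g, 'a'));  // Map { 'a' => 0, 'b' => 1, 'c' => 3 }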

I'm 38 - I spent the first many years of my career doing CRUD development, first in Perl (late '90s), then Java/PHP (2000s). I skipped the JS craze, and now I'm enjoying my work more than ever, improving my C++ skills (the last time I touched C++ was '98; modern C++14 is a huge improvement) and working on backend, specialized algorithm implementation. It's great!

Experience is the best teacher. Kids don't listen to their parents, new developers don't listen to the greybeards until it's too late. This is the way things are :-)


>> backend, specialized algorithm implementation

Sounds great - how does one go about finding that sort of work in the industry?


I went looking. After years of cruft work, I took some time off to pursue other interests. When I was ready to come back to software development, I took a good hard look at the things I actually enjoyed doing, then went looking for organizations that did that stuff. I was fortunate to have the breathing room to be deliberate in my search.


I dropped a few pins and I'm genuinely impressed by how fast it is.


A history of new things geared toward web app developers, starting with relevant popular technologies:

Late 1970s: microcomputers, explosion of BASIC and ASM development

Early 1980s: proliferation of modems, BBSs become big, CompuServe becomes big - people able to read news online and chat in real time (but not popular like much later). Software stores, software pirating, computer clubs, widespread use of Apple IIs in schools. Microsoft Flight Simulator, released in 1982, is the first super-popular 3D simulation software.

Mid-1980s: GUIs - the Macintosh (1984), based on ideas from Xerox PARC.

Late 1980s: graphics had more colors, more resolution, faster processors. So: cooler games. File servers. 1987: GIF format; 1989: GIF revision supporting animation, transparency, metadata - not that popularly used though; it was a CompuServe thing.

Early 1990s: Internet, realistic quality pictures, webpages/browsing, global file servers. Mosaic web browser. Most pages involved horizontal rule dividers that might be rainbow animated GIFs. Bulleted lists. Under construction GIFs were popular. Linux. JPEG format. Netscape. Blink tags.

Mid 1990s: Windows 95 (with Winsock). IE vs Netscape. IE had marquees. VBScript. (Mocha -> LiveScript ->) JavaScript. Applets. Shockwave. WebCrawler search. AltaVista search. OOP pretty solidly how you should program now, with C++ having been around for a while and Java slow but write-once/run-anywhere and OOP. Apache webserver. CGI: can email from a webpage.

Late 1990s: ActionScript. Google search. CSS. Extreme programming. Scrum. JSP. Some using ORM via TopLink. Java session vs. entity beans. IIS. Java multithreading. Amazon gets a patent for 1-click ordering. AOL Instant Messenger. PHP.

Early 2000s: ASP. .Net/C#. Hibernate ORM (free). Choosing between different Java container servers.

Mid 2000s: Use CSS not tables. Rails.

Late 2000s: SPA and automatic updating of content in the background via Ajax. Mobile apps. Mobile web. Scala. Cloud computing starts. VMs. Streaming video matures. Configuration management via Chef/Puppet.

Early 2010s: Cloud computing standard. Container virtualization. Video conferencing is normal - not just a big-company office thing. Orchestration of VMs more normal.

Mid 2010s: Container Quantum computing starts at a basic level (not important yet).

Note how I can't really think of anything recent that has to do with new things in webdev.


Déjà vu:

> Early 2010s: Cloud computing

1960s: Client/Server Architecture. Big servers and small clients.

> Mid 2010s: Quantum computing

before 1950s: Analog Computers

https://en.wikipedia.org/wiki/Analog_computer

There is nothing new under the sun. Analog computers passed away because they were not usable. OK, quantum computing may be different, but its practical use is also questionable.


> Déjà vu: [...] 1960s: Client/Server Architecture. Big servers and small clients.

This is right and wrong at the same time. Right, because the Cloud reuses some basic concepts from the mainframe era (e.g., virtualization) which had been neglected for some time. Wrong, because writing your application to run efficiently on a mainframe is totally different from writing your application to run efficiently on Cloud infrastructure. Also, there is no such thing as small clients anymore; mobile apps and Web frontends are nowadays as complex as the usual 1980s fat-client software.

IMHO this is a very good example of technology not going in circles, but evolving in spirals.


> there is no such thing as small clients anymore

This is also right and wrong :-) Right regarding your perception, wrong regarding relative power. 1960s clients were small compared to today's small clients. However, '60s servers were also small in relation to cloud servers. Today our small clients provide browsers and stuff like that, but they aren't useful without servers. They can't run top-notch 3D games without high-end servers. The third wave of C/S will be in the area of A.I., with (small) clients which will possibly be as powerful as today's cloud servers.


Trends:

Ajax, Long polling, WebSockets

jQuery/MooTools/Prototype, Bootstrap/CanMVC, Angular/React

Javascript debugging tools, profiling, 60fps, responsive pages, AMP

RSS, Web Push, WebRTC

HTTP Auth, Cookies, oAuth, new social protocols

Perl, Java, PHP, Node.js, Go


Thanks! You caught some I missed, so here are some edits and responses to your list:

1. I didn't mean to put "container" in front of quantum computing.

2. I didn't mention the history of certs or encryption, as I think that security is often a feeling rather than a reality. I'm not sure that the "HTTPS Everywhere" plugin and then movement in the early 2010s was innovation so much as it was tightening up security after Firesheep.

3. Yes, I should've included WebSockets over long polling in Early 2010s.

4. Yes, RSS mattered- 1999/Early 2000s.

5. I probably shouldn't have mentioned OOP, etc. as I didn't mean for methodology to matter, since it doesn't matter to users. Similarly debugging tools don't matter for innovations that users see.

6. Yes, fluid layout, grid layout, and responsive design in Late 2000s (though Audi had responsive in 2001).

7. jQuery/MooTools/Prototype, Bootstrap/CanMVC, Angular/React - none of the implementation details of these things matter. The only things that matter are how things appear to the user - like whether a page has a clunky refresh or a smooth transition, and whether things update automatically when they are changed elsewhere. Also, Applets, Flash, frames, and the move to JS all screwed the visually impaired.

8. Cookies mattered because they were used to track users in ways they didn't want to be tracked. People disabling JS for a while mattered. The US announcing that Java was insecure mattered. Flash, and Flash being abandoned, mattered.

9. Forgot to mention frames in Mid/Late 1990s.

10. As you mentioned oAuth: SSO became a big deal in the late 2000s with Facebook and Google.

And I should have mentioned blogging, microblogging, the move of much of the web to Facebook, Tor/the private web, peer-to-peer sharing and its impact on the music industry, as well as the impact on the value of well-created data and applications vs. the value of constantly creating data and making data available and clear.

Despite all of the things I missed, the point is that the things that really matter aren't new libraries and frameworks- they are technology and how the world uses it. If a user can't tell a positive difference between something you were doing 5 years ago and today, then you didn't really innovate.


Not exactly "new" but: Being able to spend ~$30 on a system powerful enough do everything faster than you could years ago for thousands of yesterday dollars. The feeling of not being limited by computing power for every thing is incredible. Vice versa, feeling like you aren't imaginative enough to utilize modern day power available, it's a great time to push your mind harder to take advantage of it.


Anything to do with computer-vision-controlled navigation is a good source of new challenges and new discoveries/inventions.


It's not long before you start thinking "f*ck the current new thing, what's the next new thing going to be? I'll try and spot that, then jump onto the bandwagon earlier."

The new devs are basically doing this by default. They are early adopters on the hype cycle and so leverage it for remuneration, since it disrupts supply every time.

Play the game old bean, it hasn't changed :)


MVC is from the 70s.



