When learning something new, I find that this group implemented $NEW_THING in a completely different way than that group implemented the exact same $NEW_THING. I have a harder time understanding how the project is organized than I do grokking $NEW_THING. And when I ask "why not $THAT_THING instead?" I get blank stares, and amazement that someone solved the problem a decade ago.
Sure, I've seen a few paradigm shifts, but I don't think I've seen anything Truly New in Quite Some Time. Lots of NIH; lots of not knowing the existing landscape.
All that said, I hope we find tools that work for people. Remix however you need to for your own edification. Share your work. Let others contribute. Maybe one day we'll stumble on some Holy Grail that helps us understand sooner, be more productive, and generally make the world a better place.
But nothing's gonna leave me behind until I'm dead.
Yet some things have barely changed. More than 25 years ago it was Solaris or HP-UX, using C, talking TCP/IP and mucking about with SQL. Some of us were sad about SVR4, as we still preferred the BSD view of things. Running Unix and 10 terminals on some 680x0, learning to get efficient in vi. Some preferred emacs. I was excited for the future, especially of hardware and the OS, as I'd seen it on the Amiga. I'd seen some of the crazy ideas being thought about, like we might have this mad thing called VR. That looks interesting. It's insanely expensive and made me feel a bit seasick, but maybe it'll be the cool fad of 1990. We'll see.
Fast forward to today. Oh. Well, there are a lot more switches, and security has changed some things, desktops have pushed ideas because they think they're phones, but I'd never have conceived how similar so much would be. BSD isn't forgotten; MySQL grew up and then some. Emacs and vi are still being discussed. Why aren't old programmers more popular?
I'm still interested in hip new things. I'm losing some enthusiasm though. There's an increasing number I haven't got to, as there's just too many of them. Too much is just fashion. So often I'm struck by how $NEW_THING gives something, but takes something or adds needless complexity, yet manages to be a variation on an aging theme. In frameworks the best $NEW_THING fashion changes hourly.
Leaves me rather disappointed, but I'm still looking. And hoping.
Cost, and a reduced willingness to work the death-march hours that many places expect new hires to work (and that new hires are willing to work) to "prove" their worth.
But that is only for the 'abstractions' they implement; the details are another thing.
I feel the only thing that really changed is that now big companies with PR/marketing departments are on the case, while before it was more a geek thing. There is nothing fundamentally different in new frameworks (heck, most of them are just rehashes of others that are many years old, with very small improvements), but suddenly the echo chamber makes them out to be vital to your career.
Like React. It's nice, but come on... Every frontend programmer I know is fretting that they are not into it enough, because you will die or something if you are not. That's good marketing by Facebook, to get such a solid drive behind it so quickly. But it's not needed for anything; you can still just use what you used before, often faster/better (because you are good at what you did for many years, right?), and you don't have to bother with learning the latest thing all the time. With React, because you are drinking the Kool-Aid, you have to update/refactor/redo stuff often because of changing libraries and new insights. It would stress you out if you feel you have to keep up with all of that.
Also, some tools seem to have been made just to look smart. Really, something like Webpack doesn't have to be that obfuscated. It really looks like it was made that way just to say: "ah, you just don't get it." I see people (20-somethings included) really sweating at their keyboards when trying to figure out more than the basics of that thing; so why are people using it? Why is it actually popular?
Similarly on Flux: I implemented an immutable store where only incoming messages change state and changes are distributed to widgets in 2005.
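A store of that kind can be sketched briefly. This is a hypothetical Python illustration of the pattern (immutable state, changes only via dispatched messages, new state pushed to subscribed widgets), not the parent's 2005 implementation; all names here are my own:

```python
from typing import Any, Callable, Dict, List

class Store:
    """A minimal Flux-like store: state is replaced, never mutated in place,
    and only dispatched messages may produce a new state."""

    def __init__(self, reducer: Callable[[Dict, Dict], Dict], initial: Dict):
        self._reducer = reducer
        self._state = initial
        self._listeners: List[Callable[[Dict], Any]] = []

    def subscribe(self, listener: Callable[[Dict], Any]) -> None:
        self._listeners.append(listener)

    def dispatch(self, message: Dict) -> None:
        # Build a brand-new state from the old one; widgets never write to it.
        self._state = self._reducer(self._state, message)
        for listener in self._listeners:
            listener(self._state)

    @property
    def state(self) -> Dict:
        return self._state

def reducer(state: Dict, message: Dict) -> Dict:
    if message["type"] == "increment":
        return {**state, "count": state["count"] + 1}
    return state

store = Store(reducer, {"count": 0})
store.subscribe(lambda s: print("widget sees:", s))
store.dispatch({"type": "increment"})  # widget sees: {'count': 1}
```

The point is the discipline, not the line count: widgets can only read state and send messages, so every state transition is traceable to one place.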
Around the same time we started using feature flags in our application.
We also did CI around 2000. Didn't call it that of course.
Never did it myself but heard of TDD long before it became a thing. Again under a different name.
Erlang. Superior technology that forms the heart of new kid on the block Elixir. Built long before the Bay Area CS grads re-invented ageism and claimed young people are "just smarter".
Sure some things are new, but there's just no respect for experience any more.
It's not all that innovative, it's just very easy to use, and very well put together.
IMHO, if Go had been invented by anybody other than Google, it wouldn't have enjoyed anywhere near the success that it did.
I'm usually extra suspicious of "hot" open source technologies with a marketing budget and/or a tech behemoth behind them. It's not that they can't be fundamental steps ahead; it's that the marketing can end up causing undeserved popularity.
And the point of Go is that it isn't trying to be "fundamental steps ahead". If anything, Go is an opinionated statement that programming was better in 1992 than it is today, so let's just fix the annoying shit from 1992 and pretend the word "enterprise" had continued to just mean the ship from Star Trek.
I think it's not just Google, but the cult of Rob Pike, Ken Thompson, etc. I appreciate their work on so many things, so I don't mean any ill will. I was a user of the Acme editor and Plan 9, but just like those projects, Go has many polish issues in critical areas. The package management alone is ridiculous. The common theme in all these projects is that for every good idea, there are an order of magnitude more critical mistakes or divisive ideas that cripple the end result in some way.
Go is alright, but it's not really that interesting to me or a monumental leap. There are a few ideas that others could cherry-pick and put into much better designed languages to take it all a step further in my opinion.
I'm a 50+ developer who has increasingly felt that "life is too short" to spend all my time trying to keep up with the latest fads.
That being said, I have found myself doing more "fun" programming in Go lately, precisely because, as you said, they hit the mark with how simple and clean the language is, with little of the baggage that comes along with a lot of languages and frameworks (I'm looking at you Rails and React) today.
write once, run anywhere
Many people focus on JSX or the virtual DOM and muse about how that just duplicates the browser: what's the point? But that isn't the point. It's componentization and encapsulation for large teams at scale, and for Facebook it works; it makes them more successful at pulling together disparate modules on a single page than their competitors, so there is some meat to this $NEWSHINY.
Now, if the browser itself had been properly designed years ago to be componentized and encapsulated, we'd have those features without inventing a new DOM, creating a new tagging language, or isolating CSS. There is a Web Components standard that offers much of the same thing... It came out around the same time as React, but still isn't widely used outside of Google's Polymer. Truth is, these were developed in parallel before they knew of each other.
Anyway, React makes a lot more sense if you think of it as rewriting the browser from outside the browser. In the old days that stack would have been hard to impossible; you would simply have started your own browser. And people would have shamed you for not following existing standards, so you would have petitioned the W3C or served on committees to bring your real-world cases for years, being mostly ignored.
Now, devs have the ability to change the browser quickly and find things that work (and also create chaos and suffering).
Meh. Kids. ;)
Every framework is about componentization and encapsulation.
You could take React out of your post and replace it with any framework name in the last 40 years and it would have made 'sense' at the time.
Like the author said, right now a veteran looks at React and sees the mistakes of 15 years ago, when we mixed code and presentation.
An insult to our Master, Satan would never create something so awful :P
> Every time I feel like I'm "out of touch" with the hip new thing, I take a weekend to look into it.
There's my problem right away. I can't just "take a weekend" to learn some new shiny thing. I have a partner and children who I want to be with at the weekend. And I'd rather go climbing or hiking, or even just go out on my bike, than learn another damn API. Twenty years ago I had evenings and weekends to burn. Now I don't.
If so many "modern" languages still copy features from Lisp (hello C++) then why not use the real thing?
All those nice "new" features which Python and C++ are praised for (lambdas, closures, list comprehensions) have been available for almost 50 years. Lisp was way ahead of its time; it just lacked the hardware power which we have today. It is still ahead of our time. Consider Lisp macros, Genera, and McCLIM. McCLIM is an interactive GUI toolkit which was neglected for decades; it is being revived right now to make it available for modern Lisp distributions. Modern Lisp just lacks one thing to be the real deal: a native Lisp Machine.
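Concretely, the three features named above look like this in Python today (a trivial illustration; the parent's point is that each was available to Lisp programmers decades earlier):

```python
# lambda: an anonymous function
square = lambda x: x * x

# closure: make_adder captures n in the returned function's environment
def make_adder(n):
    def add(x):
        return x + n
    return add

add5 = make_adder(5)

# list comprehension: build a list declaratively
squares = [square(i) for i in range(5)]

print(add5(3))   # 8
print(squares)   # [0, 1, 4, 9, 16]
```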
Because if it hasn't "succeeded" (for suitable definitions of "succeeded") in 50 years, then, as an old fart approaching 40 myself, my assumption is that there is a good reason. I was much more willing to believe the "everybody else is just stupid and can't see the obvious brilliance" explanation when I was younger, but as I've learned more, I've learned just how many factors go into a language being successful. Lisp may have all the features, but there are also good reasons why it has never really quite taken off.
That said, it is also true that using languages that bodge Lisp-like features on to the side after several decades of usage is generally inferior to languages that start with them. It's one of the reasons I'm still a big proponent of people making new languages even though we've got a lot of good ones. (I just want them to be new languages somehow; a lot of people just reskin an existing language with a slightly different syntax and then wonder why nobody uses it.)
But they've all been listed before, and I don't expect me listing them here will change anything, so I'll skip it here.
(I did a quick check on Google Trends; Clojure seems to not be growing. It's not shrinking, but it's not growing. That was Lisp's best chance for growth I've seen in a long time, and that window has probably now closed.)
Clojure has a different problem. It is based on the JVM infrastructure, which was (IMHO) an unfortunate decision. Access to Java features is nice, but dealing with Java stack traces is not fun. Also, the usual startup time of Clojure apps, in the range of seconds, is not acceptable (Tcl/Tk apps start in a fraction of a second). AFAIK Clojure is also not suitable for mobile app development. The Clojure developers should have used their own VM, or they should have provided a Clojure/Lisp compiler for native compilation. LuaJIT has demonstrated how incredibly fast a suitable VM can be.
Clojure is quite wisely based on the JVM because the idea was not to create a Lisp replacement or a new Lisp, but rather a practical Lisp. Some of the historical problems with Lisp and many other languages like Smalltalk were related to specialized hardware, development environments, and/or ecosystems. Clojure did away with most of these concerns by attaching itself to one of the largest existing environments.
Using the JVM was a wise decision that has/had many advantages not limited to:
- Ability to leverage an already huge ecosystem of libraries, tools, servers, etc. that are well-tested
- Already battle-tested and working package system - this is severely underestimated by some popular languages
- Justifying its existence as a tool alongside other JVM languages within an organization, not as a full replacement
- Existing, highly optimized JIT
- Reasonably fast, nearly for free
Originally there were some plans to expand more to other environments, for example the CLR, but the JVM got the most attention and in the end this was practical.
As for some of the real drawbacks:
- Clojure alienates Lisp zealots and people who could never grasp Lisp. IMO this is a stupid-person/psycho filter, so I don't see it as a drawback, but it's worth noting.
- Garbage. This is an issue of a lot of things on the JVM and Clojure is no exception. You can work around this somewhat with specific coding practices if you need to, but yeah, Clojure isn't going to be good if you can't fathom an eventual garbage collection cycle.
- No TCO, which is mainly a JVM issue if I remember right. This would be great, but it's a tradeoff and Clojure has negotiated this somewhat by providing what I feel is more readable code than when I worked in certain Lisps.
- Some legacy baggage in the standard lib, for example clojure.zip. Recently they've been more brutal about what can and cannot go into the standard libs. Every language suffers from this a bit, though.
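The TCO tradeoff above can be made concrete. Clojure's answer is the explicit `loop`/`recur` form; the same idea, expressed language-agnostically, is a trampoline. A hypothetical Python sketch (not Clojure's actual implementation):

```python
def trampoline(f, *args):
    """Repeatedly call f until it returns a non-callable result.
    Tail calls are expressed by returning a thunk instead of recursing,
    so the call stack never grows even without TCO in the runtime."""
    result = f(*args)
    while callable(result):
        result = result()
    return result

def countdown(n, acc=0):
    """Sum 1..n. A direct tail call here would blow the stack for
    large n on a platform without TCO; returning a thunk does not."""
    if n == 0:
        return acc
    return lambda: countdown(n - 1, acc + n)

print(trampoline(countdown, 100_000))  # 5000050000
```

The cost is syntactic noise at every tail call, which is roughly why Clojure makes the programmer write `recur` explicitly rather than hiding the limitation.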
Regarding developing its own VM, I think you again miss the point about practicality. If Clojure did this, it would have been even more years before its release. Moreover, comparing to Lua is a bad example as it is a very different language (yes, I used Lua professionally). Lua achieves a lot by really keeping it simple, and while there is merit to that, Lua leaves a ton to be desired which I won't get off-topic about here.
So Clojure could work better in its own VM, but then you'd lose the JVM ecosystem along with many other things. I personally would rather have the ability from the beginning to reach for a huge number of libraries than have nothing but what the language's authors provide, or crude things like calling back into C. There are many talks about all of this, many from Rich Hickey himself. I think you really missed the point of Clojure. I am more of the mindset that I am glad it exists and is not in a perpetual state of flux, so that I can use it today, get things done, and not have it relegated to some research language I could never justify in a workplace. And no, I am not a Clojure zealot; I use about a dozen languages in any given year depending on my projects and interests. There's a lot I prefer in Lisp over Clojure, but I see Clojure as taking some lessons from various Lisps rather than trying to be the one true Lisp.
My humble two cents to the Clojure team:
1) You should implement a cache mechanism ("save-image") for native code so that at least the annoying startup time of Clojure apps is gone. I wonder why Java doesn't support native caches to this day.
2) The weird Java stack trace problem could be solved by providing an individual stack tracer which stays close to the source code. I know that this is not possible for Java libraries, but at least Clojure stack traces should be presented in a more convenient manner.
I wouldn't call Clojure a Lisp; it's a Lisp-like language with some interesting ideas, but not really Lisp at all.
As to why Lisp has failed to take hold where other languages have succeeded, I'll use a G.K. Chesterton quote: 'Christianity has not been tried and found wanting; it has been found difficult and not tried.' It's not that Lisp itself is difficult; it's that Lisp is different from lesser languages, and people have difficulty learning something different from what they're used to. Lisp has not been tried and found wanting; it has been found different and not tried.
So … try it! You may be surprised.
"In 1972, a British scientist sounded the alarm that sugar – and not fat – was the greatest danger to our health. But his findings were ridiculed and his reputation ruined. How did the world’s top nutrition scientists get it so wrong for so long?"
Without advancing an argument for Lisp, I'd just note that it's possible - in any discipline - for something to be decades (or more) ahead of its time.
Of course, that wouldn't mean that everything old is avant-garde just because it said it was. But, if people are still or suddenly looking at it, and looking at it hard, I would at least take that as a signal to give it some due consideration. It's apparently fighting age, and winning to some degree.
I can't stand the mentality of old = bad or not popular = bad. Most of the time, like anything in life, things don't "win" for being the "best" technically or for the most merit. There are usually many factors at play, and marketing, misinformation, stupid people, timing, and more have a huge role. While some of these factors may be good reasons, for programming, many of them are rubbish. We go down the bad paths more than the good ones in computer science it seems.
There are countless technologies that were way ahead of their time and for various reasons didn't end up market leaders today. Lisp is one, Smalltalk is another. Even Object Databases and various permutations of "NoSQL" databases existed for a long time. CSP is yet another that is being "rediscovered" via Go, Clojure, and some other languages. The list goes on for hours. Whether it was/is hardware or many other reasons, these things didn't "succeed." Despite that, it doesn't make a technology or idea useless if it's a good fit for the task, nor does it make it worth ignoring, not learning, or improving.
If there's one thing I've noticed from my considerable years in software dev, it is that most people are wrong most of the time about most things. Look at science - the field has historically been full of naysayers, people who cling to the past for their own agendas, saboteurs, fools, politically or religiously motivated morons, and so on. If we just always went with what the masses say or for that matter, even the so-called "experts," we wouldn't have any scientific advancement at all. Looking back in science, we can also see that many people had discovered or nearly discovered quite a lot. Although somehow we eventually unearthed some of these discoveries, the advancements they created never came in their lifetime or even century or eon.
There's a lot wrong with modern Lisp today, but most of the foundations are solid and it gets pretty tiring of people using Lisp like a pejorative, especially if it's because they lack knowledge about it or do not understand it, or perhaps worse, have never used it.
The smaller and more focused feature set is probably one of the reasons it's popular. There are also ergonomic reasons, which have a big effect on whether people who aren't initially fully committed tune out or blow up early in their encounter with the language.
I wouldn't say Lisp "didn't stick"; it's been in continuous use for half a century. Writing Lisp code would be anything but "sudden".
Of course we can say the same thing about COBOL, but that seems mostly due to the inertia of legacy applications; its target demographic of business applications now favours languages like Java.
On the other hand, new projects are being written in Lisps, and it's still spawning new languages (e.g. Common Lisp, Clojure, Kernel, all the Schemes and their descendants, etc.). This seems to indicate that people want to use/extend Lisp, rather than having to use it (although there are certainly legacy Lisp applications out there, which may or may not be horrendous to maintain).
Also, as Alan Kay points out, Lisps frequently "eat their children": someone invents a "better language", then someone else figures out a way to do the same thing as a library in an existing Lisp. This means that very old dialects/implementations may be perfectly capable of using paradigms/fads/etc. which were only invented/popularised much later, e.g. CLOS for OOP, or call/cc, shift/reset, etc. for coroutines/async/promises/etc.
In contrast, those languages which truly "didn't stick" are seldom heard of outside the "inspired by" sections on Wikipedia. Many of them are uninteresting, such as machine- or company-specific dialects, which died along with their hardware/sponsor. Others can be very informative, especially regarding "flavours of the month" and "paradigm shifts":
- Simula (arguably the origin of OOP as found in C++, Java, C#, etc.)
- ALGOL (the archetype of lexically-scoped procedural languages, like Pascal, C, Go, etc.). Actually, I still see "idealised Algol" discussed in cutting-edge programming language research, so maybe it still has some life!
- SNOBOL, which enjoyed great success in the world of string manipulation, and now seems to be completely replaced by awk/sed/perl/tcl/etc.
- MUMPS, which integrated a key/value database into the language. Still used, but seems to be for legacy reasons like COBOL (e.g. see http://thedailywtf.com/articles/A_Case_of_the_MUMPS )
- Refal, which relies on pattern-matching for evaluation (now widespread in the MLs (Standard ML, OCaml, F#, Coq, etc.) and Haskell-likes (Haskell, Clean, Miranda, Curry, Agda, Idris, etc.)). Also notable for using supercompilation.
- ABC, the prototypical 'scripting language', and a direct influence on Python.
- FP, which emphasised higher-order programming.
- Joy, a pure functional language based on stacks and composition, rather than lambda calculus.
- REXX, widely used as a "glue" language; I think AppleScript has comparable use-cases these days (I don't know, I've never used an Apple OS). Seems to be supplanted by shells and embedded scripting languages (e.g. Python, Lua, JS)
- Self, famous for prototypical inheritance and the origin of the "morphic" GUI.
- Dylan, effectively a Lisp without the s-expressions. Created by Apple, but quickly abandoned.
- Prolog, a logic language based on unification. Still has some users, but didn't take over the world as some thought it would (e.g. the whole "fifth generation" hype in the 80s).
More about Prolog: Prolog, like Refal, had pattern-matching long before the modern functional languages. It's an extremely useful feature. Prolog is also the best database query language ever invented, which is why systems like Datalog and Datomic borrow from it heavily.
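Unification, the mechanism underneath Prolog's pattern matching, fits in a few dozen lines. A toy Python sketch (real Prologs add an occurs check, backtracking, and clause indexing; `Var`, `walk`, and `unify` are my own names for this illustration):

```python
class Var:
    """A logic variable, as in Prolog's X, Y, ..."""
    def __init__(self, name):
        self.name = name
    def __repr__(self):
        return self.name

def walk(term, subst):
    # Follow variable bindings until we reach a non-variable or an unbound var.
    while isinstance(term, Var) and term in subst:
        term = subst[term]
    return term

def unify(a, b, subst=None):
    """Return a substitution making a and b equal, or None on failure.
    Compound terms are modeled as tuples, e.g. ("parent", X, "bob")."""
    subst = dict(subst or {})
    a, b = walk(a, subst), walk(b, subst)
    if a is b or a == b:
        return subst
    if isinstance(a, Var):
        subst[a] = b
        return subst
    if isinstance(b, Var):
        subst[b] = a
        return subst
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None

X, Y = Var("X"), Var("Y")
# Unifying parent(X, bob) with parent(alice, Y) binds X=alice, Y=bob,
# just as querying a Prolog fact base would.
print(unify(("parent", X, "bob"), ("parent", "alice", Y)))
```

Run both ways, this single operation gives you matching, construction, and query answering at once, which is part of why Datalog-style query languages borrow it.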
As I've said before on this subject, if you're careful what shiny new thing you bother to learn, and restrict yourself to doing it every couple of years rather than every weekend, I find the family and kids will never notice.
And anyway, at our age time flies so damn fast the kids'll be grown up by next weekend anyway :)
> Your use of the word "burn" concerns me. I'd prefer the word "invest".
I agree, but... sometimes it really is "burn". I don't keep up with front-end tech these days. If/when I need to work with it I'll learn it, and not before. If I spent time getting up to a real working knowledge now, there's every chance that in, say, 4 years when I needed to work in that space that I'd have to start learning from scratch again rather than brush up.
I do spend time learning new (to me) things, but I'm very selective. I devote most of my time to learning things that will give me alternate ways of doing things, rather than learning every nut and bolt of the latest framework.
By time to "burn" I was referring to all the time I had in my twenties: when spending an evening or a day or a weekend doing nothing in particular didn't seem like much of a waste.
Whereas now, "me-time" comes in much smaller units (maybe an hour after the kids are in bed and before I get tired). And I'm very careful to invest it: I recently spent eight months studying UX on Coursera, and am getting back into electronics and Arduino after a break, for example. I just don't have big blocks of time any more. Certainly not whole weekends.
To be fair, I don't have kids. I'm sure that's a huge factor.
Sometimes, my boss asks me about a new fancy tech. "I'll look into it" means I'll take a few hours of my time to give a good appraisal of it.
However, I always find things that are genuinely new and interesting to learn and work on. A lot of these things are not new at all, but they are new to me: statistics, linear algebra, machine learning, compiler construction, PL research, model verification, graph algorithms, calculus, engineering modeling, vectorized programming, GPU programming, geometrical computation, and it just goes on and on. In each of those you will have the fads of the day, the current hot framework, the second current hot framework, the old framework that works better. At the end of the day, I get the textbook, look up university courses on YouTube, pick whatever framework shows up first in Google, and spend time on the fundamentals. As a crude example, I may have to look up how to do a dot product in numpy/MATLAB/Mathematica/C++/R every second day, and when I learn something most of my programming is SO-driven. But I can also perfectly well write a dot product in Clojure/Factor/Elixir/ARM assembly if you asked me to, and then a vectorized dot product in CUDA/Neon SIMD/VHDL, because I spent time on the fundamentals. The best thing that happens is when you start to see how one technique appears in so many different fields (for example SVD).
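To make the dot-product example concrete: the fundamental definition is tiny, and it is the same shape you later rewrite in NumPy, CUDA, or assembly. Plain Python here so the sketch stands alone:

```python
def dot(a, b):
    """The fundamental definition: the sum of elementwise products."""
    assert len(a) == len(b)
    return sum(x * y for x, y in zip(a, b))

# The same operation as an explicit loop, the shape you'd write in C
# or unroll in assembly; in NumPy it would be the one call np.dot(a, b).
def dot_loop(a, b):
    total = 0
    for i in range(len(a)):
        total += a[i] * b[i]
    return total

print(dot([1, 2, 3], [4, 5, 6]))  # 32
```

Once the definition is internalized, the per-framework syntax really is just a lookup away.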
Nothing is new, but most of it is new to me.
After that I do spend a significant amount of time researching my tools (IDE, supporting apps, build systems, frameworks, compilers, programming languages), but that's the craftsmanship part of it, and it's kind of like doing the dishes and going to the farmer's market so as to have a nice kitchen to cook in and great ingredients to cook with.
Because the profession is so heavily skewed towards the young and self-taught, people don't seem to know about the solutions of a decade ago and their merits and demerits. This is partly why software componentisation as "parts catalog" has never really taken off. It's easier to reimplement or independently reinvent something than it is to find the right existing solution and learn it.
It's as if it were easier to turn every bolt you needed on a lathe than to go to the shop for them.
(The closest might be npm, but then we see what an engineer would call an excessively large bill-of-materials, as trivial projects pull in thousands of dependencies)
Shall I list the componentization tech of yore? CORBA, OpenDoc, JavaServer Faces, COM, DCOM (oh god, CORBA again!), SGML (the original component framework), the XML zoo (oh god, CORBA a third time)... the JSON zoo (a flippin' fourth time, are you kidding?!)
Componentization is something we're still figuring out. Alan Kay and Smalltalk came the closest to it (see the previous comment about practicality, though), and the mainstream is only now starting to think of JS and Ruby as "novel". NPM? Please.
We have a long way to go before componentization actually works. So yes, I guess I agree that it's simply easier to reinvent things to a specific context than solve the problem of sharing code context-free.
I'll join you in a glass of "oh god, CORBA!" At least one good thing about the web is that people have given up hoping that RPC could be transparent.
I'd also note that despite how unfashionable it is and was, COM was a remarkably successful component framework. A lot of Windows apps use COM heavily and not because they were required to do so - they componentized themselves using COM because they wanted to and it delivered real value to them.
In addition, I'm not sure how you are defining "component", but the term is rather similar to library, and modern apps frequently pull in enormous quantities of libraries. It worked out OK, actually.
Well, that and the fact that OO turned out to not be a very good mechanism for building the parts catalog on. In the end I'd judge it as only slightly more successful than procedural programming on that front.
For instance, Haskell's "parts catalog" is somewhat smaller than other languages'. But the parts do what they say they will, and generally go together pretty well once the community is done chewing on them and fixing them up. (Here I mean fundamental tools like parsers or text template systems, not merely "libraries to access this API" or "bindings to this particular library".) All those restrictions that go into Haskell are there for a reason.
There is an eternal iteration between reading disgusting (or nonexistent) documentation, finding the hidden shortcomings of existing solutions, and just doing the only "open source" that is accepted in every company: rewrite it yourself.
The eternal war between "avoid reinventing the wheel" and ahistorical "not invented here".
It's hard to avoid reinventing the wheel if all you know is what was invented 'here' and 'recently'.
The webcrap world is mostly churn, not improvement. Each "framework" puts developers on a treadmill keeping up with the changes. This provides steady employment for many people, but hasn't improved web sites much.
An incredible amount of effort seems to go into packaging, build, and container systems, yet most of them suck. They're complex because they contain so many parts, but what they do isn't that interesting.
Stuff we should have had by now but don't: a secure microkernel OS in wide use. Program verification that's usable by non-PhDs. An end to buffer overflows.
IMO, machine learning mostly doesn't work (yet), with a couple of exceptions where tremendous amounts of energy and talent have made it happen. For example, image processing with conv nets is really cool, but the data sets were "dogs all the way down" until very recently. And for the past few years, just getting new data and tuning AlexNet on a bunch more categories was an instant $30-$50M acqui-hire. Beyond a few categories, its output amuses and annoys me roughly equally.
But the real problem with ML algorithms, IMO, is that they cannot yet be deployed effectively as black boxes. The algorithms still require insanely finicky human tuning and parameter optimization to get a useful result out of any de novo data set. And such results frequently don't reproduce when the underlying code isn't given away on GitHub. Finally, since the talent that can do that is literally worth more than its weight in gold in acqui-hire lucky bucks, it doesn't seem like there's a solution anytime soon.
Voice input? You gotta be kidding me. IMO it works just well enough to enter the uncanny valley level of deceiving the user into trusting it and then fails sufficiently often to trigger unending rage. Baidu's TypeTalk is a bit better than the godawful default Google Keyboard though so maybe there's hope.
GPUs? Yep, NVIDIA was a decade ahead of everyone by optimizing strong-scaling over weak-scaling (Sorry Intel, you suck here. AMD? Get in the ring, you'll do better than you think). Chance favored the prepared processor here when Deep Learning exploded. But now NVIDIA is betting the entire farm on it, and betting the entire farm on anything IMO is a bad idea. A $40B+ market is more than enough to summon a competent competitor into existence (But seriously Intel, you need an intervention at this point IMO).
Machines with lots of CPUs: Well, um, I really really wish they had better single-core CPU performance because that ties in with working with GPUs. Sadly, I've seen sub-$500 consumer CPUs destroy $5000+ Xeon CPUs as GPU managers because of this, sigh.
Container systems? Oh god make it stop. IMO they mostly (try to) solve a wacky dependency problem that should never have been allowed to exist in the first place.
The web: getting crappier and slower by the day. IMO because the frameworks are increasingly abstracting the underlying dataflow which just gets more and more inefficient. Also, down with autoplay anything. Just make it stop.
One of my favorite features now on my iPhone is "Reader View". Have a new iPhone 7, which is very fast, but some pages still take too long to load, and when it finally does, the content I want to read is obscured with something I have to click to go away, and then a good percentage of the screen is still taken up by headers and footers that don't go away. The Reader View loads faster, and generally has much better font and layout for actually reading the content I'm interested in.
All of which is to say: a lot of what web developers are working on today seems to serve no purpose other than to annoy people.
Just a week ago I made the startling discovery that FB's mobile web app is actually worse than a lot of websites I used to visit in the late '90s and early 2000s on Netscape 4.
Case in point, their textarea thingie for when you're writing a message to someone: after each keypress there is an actual, very discernible lag before the letter shows up in the textarea. So much so that there are cases where I've finished typing an entire word before it shows up on my phone's screen. I suspect it's something related to the JS framework they're using (a plain HTML textarea with no JS attached works just fine on other websites, like HN); maybe they're doing an AJAX call after each keypress (?!), I wouldn't know. Whatever it is, it makes their web messenger almost unusable. (If it matters, I'm on an iPhone 4.)
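If a framework really is doing expensive work (or a network call) on every keystroke, the textbook mitigation is to debounce it so the handler only fires once the user pauses. A minimal sketch in plain JS; `sendDraft` and the 300ms delay are hypothetical, standing in for whatever per-keypress work is actually happening:

```javascript
// Debounce: collapse a burst of calls into one call after `delay` ms of quiet.
function debounce(fn, delay) {
  let timer = null;
  return function (...args) {
    clearTimeout(timer);                          // cancel the pending call
    timer = setTimeout(() => fn(...args), delay); // reschedule after the pause
  };
}

// Hypothetical browser wiring: sync a message draft without paying per keypress.
// textarea.addEventListener('input', debounce(() => sendDraft(textarea.value), 300));
```

The keystroke itself still has to render synchronously, of course; debouncing only helps if the lag comes from work attached to the input event rather than from the rendering path itself.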
My guess is you haven't actually built a real web application. The progress we've made in 20 years is astounding.
Don't get me wrong, it is amazing progress given the technology you have to fight. But in absolute terms it's not that great.
The great thing about native development is the long-term stability of all the components. I have access to a broad range of good-looking UI components with simple layout mechanisms, a small but robust SQL database, a rock-solid IDE and build tools - all of which haven't changed much in the past decade. Plus super-fast performance and a great range of libraries.
To put it in terms of the article: the half-life of native desktop knowledge is much longer than 10 years. Almost everything I learnt about native programming 10 years ago is relevant now.
Unfortunately, the atrocious deployment situation for native apps is also unchanged in 10 years (i.e. "This program may harm your computer - are you really sure you want to run it?"). But on the other hand, having a native app has allowed me to implement features like offline mode and end-to-end encryption that would be difficult or impossible in a web app. This has given my business a distinct advantage over web-based alternatives.
Everything else seems like stuff that makes things better for the developers but not much visible benefit to the user. I can understand where a comment about the web not being much better would come from, on the scale of a decade.
Go back 20 years, and you're talking about a completely different world; frames, forms, webrings, and "Best viewed with IE 4.0". But if '96 to '06 was a series of monumental leaps, '06 to '16 looks like some tentative hops.
Few websites use them effectively yet, at least in a way that benefits the consumer (several are using them to benefit marketers). This could be because developers don't know about them, consumers don't care about them, or perhaps just not enough time has passed. XHR was introduced in 1999, after all, but it took until 2004 before anyone besides Microsoft noticed it.
I think many of us underestimate what was already possible to do in browsers. What has happened is that these features have been democratised: what once took months to build, I can now pull off with WebRTC in the space of a weekend.
The thing is - the polish matters. You can't do viable consumer apps until they actually work like the consumer wants, which is often decades after the technology preview. You could emulate websockets using IFRAMES, script tags, and long-polling back in the late 90s, but a.) you'd get cut off with the slightest network glitch and b.) you'd spend so much time setting up your transport layer that you go bankrupt before writing the app.
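For readers who never had to do it, the long-polling pattern being described can be sketched in a few lines of modern JS. Everything here is hypothetical scaffolding: `fetchNextEvent` stands in for an HTTP request the server holds open until it has data, and the retry loop is exactly the hand-rolled transport-layer babysitting the comment complains about:

```javascript
// Long polling: repeatedly issue a request the server holds open, and treat
// each response as a pushed event. Reconnection is entirely our problem.
async function longPoll(fetchNextEvent, onEvent, shouldStop) {
  while (!shouldStop()) {
    try {
      const event = await fetchNextEvent(); // server blocks until data or timeout
      if (event != null) onEvent(event);    // null/undefined = empty timeout, re-poll
    } catch (err) {
      // the "slightest network glitch" case: back off briefly, then reconnect
      await new Promise((resolve) => setTimeout(resolve, 1000));
    }
  }
}
```

With WebSockets the browser owns this loop; tellingly, reconnect-on-glitch logic still tends to end up back in userland libraries anyway.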
They're taken for granted in native application programming, yes, but the big advantage of browsers is the zero-cost, on-demand install. This is a bit less of an advantage than it was in 2003 (when new Windows and Flash security vulnerabilities were discovered almost every day, and nobody dared install software lest their computer be pwned), but there are still many applications where getting a user to install an app is a non-starter.
Ditto. I like having a copy of a program that no one but me has access to modify, and I like that I don't have to rely on my ISP to use my computer. If I like a program, I don't want it to change until I choose to change it. I don't want to be A/B tested, marketed to, etc. I'd rather buy a license and be happy =)
And yet "open mail in new tab" in Gmail has been dead for at least a couple of years now. In fact, I'd say that "open link in new tab" is dead on most of the new web "applications", I'm actually surprised when it works. The same goes for the "go back with backspace" thingie, which Google just killed for no good reason.
Copy-paste is also starting to become a nuisance on lots of websites. Sometimes when I try to do it, a shitty pop-up shows up saying "post this text you've just copied to FB/Twitter", or the app just redirects me somewhere else. It reminds me of the Flash-based websites from around 2002-2003, when they were all the rage.
Use the basic HTML version. It's worse in a few ways but better in most others. Including speed.
Back with backspace!
On top of that, I actually have no "CMD" key on my keyboard; I have a "Ctrl" key, which I assume is the same as "CMD" (I also have a key with the Windows logo which I had assumed was the CMD key; I was wrong). KISS went out the window long ago.
The Outlook web app, on the other hand, sometimes blocks backspace from navigating away, presumably to stop you inadvertently jumping back while inside a text field. This is only "on" when you're inside a text field in the first place, so if MS could do it, I don't see why Google's better engineers couldn't.
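The behaviour described above is only a few lines of JavaScript. A hypothetical sketch: a predicate deciding whether focus is in an editable element, plus a keydown handler (browser-only, shown commented out) that cancels Backspace-as-navigation everywhere else:

```javascript
// True if the focused element should receive Backspace as text editing.
function isEditable(el) {
  const editableTags = ['INPUT', 'TEXTAREA', 'SELECT'];
  return editableTags.includes(el.tagName) || el.isContentEditable === true;
}

// Browser wiring (uncomment in a page): swallow Backspace navigation
// whenever focus is NOT in an editable element.
// document.addEventListener('keydown', (e) => {
//   if (e.key === 'Backspace' && !isEditable(e.target)) e.preventDefault();
// });
```

A production version would also need to handle read-only and disabled inputs, but the core check really is this small.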
> The idea of a web-based office suite would have been laughable 20 years ago.
What's laughable is how much effort has gone into rebuilding something in this platform with a result that is nearly the same (but worse) as what existed 20 years ago.
x86 CPUs, web browsers, popular operating systems, and so on are all examples of this problem. At some point I really wish we could do something different, practical reasons be damned. It's sad that for as many cool "new" things as we have, some of the core, basic ideas and goals are implemented so poorly, and we are effectively stuck with them. This is one reason I hate that almost all software and hardware projects are so rushed, and that standards bodies are the opposite, with only the bad parts of both carried over. The cost of our bad decisions often weighs on us for far longer than anyone could imagine; just ask anyone who has designed a programming language or anything major in a software ecosystem.
As much as I enjoy all the new, shiny stuff, it makes me sad thinking about BBSs and old protocols like Gopher that represented the old guard and the alternate routes, and the fact that we really haven't come that far. Overall things are of course a lot better, but in many ways I often feel like we're treating the symptoms and not the cause, or just going around in circles.
I could go on, but the rant would be novel length.
I don't see any opportunity in the future for any person or company to take all the lessons learned in the last 50 years and build something new that takes it into account.
Same with browsers: it's only now that we kind of know what a browser really needs to be, but there's no way to start from scratch with all those lessons and build a new kind of web browser. Browsers are always going to need to be what they currently are, building on what was already done.
I understand why, but it's still kind of sad.
It's almost like saying Microsoft Word is cross-platform because I can RDP into a Windows machine from Linux. It's not really part of Linux, it needs a client to access an application running on a remote server. The only difference is how complex the client is.
The flip side of that equation is that poor practical choices never improve because there will only be more platforms to target.
If we made development decisions based on technological constraints alone, how is it supposed to improve?
Your whole multiplatform thing is disingenuous, because back then there really was only one platform: Windows. So you've conveniently forgotten about the Lotus suite, etc.
And yet today it is still harder to make a decent web app than it was in VB6 15 years ago.
At that time you had to learn a lot of new stuff in a short time too.
I spent four hours just to find that the latest and greatest Express doesn't have the simple global site protection with a password that it had in version 3 (like with .htaccess); it's just not possible anymore. There was no elegant solution.
There may be some marginal progress on the complex stuff, but doing the simple stuff gets harder with each passing year.
Here is a simple question: is making a working UI easier now than it was with MFC circa 1999? If the answer is no, then that progress is imaginary.
Every new thing is strongly opinionated, doesn't work, and relies on magic. Debugging is a nightmare, and we have layers upon layers of abstractions.
Please, for the love of Cthulhu: if any of you Googlers, Facebookers, or Twitterers read this, next time you start building the next big thing, let these three be your guiding lights: the code and flow must be easy to understand, it must be easy to debug, and it must be easy to pinpoint where in the code something happens. All of a framework's benefits become marginal at best if I have to spend four hours finding the exact event chain, context, and place in the framework that fires that AJAX request.
iCloud's office apps use Sproutcore, which eventually forked into Ember (though the original Sproutcore project still exists).
Hopefully Web Assembly will really show the improvement we've made.
Many of the algorithms we're implementing (or at least considering) only exist in recently published papers, or sit behind unpublished APIs. There have been huge improvements in graph route-finding algorithms in the last decade, so much of it is new, interesting and it's far from run-of-the-mill implementation.
I'm 38 - I spent the first many years of my career doing CRUD development, first in Perl (late 90's), then Java/PHP (2000's). I skipped the JS craze, and now I'm enjoying my work more than ever improving my C++ skills (last time I touched C++ was 98, modern C++14 is a huge improvement) and working on backend, specialized algorithm implementation. It's great!
Experience is the best teacher. Kids don't listen to their parents, new developers don't listen to the greybeards until it's too late. This is the way things are :-)
Sounds great - how does one go about finding that sort of work in the industry?
Late 1970s: microcomputers, explosion of BASIC and ASM development
Early 1980s: proliferation of modems, BBS's become big, Compuserve becomes big- people able to read news online and chat in real-time (but not popular like much later). software stores, software pirating, computer clubs, widespread use of Apple II's in schools. Microsoft Flight Simulator released in 1982 is first super-popular 3D simulation software.
Mid-1980s: GUIs- Macintosh 1984 based on ideas from Xerox PARC.
Late 1980s: Graphics got more colors, higher resolution, faster processors. So: cooler games. File servers. 1987 GIF format; 1989 GIF format supporting animation, transparency, metadata (not that popularly used though - it was a CompuServe thing).
Early 1990s: Internet, realistic quality pictures, webpages/browsing, global file servers. Mosaic web browser. Most pages involved horizontal rule dividers that might be rainbow animated GIFs. Bulleted lists. Under construction GIFs were popular. Linux. JPEG format. Netscape. Blink tags.
Late 1990s: ActionScript. Google search. CSS. Extreme programming. Scrum. JSP. Some using ORM via TopLink. Java session vs. entity beans. IIS. Java multithreading. Amazon gets patent for 1-click ordering. AOL Instant Messenger. PHP.
Early 2000s: ASP.NET/C#. Hibernate ORM (free). Choosing between different Java container servers.
Mid 2000s: Use CSS not tables. Rails.
Late 2000s: SPA and automatic updating of content in background via Ajax. Mobile apps. Mobile web. Scala. Cloud computing start. VMs. Streaming video mature. Configuration management via Chef/Puppet.
Early 2010s: Cloud computing standard. Container virtualization. Video conferencing is normal- not just big company office thing. Orchestration of VMs more normal.
Mid 2010s: Container Quantum computing starts at a basic level (not important yet).
Note how I can't really think of anything recent that has to do with new things in webdev.
> Early 2010s: Cloud computing
1960s: Client/Server Architecture. Big servers and small clients.
> Mid 2010s: Quantum computing
before 1950s: Analog Computers
There is nothing new under the sun. Analog computers died out because they weren't usable. OK, quantum computing may be different, but its practical use is also questionable.
This is right and wrong at the same time. Right, because the Cloud reuses some basic concepts from the mainframe era (e.g., virtualization), which had been neglected for some time. Wrong, because writing your application to run efficiently on a mainframe is totally different from writing your application to run efficiently on Cloud infrastructure. Also, there is no thing such as small clients anymore, mobile apps and Web frontends are nowadays as complex as the usual 1980s fat-client software.
IMHO this is a very good example for technology not making circles, but evolving in spirals.
This is also right and wrong :-) Right regarding your perception, wrong regarding relative power. 1960s clients were small compared to today's small clients, but 60s servers were also small relative to cloud servers. Today our small clients provide browsers and the like, but they aren't useful without servers; they can't run top-notch 3D games without high-end servers. The third wave of client/server will be in A.I., with (small) clients possibly as powerful as today's cloud servers.
Ajax, Long polling, WebSockets
jQuery/MooTools/Prototype, Bootstrap/CanMVC, Angular/React
RSS, Web Push, WebRTC
HTTP Auth, Cookies, oAuth, new social protocols
Perl, Java, PHP, Node.js, Go
1. I didn't mean to put "container" in front of quantum computing.
2. I didn't mention the history of certs or encryption, as I think that security is often a feeling rather than a reality. I'm not sure the "HTTPS Everywhere" plugin and movement in the early 2010s was innovation so much as a tightening up of security after Firesheep.
3. Yes, I should've included WebSockets over long polling in Early 2010s.
4. Yes, RSS mattered- 1999/Early 2000s.
5. I probably shouldn't have mentioned OOP, etc. as I didn't mean for methodology to matter, since it doesn't matter to users. Similarly debugging tools don't matter for innovations that users see.
6. Yes, fluid layout, grid layout, and responsive design in Late 2000s (though Audi had responsive in 2001).
7. jQuery/MooTools/Prototype, Bootstrap/CanMVC, Angular/React - none of the implementation details of these things matter. The only things that matter are how things appear to the user- like whether a page has a clunky refresh or smooth transition and whether things update automatically when they are changed elsewhere. Also, Applets, Flash, frames, and the move to JS all screwed the visually impaired.
8. Cookies mattered because they were used to track users in ways they didn't want to be tracked. People disabling JS for a while mattered. The US government announcing Java was insecure mattered. Flash, and Flash being abandoned, mattered.
9. Forgot to mention frames in Mid/Late 1990s.
10. As you mentioned oAuth, SSO becoming a big deal in the Late 2000s with Facebook, Google.
And I should have mentioned blogging, microblogging, move of much of the web to Facebook, Tor/private web, peer sharing and impact on music industry as well as impact on the value of well-created data and applications vs. the value of constantly creating data and making data available and clear.
Despite all of the things I missed, the point is that the things that really matter aren't new libraries and frameworks- they are technology and how the world uses it. If a user can't tell a positive difference between something you were doing 5 years ago and today, then you didn't really innovate.
The new devs are basically doing this by default. They are early adopters on the hype cycle, and they leverage it for remuneration, since it disrupts supply every time.
Play the game old bean, it hasn't changed :)