
I think it's quite interesting to see that originally node.js was presented as a bloat-free alternative to "enterprise languages" like Java, C# or even Python or Ruby. A lot of complexity was subsequently added in an ad-hoc way which has resulted in (for example) a package management system that's wildly out of control.

It's very popular of course, so I'm definitely not arguing that metric. However, the stuff that was originally called examples of tooling that exhibits unneeded bloat and complexity (Maven) is now reimplemented in Javascript, but poorly (npm).

I think this is a story that gets repeated lots of times in our world of open source software dev.

1. X is SO bloated and poorly engineered full of bad legacy decisions.

2. We can totally do better let's invent a new thing, Y!

3. Wow, Y is so clean and fast and understandable.

4. But it doesn't do this thing a bunch of people reasonably really need... let's add it.

(repeat 3 and 4 a few hundred times)

5. Y is so bloated and poorly engineered and full of legacy decisions. We can do better! (Go to 1.)

The grass is always greener, but mature complicated software is _usually_ complicated for... reasons.

Not everything that gets added is stuff that people reasonably need, either. If you cater to everyone's needs, then you'll end up with 10 solutions for the same problem, because every one of your users has their preferred one.

I think Javascript suffers from this quite a bit. ES6 "classes" should never have made it in, for example. Not only did they add an extra level of abstraction for beginners to learn, but the only reason for doing so was "my code doesn't look like it does in other languages"...

> but the only reason for doing so was "my code doesn't look like it does in other languages"...

As much as I beat the FP drum these days at work, I find the class syntax a much nicer way of organizing solutions to certain, pardon the pun, classes of problems.

Whether or not you find this to be semantic diabetes is a matter of taste, I suppose. I'm curious what, specifically, you find to be the major issue that makes you say they should have been left out.

There are very few things JS classes can do that ES6 modules/named exports along with closures returning plain objects can't do better, in terms of code organization, isolation, and extensibility. For the 6 times a year where I need an actual class (it does happen!), I can write the prototype code.
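As a hedged sketch of that factory-plus-closure style (all names here are invented for illustration, not from any particular library):

```javascript
// A factory function returning a plain object: the closure over `count`
// gives real privacy and per-instance state with no `class`, no `this`
// gymnastics, and no prototype wiring.
function createCounter(start = 0) {
  let count = start; // only the returned functions can see this
  return {
    increment() { count += 1; return count; },
    current() { return count; },
  };
}

const counter = createCounter(10);
counter.increment();
console.log(counter.current()); // 11
```

Each call to `createCounter` gets its own `count`, so instances can't trample each other's state.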

The main issue with adding classes is that they're very, very complex if you want to make them useful. The initial version was pretty harmless, but it was also almost pointless. Now they need to backfill all of the missing features (e.g. private fields), which brings in an enormous amount of complexity. Most of the time, if I need private fields, I can just use symbols (not quite private, but close), or I can do

    function () {
      let secret = 123;  // `private` is a reserved word, so use another name
      return {
        // functions returned here can close over `secret`
      };
    }
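For comparison, the symbol trick mentioned above might look like this sketch (names are illustrative):

```javascript
// Symbol-keyed properties are "not quite private, but close": they don't
// appear in Object.keys() or for...in, and outside code can't collide with
// them by name, though Object.getOwnPropertySymbols() can still reach them.
const _balance = Symbol('balance');

function createAccount(initial) {
  return {
    [_balance]: initial,
    deposit(amount) { this[_balance] += amount; },
    getBalance() { return this[_balance]; },
  };
}

const account = createAccount(100);
account.deposit(50);
console.log(account.getBalance()); // 150
console.log(Object.keys(account)); // ['deposit', 'getBalance'] - no balance key
```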

It adds very little (there ARE things classes are better at) compared to the insane amount of work that has to be put into the language to get it all working. Decorators are in a similar boat, where many decorator usages can be expressed just as easily with a higher-order function, so adding the extra syntax is just bloat.
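To illustrate the higher-order-function point with a made-up example: a logging "decorator" needs no extra syntax at all.

```javascript
// A higher-order function standing in for a decorator: it takes a function
// and returns a wrapped version with logging bolted on.
function logged(fn) {
  return function (...args) {
    console.log(`calling ${fn.name} with`, args);
    return fn.apply(this, args);
  };
}

function add(a, b) { return a + b; }

const loggedAdd = logged(add);
console.log(loggedAdd(2, 3)); // logs the call, then prints 5
```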

The cost isn't worth the reward.

The biggest thing classes give us that is very difficult to replace in vanilla javascript is a semantic construct that is easy to statically analyze. In the typed flavors (Flow, TypeScript) I can analyze the interface of plain objects, but not in vanilla JS. That's part of why React using ES6 classes can be useful.

That's a great benefit, but I'm not sure it's worth the trouble.

> There's very few things JS classes can do that ES6 modules/named exports along with closures returning plain objects can't do better

That's a really good point. I was about to disagree with you but then I created a thought experiment.

Thought Experiment:

I wonder what the JS landscape would look like if ES6 Modules had been introduced as part of ES5 about 8 years ago? I could definitely see how that would make classes far less appealing if we already had a great module system (sure, CJS existed, but browsers didn't support it).

Looking at the timeline of when these features were implemented in all major browsers:

* ES6 Class[0]: implemented 2.5 years ago

* ES6 Modules[1]: implemented 1 month ago

[0]: https://caniuse.com/#feat=es6-class

[1]: https://caniuse.com/#feat=es6-module

And ES6 classes were designed long before they were implemented, around the time people were making class hierarchies with backbone and AMD modules were the future.

The rise of Java and the OOP revolution isn't that far behind us (2 decades seems like a lot in the tech world, but it's still within a single generation of humans).

>In the typed flavors (Flow, TypeScript), I can analyze the interface of plain objects, but not in vanilla JS.

I've found that developing JavaScript in an IDE that reads and validates type information from JSDoc allows me to introduce strong typing while maintaining the flexibility and simplicity of vanilla JavaScript, without getting as bogged down as I do with Typescript.
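A small sketch of that workflow (the `User` shape is invented for illustration); a JSDoc-aware editor, or `tsc` with `--checkJs`, will flag type errors from these comments while the file remains plain JavaScript:

```javascript
/**
 * @typedef {Object} User
 * @property {string} name
 * @property {number} age
 */

/**
 * @param {User} user
 * @returns {string}
 */
function greet(user) {
  // A JSDoc-aware IDE will warn if a caller's object is missing `name` or `age`.
  return `Hello, ${user.name} (${user.age})`;
}

console.log(greet({ name: 'Ada', age: 36 })); // Hello, Ada (36)
```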

For sure, though my comment referred more to analysing code for things like automatic transformations at scale (like codemods) than during development. Like, figuring out that a stateless function component is a component is hard.

You can do almost everything with jsdoc comments in flow and TS, of course. It's awesome.

ES6 classes were such a relief compared to the prototype bloat you had to write. I love syntactic sugar that makes my life easier.

ES6+ flavors of JS & Typescript really made me take web programming seriously again.

I loved classes at first, but as I've gotten more into it I've found that classes and prototypes are redundant. Closures let me maintain all of the state that I need without the extra boilerplate.

Or, you know, you could actually learn the language you use. Prototypes are not nearly as bloated as classes, and you usually don't even need them. OO is not the one true paradigm of coding.

Agreed, classes with typescript and Vue make sense to me in a SFC approach.

Something about components just fits the class model well.

also agree, would not go back. TypeScript is savage

> I think Javascript suffers from this quite a bit. ES6 "classes" should never have made it in, for example.

Well, I'm really happy that the class-statement got in ES6 though.

Before that, whenever I needed something class-like, I had to search for how to do it in Javascript, find five different answers, wonder which one was really the proper approach to prototype-based inheritance, waste an hour (or more), and frankly still not know. There are just so many ways you can do it; which is the RIGHT one? Javascript wraps in on itself in so many cool ways, but there were never any definitive answers, just more rabbit holes.

Now, there is the class-statement, and it's one less rabbit hole to get trapped in. I actually get more stuff done now that there is one right way to define a class, or class-like object with a constructor, properties, methods, etc.
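For anyone who hasn't compared them side by side, here's a minimal sketch of the class statement against one of the several prototype spellings it standardized:

```javascript
// The one blessed spelling since ES6...
class PointClass {
  constructor(x, y) { this.x = x; this.y = y; }
  norm() { return Math.hypot(this.x, this.y); }
}

// ...versus one of the many pre-ES6 prototype idioms for the same thing.
function PointProto(x, y) { this.x = x; this.y = y; }
PointProto.prototype.norm = function () { return Math.hypot(this.x, this.y); };

console.log(new PointClass(3, 4).norm()); // 5
console.log(new PointProto(3, 4).norm()); // 5
```

Both produce objects with the same shape and behavior; the class form just removes the question of which prototype idiom to pick.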

Something similar goes for the arrow function notation. Javascript, and the event-based environments it usually operates in, wants you to use anonymous functions a lot. But the relatively verbose way of defining them still held me back from using them as freely as I wanted, trying to "optimize" them away if possible.
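A quick sketch of the difference in verbosity:

```javascript
// Pre-ES6: every little callback costs a `function` keyword and a `return`.
var doubledOld = [1, 2, 3].map(function (n) { return n * 2; });

// ES6 arrows: terse enough to use freely, and they keep the enclosing `this`
// instead of rebinding it (the classic `var self = this` workaround goes away).
const doubledNew = [1, 2, 3].map(n => n * 2);

console.log(doubledOld); // [2, 4, 6]
console.log(doubledNew); // [2, 4, 6]
```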

Yea, I don't use the 'class' keyword in ES6 at all (but then I keep to a fairly functional style in my JS). Modules and lambdas are the killer features in ES6 as far as I'm concerned.

Yes, because the loop never solves the original problem. It's about organization and bloat, not pure speed and leanness.

Rebuild from scratch, but also recreate all the existing functionality in a much better standard library, and then the chain can finally be broken. But nobody wants to do that.

I believe this is what Microsoft is trying to do with .NET Core. It's been successful so far, though they aren't at feature parity yet.

Yes, arguably the .NET Framework almost did it and is still one of the most productive frameworks available, but .NET Core has definitely improved things substantially. It's fast, well-designed, and full-featured and I expect usage to pick up greatly.

What I don't like in Microsoft's frameworks is that they've built lots of things multiple times in slightly different variations, like they always do with all their software (ten variations of each type of program, each outdated before it was finished). Mostly this exists for historical reasons, but it only underlines the problem of a multi-billion-dollar corporation having the design skills of a sophomore. They redo and redo things, bloating their frameworks and increasing their number, and you have to guess which CookieContainer you should use this time. It makes me understand why language designers like the Rust developers insist on a small core library: it's better to have one separate library that does everything regarding cookie management (and you can control its functionality by including additional traits from it) than to have incompatible variations of it in the standard library and in each framework.

Android Frameworks will make you love .NET variations.

So far the package manager hell has been kept in check because they keep redoing everything in such a way that you don't intermingle it. So when you're on MVC5 you're on MVC5, and when you're on AspNetCore, you're on that. You're not using v5 of library X and v6 of library Y. Likewise, the startup and DI stuff has fully rebooted twice in a few years. But nonetheless, some of that sort of package hell has already seeped in, where you're using different packages that depend on different versions of some underlying thing with breaking changes. I think the choices are either to keep rebooting everything or to stop making new stuff.

Yes, but it's rapidly getting better with .NET Standard combining all the libraries into a single definition that can be used on any framework implementation.

MVC5 was never released though, and the changes have been rather minimal from ASP.NET Core v1 to v2 with straightforward migration guides, so it might look messier than it actually is if you were working through all the previews and release candidates instead.

Nevertheless, Microsoft has a long history of having messy v1.0 with most of the stability coming after v2.0, so you can consider the foundation pretty stable now that it's on v2.1 and more.

Can I use F# with it? Because I would love to learn me some F# some day.

F# with .NET Core? Yes, it works fine.

There are some challenges coming up with design changes to the compiler and C# that might overlap what F# already has but it'll get sorted out.

.net core F# support has been pretty great from the initial stages of .net core from my basic usage. Biggest challenge seemed to be around type providers (F# system of generating strongly typed classes from dynamic data such as XML, CSVs, HTTP etc) but that's largely resolved. More info at https://github.com/fsprojects/FSharp.TypeProviders.SDK

Great resources for getting started with F# at https://fsharp.org/

My personal preference is generally to install the SDK and use the http://ionide.io/ with VScode as it seems to work most reliably cross platform.

I'd be very much interested in how anyone is using F# on Linux without mono.

I have .NET Core but the whole thing seems to require Mono and it isn't clear from fsharp.org that you can do without.

C# is very verbose and tedious compared to more expressive languages - having to deal with CLR types/APIs at runtime while using a language with very limited expressiveness (C#) is not very productive. It's better than Java, if that's what you're aiming at - but the JVM has an incredible ecosystem of stuff that works - much larger than .NET Core's, which is not very mature in many areas (we recently had to revert to .NET 4.7 because some encryption method used by a government SOAP service we were talking to wasn't supported).

TypeScript and JS underneath is actually quite malleable - you can escape static typing at any point and revert to simple JS object model when things don't map cleanly in the type system - and then still have types at the boundaries - makes meta-programming trivial in some cases - where it would look like a monstrosity in C#.

F# is interesting and has a lot of advantages over C#, but few people seem to be willing to invest the time to pick it up in the .NET community.

So I don't really view .NET core as a superior alternative. I've worked in JVM land; they are more mature, and while Java sucks there are other languages on top of it as well that are decent to use (Kotlin ~ C#, Scala ~ F#).

> using a language with _very_ limited expressiveness (C#) is not very productive.

o_0. Think you need to check yourself mate.

I believe the productiveness of more "expressive" language tends to be undermined by the loss of productivity that occurs when you're compelled to write blog posts or comment on hacker news about how amazingly productive and expressive your language is.

I can do like 3-4 hours of productive work a day realistically - after that I lose focus. I can push this in some periods - but that's the amount of time I limit myself to in order to stay functional over the long term.

If I need to waste that time sifting through boilerplate then I'm pretty upset, because I get less shit done in that time window.

Chatting on forums is a casual brain teaser and keeping up to date on industry stuff.

I don't think "expressiveness" or "boilerplate" are the things that slow me down. I use Go, and I find that it is both expressive and a little verbose, but it's still very simple and there's usually one clear way to do things, so I find that I can move a fair bit faster than I can in C# _or_ Haskell (in the latter case, it might just be that Haskell has a huge learning curve and I'm nowhere near over it).

I would suggest using more personal language when expressing personal opinion and toning down the force (very).

> [I find that] using a language with limited expressiveness (C#) is not very productive for me.

Like, I'd figure you can be mad productive in any language (even COBOL?) although I'm only completely cosy in a couple. There's no need to be so dismissive of the tools that others use.

...as if we wouldn't be writing comments on HN anyway. ;-)

Yes, Typescript (also from Microsoft) is fascinating and fantastic at combining the strengths of static typing while still maintaining all the flexibility of dynamic types when necessary. However, it's pretty much the only realistic non-academic example of such a thing, so basically everything else pales in comparison if that's what you're looking for.

Why is C# not expressive? It has the DLR and the `dynamic` keyword, which behaves just like JS typing if that's what you want - it seems like your issue is really with static typing in general. Functional languages are nice, but it seems C#, with its slowly and carefully integrated functional extensions, is actually more productive for most developers.

Dynamic doesn't behave the same as JS typing: you're still using the CLR object model and typing rules, you're just losing compile-time checks. It gets complicated really fast if you want to do metaprogramming even with the DLR, and it's not really ergonomic in C# (casting/boxing primitive types, etc.).

Think about AutoMapper and then compare it to a TS solution using the spread operator. How much AutoMapper boilerplate crap do you see in your typical enterprise C# project?
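A sketch of the kind of mapping meant here (the object shapes are invented for illustration): rest/spread does in two lines what a mapper library does with configuration.

```javascript
// Map an "entity" to an outbound DTO: drop one field, add one, keep the rest.
const userEntity = { id: 7, name: 'Ada', passwordHash: 'xxxx', createdAt: '2018-01-01' };

// Rest destructuring peels off the field we don't want to expose...
const { passwordHash, ...safeFields } = userEntity;
// ...and spread rebuilds the object with an extra computed field.
const userDto = { ...safeFields, displayName: safeFields.name.toUpperCase() };

console.log(userDto); // { id: 7, name: 'Ada', createdAt: '2018-01-01', displayName: 'ADA' }
```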

And that's not even touching on functional features - you can't even have top-level functions in C#; it's the "one class per file" dogma plus multiple wasted boilerplate lines and scrolling. I recently rewrote a C# program in F# - I didn't even modify much in terms of semantics (OK, having discriminated unions and pattern matching was a huge win in one case); just by using higher-level operators and grouping stuff, the line count went down to 1/3 and the code was grouped into logical modules. I could read one module as a unit and understand it in its context, instead of having to browse 20 definition files with random 5-line type definitions. I could achieve similar improvements by rewriting to TS or Python.

C# adds overhead all over the place; people are just so used to it that they don't even see it as useless overhead but as inherent problems they need to solve - how many of the popular enterprise patterns are workarounds for language limitations?

When I bring this up people just assume I'm lazy about writing code - but I don't really care about writing the code out; tools mostly generate the boilerplate anyway. Having to read through that noise is such a productivity drain, because instead of focusing on the issue at hand I'm focusing on filtering out the bloat from the codebase.

This sounds like a personal preference for dynamic vs strongly typed.

I could rewrite your entire comment in reverse about how I find C# highly expressive and readable while dynamic languages or Kotlin (blech) are a mess of inconsistent whack-a-doodle experimentation.

But my opinion is useless.

The value in any platform is productivity and if any given team can be productive, it doesn't matter if it's COBOL, RPG-3, Pascal, BASIC, or a functional language like F# or plain old JavaScript.

Actually I like static typing, I mentioned I rewrote a project in F# in like 1/3 of the code from C# solution.

It's more that C# is static typing done poorly IMO - a relatively limited type system that adds overhead compared to dynamic languages or more expressive static languages.

I'm having a hard time understanding what's fascinating about typescript.

I agree it makes JS better. I agree it's a good tool for its purpose.

But "fascinating" ?

It's hardly the most elegant scripting language out there (Ruby, Python, Kotlin and Dart don't have to live with the JS legacy cruft).

It has a very small ecosystem outside of the web.

The syntax is quite verbose for scripting.

It has very few data structures (and an all-in-one one).

Very poor stdlib.

Still inherits important JS warts, like a schizophrenic "this".

Almost no runtime support if you don't transpile it (which means it's hard to debug and needs specific tooling to build).

And it's by no means the only scripting language with good support for typing (e.g. VSCode has great support for Python, including IntelliSense and type checking).

What's so fascinating about it?

What fascinates me is that we are still stuck with a monopoly on JS for the most important platform in the world.

Typescript IS javascript, so of course it inherits all of its problems. The data structures and standard library are what you get from JS, nothing more. It's called a programming language, but it's more of an extension to JS with a powerful compiler.

The typing system is what is special though, especially in how seamless it is in adding strict types alongside pure dynamic objects, but also allowing you to choose pretty much anything in the middle of that spectrum depending on your definitions.

You can have a few strongly typed properties mixed with others in a generic type that inherits from something else but can only take certain shapes. It's unlikely you need all that in most programs, but it's the fact that you can do it which makes it great. In fact, the Typescript type system is actually Turing-complete.

Perhaps this video on Typescript from Build 2018 would help: https://www.youtube.com/watch?v=hDACN-BGvI8

> Typescript IS javascript, so of course it inherits all of its problems. The data structures and standard libraries are what you get from JS, nothing more. It's called a programming langauge but its more of an extension to JS with a powerful compiler.

That's pretty much my point.

> The typing system is what is special though, especially in how seamless it is in adding strict types alongside pure dynamic objects, but also allowing you to choose pretty much anything in the middle of that spectrum depending on your definitions.

> You can have a few strong-typed properties mixed with others in a generic type that inherits from something else but can only take a few certain shapes. It's unlikely you need all that in most programs but it's the fact that you can do it which makes it great. In fact, the Typescript type system is actually turing complete.

Apparently you haven't read my comment, because I clearly said it's not special. Other languages do it too.

> Perhaps this video on Typescript from Build 2018 would help: https://www.youtube.com/watch?v=hDACN-BGvI8

Perhaps this article would help: https://www.bernat.tech/the-state-of-type-hints-in-python/

I think Go does this for me for the most part

Go has been out for almost a decade and they’re still working on the package management story.

I’m super bullish on rust. I feel like it was designed with the right intentions.

Yep. There are some languages that start out trying to solve fundamental productivity issues in previous languages - some more than others.

I think we had a generation of ecosystems with Node, Ruby, Python, that tried to solve the unapproachable systems around the Java/etc ecosystems and make them more open.

They succeeded, but the next generation seems to be about solving the plethora of tools that came with those languages. Rust, Go, etc., with their first-party tools, are trying to improve upon that, and yes, I think Rust is by far the best implementation I've seen.

I'm interested to see what the next generation is.

I love rust, but the standard library is nowhere near the same abstraction level as nodejs's.

All the services I've deployed built on rust pull in a kitchen sink of deps.

Granted. I get a static binary as my end result, so maybe it's fine.

Rust is designed that way, to be fair. They expressly did not want to be batteries included like python is. The reasons are what they are and not particularly relevant to the conversation, but pulling in well designed third party crates is the point.

Speaking from a python user's perspective, the batteries-included philosophy works great when you have a neutral implementation. Python does a good enough job and provides extensibility in a way that means I don't need to download a package to do basic things. On the other hand, I have to spend hours trying to find a package in JS that just gets shit done. The third-party-package way is only required for UI parts, because you don't want everything to look generic. But having a good standard library for the non-user-facing stuff is essential. That's why every node project ends up with a thousand dependencies: the language is not batteries included. In JS there is no "one correct and obvious way to do everything", which makes doing basic programming painful.

Let alone G* when CLU had them in 1975, but let's not rush.

This was a really funny experience for me as a self-taught guy going in the other direction. I started with Node in my spare time, and when I finally got a professional coding job my first project involved Java and Maven. I was kind of dreading it due to Java's reputation as this big bloaty terrible enterprise language, but once I actually got started I was like, "Man, this type safety thing and opinionated build tool thing etc etc are really nice." By no means is it (or any language) perfect, but a lot of the criticism suddenly seemed really overblown.

Currently happening with JSON (instead of XML, instead of CORBA), despite the brutality of ripping out comments to keep it simple. We now have JSON Schema, soon JSLT, JSON namespaces, etc.

To be fair, it's not impossible for some improvement to occur in this process.

Exactly! "originally X was presented as a bloat-free alternative to 'enterprise languages'"

For awhile, "Burn the Diskpacks!" was a battle cry of the Squeak Smalltalk community. That sort of policy fights bloat, but leaves old users in the lurch. I think that we are now to the point where a language/environment can trim bloat while not abandoning old users. If the language has enough annotation, and has the infrastructure for powerful syntactic transformation tools, then basic library revisions can be accompanied by automated source rewriting tools. We were pretty close to it in Smalltalk, without the annotations.

This is why it's so important to have good leadership. For example, Linus Torvalds.

But we can learn from prior mistakes in each iteration and spring clean the software logic. I know it's a lot of effort to seemingly reinvent the wheel each time, but I like to think it does yield some benefit in terms of efficiency and cleaner logic.

Yes, but not all applications need that complexity either, so slim, simple tools are often really useful.

Which really isn't so bad. Y eventually goes corporate, and is still presumably better than X, having learned from its mistakes. But for those who hate the new bloat, along comes Z and the cycle repeats itself. Chicka Chicka Boom Boom.

How do you break the loop?

I think vi -> vim -> neovim shows a pretty good model.

Neovim is an effort to modernize and remove cruft from vim, so they get to keep all the good parts and throw out the backward compatibility. If it works out, it can eventually replace vim, not too different from what vim did to vi.

I'd like to see similar stuff done to many of the GNU tools. Make, for instance, has to worry about backward compatibility and POSIX compliance in ways that make it hard to progress. As of today there have been about 12,000 attempts to replace it with something else, and I find all of them inferior for one reason or another; they've all reinvented the wheel poorly. If someone had taken the fork-and-modernize approach, we might have something better by now.

It doesn't even have to be a "hostile" fork. The same can be done by the developers of the existing tools.

A text editor and a programming language are slightly different things; the backward compatibility story is completely different.

Not when the text editor includes a programming language (Vimscript). And backwards compatibility of plugins is a big issue.

I don't think the loop is necessarily bad, it shows progress.

Think about Java, it solved a class of problems that C was unable to address (e.g. unsafe memory, native threads). Thus enabling a new class of programs. But the new class of programs created opportunities for new platforms to solve with the benefit of a clean slate and fresh design having learned from past successes and failures.

I'm increasingly skeptical. Maybe we move ahead a few inches each cycle, but it's starting to look distressingly like each generation of programmers has to learn all the lessons that their greybeard predecessors learned the hard way. Then, when they've achieved some level of enlightenment, the next batch of bright-eyed whippersnappers comes along to rinse and repeat.

There's a disturbingly low level of historical knowledge passed along in programming. Some bits and pieces are encountered in a quality computer science curriculum, but usually in rarefied, theoretical form, and inevitably balkanized into drips and drabs as part of subject-oriented coursework.

It's interesting to place today's techs on the Java maturation timeline - each became what it once thought it hated, then realized those things may have existed for some necessary reasons.

New platforms bring exciting and meaningful evolution, often at the cost of what techs like .NET and Java have a few decades' advantage in. It's also interesting to see what Java devs are innovating with themselves; Scala and Kotlin both have good things happening.

Maybe using one large, inter-syntax friendly world like JVM will help.

When experience is overlooked for youth, we relearn and reimplement the same libraries repeatedly in every new tech, to feed some developers' need to build temples to their greatness.

Still, Fitzgerald's quote comes to mind... "So we beat on, boats against the current, borne back ceaselessly into the past." And technology is held back by reinventing the wheel.

The biggest problem I see is the weird hole circa 2006, ending with Sun selling to Oracle, that kind of stillbirthed Java as the next great language.

That hole I can credit with giving C# the advantage in that tight niche, and with stalling the development of the JVM platform in general.

By the time the rust on the JVM improvements was dusted off, all initiative was lost. Java was playing catch-up to the competition.

On the other hand, Oracle has arguably developed Java much further, and kept Maxine around, turning it into Graal.

IBM gave up on the first counter-proposal, and Red Hat and Google didn't bother to rescue Sun.

So we might even have been left with either Java 6 or being forced to port our applications.

As we're seeing with WhatsApp, guardianship and supporting the direction of a project isn't easy. I'm not sure where Java would have ended up if someone else took it.

Additionally, Oracle haters seem to forget that Oracle was one of the very first companies to get into bed with Sun regarding Java, with their whole Java-based terminals idea and porting all their Oracle Database GUIs to Swing.

I don't think we have to.

Loop, as it might seem, doesn't mean there is no progress made in between.

I think you gotta have a good understanding of the domain and use cases you want to hit (which is really hard, especially so when it's a general purpose programming language whose domain is... everything), and design from the start with a vision of hitting those use cases, instead of having to shoe-horn them in later.

Of course, use cases will still evolve, and your initial understanding is always flawed, there's no magic bullet, designing general purpose software (or a language or platform!) meant to hit a wide swath of use cases flexibly is _hard_.

And then, yeah, like others have said, you need skilled, experienced, and strong leadership. You need someone (or a small group of people) who can say 'no' or 'not yet' or 'not like this' to features -- but also who can accurately assess what features _do_ need to be there to make the thing effective. And architects/designers who can design the lower levels of abstraction solidly to allow expected use cases to be built on top of them cleanly.

But yeah, no magic bullet, it's just _hard_.

As developer-consumers of software, we have to realize that _maturity_ is something to be valued, and a trade-off for immature but "clean" is not always the right trade-off -- and not to assume that the immature new shiny "clean" thing will _necessarily_ evolve to do everything you want and be able to stay that way. (On the other hand, just cause something is _old_ doesn't _always_ mean it's actually _mature_ or effective. Some old things do need to be put down). But resist "grass is always greener" evaluation that focuses on what you _don't_ like about the original thing (it's the pain points that come to our attention), forgetting to take into account what it is doing for you effectively.

Refactor and trim the bloat on the basic libraries, but have a policy where bulletproof automated source rewriting tools are provided in those cases. Perhaps this isn't possible with Javascript, but it might be possible with other languages.

If you think anyone has "bulletproof automated source rewriting tools" I've got a bridge to sell you.

> If you think anyone has "bulletproof automated source rewriting tools" I've got a bridge to sell you.

I've used an excellent one. The Refactoring Browser parser engine in Smalltalk. I've used it to eliminate 2500 of 5000 lambdas used in an in-house ORM with zero errors -- all in a single change request. (Programmers were putting business logic into those lambdas.) Like any power tool, it's not stupid proof. However, it gives you the full syntactic descriptive power of the language. So if you can distinguish a code rewrite with 100% confidence according to syntactic rules, then you can automate that rewrite with 100% confidence.

Here's where it can go wrong: if your language is too large and complicated, there's a good chance you'll run into a corner case that trips you up. Also, it will always be possible for a given codebase to create something which is too hard to distinguish, even at runtime. (You can embed arbitrary code in a Refactoring Browser rewrite transformation, so you can even make runtime determinations.)
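For what it's worth, the core idea (match a syntactic pattern with full confidence, then apply the rewrite mechanically everywhere) is easy to sketch. Here's a toy, hypothetical version in JavaScript over a hand-built AST; the real Refactoring Browser works on parse trees with a proper pattern language, and every name below is made up:

```javascript
// Toy rule-based syntactic rewriting: walk the tree bottom-up and let a
// rule function replace any node it recognizes with 100% confidence.
function rewrite(node, rule) {
  if (node === null || typeof node !== 'object') return node;
  // Rewrite children first, then give the rule a chance at this node.
  const copy = Array.isArray(node)
    ? node.map((n) => rewrite(n, rule))
    : Object.fromEntries(
        Object.entries(node).map(([k, v]) => [k, rewrite(v, rule)])
      );
  return rule(copy);
}

// Hypothetical rule: every call to oldApi becomes a call to newApi.
const renameCall = (node) =>
  node && node.type === 'call' && node.fn === 'oldApi'
    ? { ...node, fn: 'newApi' }
    : node;

// A tiny hand-built AST: oldApi(oldApi(), 1)
const ast = {
  type: 'call', fn: 'oldApi',
  args: [{ type: 'call', fn: 'oldApi', args: [] }, { type: 'lit', value: 1 }],
};

const out = rewrite(ast, renameCall);
console.log(out.fn, out.args[0].fn); // newApi newApi
```

The point the parent makes carries over: the rewrite is only as safe as the syntactic rule, so a sloppy rule mechanically propagates a mistake across the whole codebase.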

"Bulletproof" isn't "invulnerable." A vest with an AR500 plate will stop certain bullets hitting you in certain places. It won't protect you from being stabbed in the eye or stepping on a landmine. Despite that, it is still a useful tool.

What's the bridge, how much is it, and is it what everyone else is using and/or the next big thing?

You have strong leadership that makes good decisions.

Obligatory xkcd reference: https://xkcd.com/927/

Watching the JavaScript community poorly reinvent the wheel has been very disappointing. Very simple mistakes, like immutable, never-changing build releases that Java developers understood 15 years ago, are becoming recent front-page news in this community. Ironically, even though all the code is open source, pre-existing knowledge does not get leveraged in the open source world. There's a kind of market failure at work here, it seems; the lack of commercial selective pressure results in the flourishing of lots of poorly researched OS solutions.

This issue is not just a lack of learning from past failures; it's an active issue that is systemic to web development, especially Node. Everyone wants to reinvent the wheel rather than support a similar, already existing project. I don't know if it's that everyone wants to be the lead on something or if they all lack group skills, but there is no reason we need dozens of similar, partially functional libraries. I can barely, and I mean sooo very barely, get behind the fact that all of these SaaS companies need to create their own versions of frameworks, but it amazes me just how many square wheels there are in the web community. It was one of the major reasons it took me so long to start doing full stack development: just way too many cooks, all wanting to make almost the exact same meal, only theirs is superior.

It’s because the average dev has less than 5 years of experience according to the Stack Overflow survey, and web is the fastest growing field inside software engineering.

A large majority of people you’re chiding for not learning from others, don’t even realize those other things exist.

But it's not even independent green developers, it's everyone. Chai, mocha, jasmine, jest, should, expect, lab...omg do we really need another unit testing library? Sure they are all slightly different but there is no reason they all couldn't be condensed down to one or two libraries. Shall we list all the reactive UI frameworks? Or routing frameworks? Everyone is at fault here.

Chai is not a testing framework, it's an assertion library compatible with all the other main testing frameworks you mentioned. Yes, we do need experimentation and innovation in testing frameworks. Jest was a real innovation in the space and is particularly awesome for React testing with its snapshots feature. This kind of argument never gets made with anything else. "Why can't we all just stick to the Model T. It's perfect."

The Model T is a product, but we are talking about tooling, in that context the same spanner that can fix the Model T can fix the latest Tesla.

The car industry probably wouldn't be as big, if you had to learn a new tool for every new car.

I am all for experimentation and creating something new. But so many of these projects out there are not forks of current projects, they are complete rewrites. Is this because the new project is vastly different? Nope! That is the issue, they aren't extremely different.

The reason this is a problem is because web tech is constantly changing, to the point that so many of these projects end up in the scrap heap far faster than other tech. It causes problems with long term service due to compatibility issues with ever changing dependencies.

I have a feeling that JavaScript, and some other areas of open source, have a popularity contest problem - people building projects not because they're needed or useful, but for that brief moment of Internet fame.

It gets worse when, instead of your CV, you get hired by startups based on your Internet fame, or on wasting your private life building a GitHub (sorry, GitLab) portfolio.

I have the same feeling about this. Github "collect the most stars" effect?

Half of those are assertions libraries, not unit testing libraries. What are you comparing this list to? What is the appropriate number of unit testing libraries a language should have? Do you scale that number for community size?

Ignorance is curable, but requires the cooperation and desire of those who lack in order to achieve the cure. From my vantage the world of software development seems filled with mediocre individuals who all think of themselves as the John Galt of software.

On average most people are average, yes. Most above average people are average in most situations, even.

And sometimes it’s just resume building or intellectual curiosity itching.

Or people just enjoy building something that scratches an itch for them?

I like the energy around the javascript everywhere movement. So what if they reinvent the wheel, sometimes you find a better wheel and break the rules along the way.

There is something exciting about developers using a language in ways it was never designed. Then having the language change to support the changing ecosystem...

So true, at the end of the day people are working like this because it's the way they feel the most passionate about. You can't really blame them considering how disinterested people can get working on that last 10% of even their own projects.

haha, why do we do this? Should we judge programmers by how finished their thing is? I think it is the most impressive quality when I see it (which is none of my own stuff haha)

It's when you build a business on a technology and then have to re-invest to rebuild the product, that's when it becomes an issue. Think of all the start ups that built running businesses on Angular 1.

as far as i know, angular 1 still works just fine... :)

(as it happens 1.7 was released recently.)

It does, but try finding a developer looking to work on Angular 1.

> Very simple mistakes like immutable, never changing build releases

Can you expand a bit more? Not sure what this means.

[1] https://en.wikipedia.org/wiki/Npm_(software)#Notable_breakag..., [2] https://www.csoonline.com/article/3214624/security/malicious..., [3] https://news.ycombinator.com/item?id=16087024

And the list goes on and on IMO. What's disappointing is that these were lessons learned a long time ago and now they're being re-learned.

Sounds to me like the list has one item: “the npm registry once allowed users to delete packages”. [2] and [3] have nothing to do with immutability. None of them have to do with reinventing the wheel, either, unless you wanted Node to use Maven for package management?

They don't mean immutable as in the language sense, but an immutable packaging system: the general form where you can add packages but not remove them, so as to not break things, which is common among most maven/cargo/hunter/etc. dependency packaging systems. It's generally considered that npm supporting deleting packages was a major mis-design, which became very public when a popular tiny package got deleted, breaking so many things. So they learned that the hard way instead of learning from the systems that came before (obviously not cargo, but you get the drift ^.^).

> They don't mean immutable as in language form, but immutable packaging system

I know. [2] and [3] have nothing to do with a package repository you can’t delete things from.

Anyone who's acting like this wouldn't happen in Maven is lying. Just look at what happened when CodeHaus went out of business.

This stuff isn't relevant at all to the talk - he never talks about npm or anything to do with package managers but instead how node does imports etc.

But anyways, [2] is at least a problem in many other package repositories. [1] would probably be a problem for many - given legal pressure (vendor your shit, that's the solution). [3] was a bug, not a design issue - no package management system is immune to bugs.

The one thing Java has is that it uses namespaces, which may help with [2] (but barely). [2] certainly has been a problem in PyPI.

Certainly all of this could happen to PyPI. We see it happen with js more, I think, because js happens to be extremely popular so there's a ton of packages for it and it's also much younger (especially node) than others.

> This stuff isn't relevant at all to the talk - he never talks about npm or anything to do with package managers but instead how node does imports etc.

He does have it in his slides.

Slide titled: "Regret: package.json", last 2 points:

> Ultimately I included NPM in the Node distribution, which much made it the defacto standard.

> It's unfortunate that there is a centralized (privately controlled even) repository for modules.

Yeah, I remembered the package.json bit, but that part still had nothing related to the issues/mistakes mentioned.

Understood, sorry I wasn't trying to dispute any of your points about said issues/mistakes.

Just trying to clarify that he does actually talk about NPM and his regret about it.

Yeah, I actually only remembered the bit about 'package.json' and not the other quotes as well lol

"The Wheel of Time turns, and Ages come and pass, leaving memories that become legend. Legend fades to myth, and even myth is long forgotten when the Age that gave it birth comes again." - R.Jordan

I'm afraid the current frantic pace of reinvention in JS/web might cause the Breaking of the World and throw us into a Third Age where no one quite remembers the true lessons of the Ages before.

More frustrating, at least for me, is that some of us have been warning about these things for absolutely years without many paying any mind, only for them to keep happening again. Eg. https://news.ycombinator.com/item?id=16090120

I've found that some people tend to take a ton of pride in the assumption that they have to make mistakes to learn from them, but you almost always want to learn from other peoples' mistakes first. Probably overlaps quite heavily with those people who desperately want to tread on new paths.

The problem is, how do we know that those shouting warnings are not false prophets?

Presumably by reading and understanding their arguments.

But, how do we know which ones we should be reading and understanding their arguments?

That's a good example of the Innovator's Dilemma: the enterprise incumbent is unseated by some "crappy" lightweight solution that is easier to get up to speed and solves enough of the problem. The complexity, accidental and essential, comes later.

Yes, this is exactly what's happening. Existing tools are seen as too complex because people don't seem to be realizing that the complexity is not accidental, but necessary.

I'm not trying to say that Java has no accidental complexity of course, I don't want to open that can of worms :)

Which complexity is actually necessary? Does it change when you have 400gbit, SSDs, and watchOSes? How about 1TB of core memory? If we aren't wrangling with handling 75 spinning disks connected to a 10mbit network with 13" blurry CRT monitors, perhaps we don't need to discuss the finer points of engineering efficient client/server LOB applications? Perhaps we ought to discuss MDM, RF, ML, and lifestyle impacts.

This is all just the process of evolution at play. What seems obvious today wasn't yesterday - applies to biology, material science, medicine, engineering, art, music, architecture, design, taxi services, marketing, government, politics, and so why shouldn't it be so in computing?

Sidenote: I love the humility of this video. I remember the days when node was first unleashed. I could not have imagined how it has changed the way we all work. It all seemed so obvious from day one, and here we are today. What a brilliant contribution.

Well there is also a problem with people who don't want to say no, and don't want to stop working on their project when it is finished. Adding all the features that appease a different 1% of your user base is what leads to the bloat - and it still is bloat when 90% of your users will never use the functionality that externalizes all kinds of leaky abstractions and other costs onto them. Just because that bloat may happen to your successor as well doesn't make it right in either case.

Nevertheless, people who need a lightweight language are always able to find one because there is always a new language at that point in its lifecycle. Further, there are languages like Go which seem to be determined to remain easy to get up and running, and don't seem likely to change anytime soon (for better or worse).

It will be interesting to see if some new language eventually seeks to disrupt Go by "out Go'ing" Go, returning to its approachable roots.

(Amusingly, my iPhone autocorrect replaced "roots" with "Russ". Russ Cox is an engineer who works on Go. :)

See: mongodb

This is pretty much a perfect example. That and dynamic languages, although dynamic languages happened because the happy medium of types is type inference, and previous popular typed languages were too verbose/inflexible.

We're definitely reinventing the wheel a lot, though.

now a publicly listed company with $2.55bn in market cap

and that makes it a good database, right?

I think the truth is even sadder. The new generations hear the complaints of the old. They hear: I hate Spring, Java is so bloated, XML hell, etc. So they think, damn, I don't want to touch Java with a ten-foot pole.

That's when they go instead with the newer system, that didn't exist long enough to have accumulated criticism. Which is backed by enthusiasts still in the honeymoon period.

Yes so true. People should spend more time thinking about the side effects of their actions and speeches before they make them.

No, people should speak what they believe to be true. The onus is on the listeners to not blindly accept everything they hear as reality.

When an authority figure says something, listeners are more likely to accept it, even if it is wrong. That's just human nature. So authority figures have an extra duty to think about the effects of what they're saying.

They owe their success to these people and so the way that they can pay it back is by using their voice as a tool for improving things.

I think the problem with "bloat-free" is it's a fine ideal until you try to solve any kind of reasonably complex problem, and honestly it starts to creep in even when you're solving something that isn't particularly complex.

Here's a concrete example. Your classic node.js or express.js sample app is something fairly simple like a hello world, or an IM server. A more complex sample probably looks something like that venerable nodecellar app from a few years back. In all cases the spiel is, "Hey, look how easy it is to create a web server with node."

Except that I'm looking at my node server source right now - for an honestly fairly simple app containing a handful of pages and a blog - and here's what I have:

- Routing (obviously)

- Cookie and body parsing

- Session management

- MongoDB integration

- Passport.js for authentication with a couple of providers (FB and Twitter)

- File system access

- HTTPS and SPDY/HTTP2 support

- Compression support

- Logging with winston and morgan, including loggly integration

- Referer spam filtering

- Pug templates

- Hexo blog integration

- Path resolution support

- Request validation and redirects

- Static content support

- Stripping headers such as X-Powered-By, and adding other headers such as the all-important X-Clacks-Overhead

- Error handling

There's probably a couple of other items I missed, but you get the idea. It seems like a lot but, as far as I'm concerned, this is express.js app MVP for anything you might want to put into production.
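For what it's worth, nearly everything on that list slots into one ordered middleware chain, which is why it accretes so easily. Here's a toy sketch of the pattern in plain JavaScript. This is not the real Express API (only the arity-4 error-handler convention is borrowed from Express), and all the route and handler names are made up:

```javascript
// Toy Express-style middleware chain: each middleware runs in order and
// calls next() to pass control on; thrown errors jump to error handlers.
function createApp() {
  const stack = [];
  return {
    use(fn) { stack.push(fn); },
    handle(req, res) {
      let i = 0;
      const next = (err) => {
        const fn = stack[i++];
        if (!fn) return;
        if (err) {
          // Only arity-4 middleware handles errors; the rest is skipped.
          if (fn.length === 4) fn(err, req, res, next);
          else next(err);
        } else if (fn.length === 4) {
          next(); // skip error handlers during the normal flow
        } else {
          try { fn(req, res, next); } catch (e) { next(e); }
        }
      };
      next();
    },
  };
}

const app = createApp();
// Cookie/body parsing, logging, compression etc. would each be one of these.
app.use((req, res, next) => { req.cookies = {}; next(); });
app.use((req, res, next) => {
  if (req.url === '/boom') throw new Error('boom');
  res.body = `hello from ${req.url}`;
  next();
});
app.use((err, req, res, next) => { res.body = `error: ${err.message}`; });

const ok = {}, bad = {};
app.handle({ url: '/' }, ok);
app.handle({ url: '/boom' }, bad);
console.log(ok.body);  // hello from /
console.log(bad.body); // error: boom
```

Each item on the list above is "just one more app.use(...)", which is exactly how a hello-world server grows into a production stack.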

I haven't even mentioned the gulpfile I use to build all this, which targets desktop, mobile, along with embedded versions for a particular mobile app due to launch in the near future, and has become something of a behemoth[1]. Nor have I mentioned that I have Cloudflare to sit in front of this, primarily to deal with the heavy lifting of some of the media files I serve up.

On the face of it, this might feel like "bloat" but it's all necessary to run the application and, like I say, a lot of it is the bare minimum for an MVP web app in node.

[1] Yes, I know I could/should switch to webpack, but gulp works, and switching to webpack "just because" doesn't justify itself with the value it might add.

IMHO, the point of using a no-frills library/framework is that you want to write the rest of it yourself. The advantage of this is that it meets your requirements exactly and is therefore smaller/less complex.

When doing a project that takes only a few weeks, I would probably choose a framework that has everything in the box. But if you are building something that is going to be developed over a period of years, the reduction in complexity achieved by building your own can be life saving.

When I look at your list, most of the things fall into the categories of "Pretty easy to implement" or "Don't want at all". However, there is an advantage for not reinventing the wheel if there is no reason to do so. If there is a nice library that gives me what I want and doesn't impose itself too much on the design, I will use it. But the main advantage for not baking it into a big framework is that I can pick and choose what I want.

As an older programmer, I come from an era where libraries and frameworks cost a lot of money. We built stuff by hand because there were not a lot of other choices. These days, though, virtually every library and framework is free software (not only free of charge, but you get source code too!) It's like living in Candy Land, and I'm not about to complain about it :-) However, I think that programmers today reach too quickly for the pre-built and do not understand the long term advantages of bespoke development. Like most things, there is a balance to be maintained.

The gist: every dependency you bring into your app should be thought about, really hard.

If you're using 10% of the lib, just implement it yourself.

If the lib is critical, like openssl, bring it in. Other people have solved the hard problems for you.

But yeah, it's a balance.

Unless you're using something that has competent dead code elimination, like Google Closure Compiler. Then you can go ahead and include 30mb of libs, knowing that only a fraction will actually be shipped in the production version.

I don't come from that era but couldn't agree more. Every time one of my coworkers suggests using a library, I tell them that it's ok as long as they maintain it. I prefer to spend my time coding and like to understand as much of the codebase as I can, instead of having to learn and maintain tens of external libraries. Especially if we only need a couple of functions from that library.

I normally lose those debates though, and the thing reaches a point where the complexity of the code makes it impenetrable.

This why I love Java so much. Take a look at what Spring Boot does for me:

- Routing is done in two lines of code: @Controller @RequestMapping("/myroute")

- Cookie and body parsing - no need to write any code to do that, I just have method parameters and all of the data flies in. Want validation? Only one annotation on a method parameter - @Valid. Custom validators are supported as well.

- Session management. It's just there for me and does the right thing by default. I can replace storages with custom implementations, but by default no code is required from me.

- MongoDB integration - Spring Data MongoDB and you only need to define interfaces using naming convention. The code to access the actual database is generated for you.

- Spring Security supports multiple authentication mechanisms and gives you neat DSL to configure it.

- File system access kind obvious thing.

- HTTPS and HTTP2 support provided by Spring MVC as well.

- Compression support - it's just "server.compression.enabled=true" in your config

- Logging - slf4j + logback come with Spring Boot, and there are plenty of custom appenders available to put your logs into logstash/splunk/whatever

- Referrer spam filter - not sure about that one but CSRF protection comes OOB and enabled by default.

- Multiple tightly integrated template engines to choose from. Zero configuration code as well.

- Static content comes OOB and enabled by default, just put your stuff into resources/static.

I mean yeah, modern webapp is a complicated thing! So whenever I see somebody trying to do anything "not bloated" it means that I end up writing low level code that has been written multiple times again and again.

The other day I was trying to code a simple thing in Clojure because I love Lisp. Well, it's just embarrassing. I got to a simple page showing stuff from Postgres and the boilerplate/business code ratio was at about 70%. Manually configure the connection pool, manually start it, manually prepare your component systems, manually mention all of the dependencies for components, manually configure the template engine, manually enable static resources support in ring, manually configure and enable session support in ring. Then we come to authentication, and don't even try to sell me Friend. EVERYTHING is manual. The only good thing was "environ", which did the right thing, but again, with "bloated" Spring Boot it comes OOB and I don't need to configure it!

If you don't use something "bloated" it only means that you're writing code yourself, again and again.

No, Spring Boot is one of the best examples of the worst kind of terrible patterns in the land of Java development. The bloat in that framework is awful, and the gods help you if something goes wrong in the annotations-everywhere code for anything but the most trivial of applications.

If you limit the annotations to only the basics (controller, config, bean, requestmapping, etc.), not much can go wrong. It sounds to me like you haven't worked with any large Spring Boot apps and experienced the stability annotations can provide.

Except circular dependencies. I work on an app where a circular dependency failure happens depending on what order Spring finds our annotated classes. It made writing a faster bean scanner a little tricky because I had to replicate Spring's ordering method.

To clarify what you said - You wrote a custom "fast bean scanner" and it's not working properly? Or you had to re-write it because spring's bean scanner wasn't working? What version of spring is this?

Ah, the Spring bean scanning was working, but startup wasn't. The reason? Our app was apparently very brittle, and the mere act of registering beans in the wrong order would cause a circular dependency error during startup.

To be more concrete: to the original Spring bean scanner, we were passing in a set of package names, which it would scan. Spring registers those bean definitions in the order that it scans the beans. My custom scanner (which found and registered all the same beans) broke our app because it wouldn't start up anymore due to a circular dependency error. Once I sorted the bean definitions by the original package path inputs, that startup error went away.

I think we are on 4.x

Extra details: I used the fast-classpath-scanner library. I subclassed the annotation candidate component scanner class (well, something like that), and rewrote a method to load the resources for the string specified, treating the path as a FQCN, not a package path. Then I could feed that class the output from the fast classpath scanner (which was the list of classes with the annotations). Until I sorted the input by the original package paths, my app wouldn't start. Mind you, the method I overrode simply created bean definitions. But that ordering difference made all the difference.

I can dig up exact class names if you are curious. The scanner of course didn't replicate all of Spring's bean search capabilities - just the ones we were using. But it cut the scan time by 60% (several seconds).

Could you provide an example of what you consider terrible?

Agreed. Spring Boot is how you do frameworks right.

It is opinionated and provides libraries and solutions for almost everything you need to do, BUT it always allows you to use your own if required.

I love Spring Boot and I wish there was something even remotely as good and full featured in other languages.

I'm switching to Rails after doing years of Spring Boot apps. The bloat of Java is 3x to 5x.

Spring Boot 2 with Kotlin can be a lot less bloated, especially with the async Reactive Web option which uses Netty.

Good luck running it fast enough.

Meh, not really an issue.

Have you tried Luminus? It comes configured with Buddy for auth though the docs for both Luminus and Buddy could do with some work.

I haven't. But I definitely will try to use it, I'm really eager on getting to the same level of productivity in Clojure that I have in Java with Spring. I have a huge hope that Clojure + something like Spring Boot for it could make me even more productive. Some of the stuff that we have in Clojure really is wonderful, hugsql for example.

You'll never get anything close to Spring Boot for Clojure. Spring grew out of J2EE and Clojure's culture is diametrically opposite, favouring curated libraries. Clojure's main/(only?) web framework - Luminus - has a very small team behind it, though it does a fine job.

What are the barriers to using the Spring stuff through Clojure's interop capabilities?

Nevermind the additional complexity of integrating with payment gateways and handling complex authorization requirements and team invitations etc.

Even a 'simple' web app is a convoluted mess of shit if you are to run a real world production grade system. I'm so sick of all these 'hello world' toy examples.

It's interesting that you mention gulp and webpack when those tools too are now considered too complex and set to be usurped by something like Brunch.

It's a shame these tools keep being rewritten because there are definitely good ideas in all of them, but for some reason they can't seem to be unified.

All these tools start off as simple alternatives to the existing bloated tools. Then as they gain more and more features to support real-world situations they end up becoming as bloated as the tools they set out to replace.

Webpack and Gulp were never meant to be simpler alternatives. Gulp used streams instead of serially processing files like Grunt, which made it faster, but obviously streams working in parallel are more complex than just processing the files one by one.

Webpack pulls dependencies into one file and deduplicates them. This is obviously even more complex since now you have dependency resolution logic as well as dealing with the various module systems JavaScript has invented.

Right. I'd just hope that at some point that cycle ends and people try to fix existing tools instead of replacing everything wholesale with something new that will eventually fail again.

Sorry, there is always a new developer thinking "I can do this much better if I just start from scratch, getting rid of all the bloat".

> It's interesting that you mention gulp and webpack when those tools too are now considered too complex and set to be usurped by something like Brunch.

While Webpack is a little dense, it appears to strike the right balance between complexity and customizability (and probably more important for longevity, library buy-in). It doesn't seem like anything on the horizon is going to unseat it anytime soon... certainly not Brunch.

> While Webpack is a little dense, it appears to strike the right balance between complexity

I don't know, I thought the same thing about Browserify.

And now Parcel is here, gaining steam...

> usurped by something like Brunch

I've been out of Node for like 6 months, wth happened! I give up!!

Brunch is not usurping Webpack. But these are tools built on top of Node.js. They're for front end development, and have nothing to do with running a Node.js server on the backend.

Brunch was between Gulp and Webpack, so don't worry! And since Webpack there's only been Rollup and Parcel to consider :)

Somewhere along the line, we as developers have abandoned the Unix philosophy, especially the oft-forgotten second part ("Write programs that do one thing, and one thing well; and write programs that work well together").

Without the ability to compose multiple small libraries to form the exact solution that we need, we had little choice but to rely on the One True Framework to solve every problem that we will have.

This means if the One True Framework doesn't serve the exact need you have (and it almost definitely won't, there's a combinatorial number of requirements out there), it's time for a rewrite!

NPM is the closest environment to the Unix philosophy apart from Linux. Lots of small packages instead of a large base class library like all the other language ecosystems.

The thing is, even with Linux distros, most of the stuff you want is built-in by the distro. Once you start to add your own stuff, it can get really ugly and you have to be really pro to get anything done. It seems like every time I'm working on updating a Linux image I have to do some really bizarre thing where the package manager doesn't even work right and the instructions or some forum have me doing some mind-blowing workarounds I don't even understand.

So I think you are combining two different topics. I am all-in for libraries over frameworks. But the larger, more heavily curated libraries where you only need minimal customization are just objectively better. Having a large, curated standard library != a framework.

It's always like this.

I'm lucky enough to have been around the industry for a while. I could probably count a dozen or more things that started out "Like X, only without all the BS!" --- only to end up with just as much BS or more than X ever had.

It's an anthropological effect, not a technological one. A new generation of craftsmen faces a choice between submitting to the rules of the old guard and making up their own rules.

Reduced complexity is often a rallying cry, but I think the root of the phenomenon is in trying to find one's own social and professional standing in the situation where all the prominent positions are already taken and what little is left requires years of hard labor (complexity, certification, corporate review system, etc).

If this situation upsets you, consider the alternatives; they might be worse.

This reminds me of when I first read about fractals: a collection of phenomena I've been staring at all my life, but never really saw until someone pointed out how to see them.

Currently popular music mostly sounds like noise to me, but that's not the point. What are the current generation of musicians supposed to do, be silent and spend their lives listening to the great bands of my youth? It's impossible to match Pink Floyd in the style of Pink Floyd. They need a new style.

Facebook is losing traffic to whatever is the latest trend in social media, not really because people are suddenly paranoid about privacy, but because each generation needs a network where the previous generation is not.

And for as long as humans write programs, there will be a need to invent new languages, not because the old languages were technically inadequate, but because each generation of programmers needs a way to escape the shadow of the previous generation, the way acorns need squirrels to carry them away from the shadow of oak trees.

Exactly. It's not just music; consider modern and post-modern art. Even earlier art can be seen as a form of protest against the tyranny of the establishment of yore.

It's interesting to see which parts of our civilization came down on which side of the divide. The market economy, for example, is a great way for a young enterprising person to find their own footing away from the old (hence startups). Academia, OTOH, went totally the other way (hence grad school).

...and this is the problem: the assertion of escape. I don't understand it. I would write prose the same way Jack Vance (did) and Gene Wolfe (does) if I could write. There can be one true way for the expression of logical thought. We can try exceptional dialects, but they fail because they cannot encompass every concept we should expect, and they cater to the ego and the domain.

OTOH, the young always have a fresh perspective and they usually have good ideas based on the times. They should be listened to and mentored. Very few active older (1995+) folks are left in IT, after the management methods and purposeful purging of the last 10 years, to mentor them. Most of us weren't great at teaching anyway. It was a paycheck.

Love the analogy with the squirrels


Tech is ripe for applied group psychology and anthropology. The social, psychological, and anthropological factors are obvious to casual observers -- but completely invisible to the people they affect the most.

There's a reason for it, and perhaps overall it's a good thing....but that still doesn't mean that it can't be accepted and acknowledged as a facet of the community.

Yes, this is not only about the tech sector. You see it everywhere there are humans. There are always new people wondering: why do they do everything so stupidly? I can do it much better if I do it this way. Sometimes they are right and everyone is happy; sometimes they do it exactly the same way it was done ages ago and found to be inefficient or failure-prone, and they get to hear "There was a reason we did it the other way."

I try to explain why we do things the way we do and if they still want to try to change things, I make sure it's easy to go back again in case it fails the usual way.

No, not always. Some communities tend to do their research, and web developers have a decades-long track record of not doing that.

“X for Humans!” ©

I missed that in the presentation. Are you referencing something else when you say that:

> originally node.js was presented as a bloat-free alternative to "enterprise languages"


I am surprised that you've only mentioned npm. The current complexity of webpack, the number of frameworks, the language going through multiple ES revisions, and the type-safe alternatives to the language make it seem like eventual complexity is almost unavoidable and simplicity is a marketing fad.

Maven is still garbage and 100 times more bloated than npm. It literally has all the same issues npm has, only with less support.
