1. X is SO bloated and poorly engineered, full of bad legacy decisions.
2. We can totally do better; let's invent a new thing, Y!
3. Wow, Y is so clean and fast and understandable.
4. But it doesn't do this thing a bunch of people reasonably really need... let's add it.
(repeat 3 and 4 a few hundred times)
5. Y is so bloated and poorly engineered and full of legacy decisions. We can do better! (Go to 1.)
The grass is always greener, but mature complicated software is _usually_ complicated for... reasons.
As much as I beat the FP drum these days at work, I find the class syntax a much nicer way of organizing solutions to certain, pardon the pun, classes of problems.
Whether or not you find this to be semantic diabetes is a matter of taste, I suppose. I'm curious what, specifically, you find to be the major issue that makes you say they should have been left out.
The main issue with adding classes is that they're very, very complex if you want to make them useful. The initial version was pretty harmless, but it was also almost pointless. Now they need to backfill all of the missing features (e.g. private fields), which brings in an enormous amount of complexity. Most of the time, if I need private fields, I can just use symbols (not quite private, but close), or I can do
let privateValue = 123; // (`private` itself is a reserved word in strict mode)
// use privateValue in functions defined in this scope; nothing outside can see it
It adds very little (there ARE things classes are better at) compared to the insane amount of work that has to be put into the language to get it all working. Decorators are in a similar boat: many decorator usages can be expressed just as easily with a higher-order function, so adding the extra syntax is just bloat.
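To make both workarounds concrete, here's a sketch (all names invented for illustration): a Symbol-keyed field that's private-ish with no new syntax, and a higher-order function doing a typical decorator job:

```typescript
// A Symbol-keyed field: not truly private, but invisible to Object.keys,
// for-in, and JSON.stringify -- close enough for many uses.
const balanceKey = Symbol("balance");

class Account {
  [balanceKey]: number;
  constructor(initial: number) {
    this[balanceKey] = initial;
  }
  balance(): number {
    return this[balanceKey];
  }
}

// A higher-order function standing in for a decorator: wrap any function
// with logging, no extra language syntax required.
function logged<A extends unknown[], R>(fn: (...args: A) => R) {
  return (...args: A): R => {
    console.log("calling wrapped function");
    return fn(...args);
  };
}

const add = logged((a: number, b: number) => a + b);
const acct = new Account(100);
```

Neither is bulletproof privacy (a caller with `Object.getOwnPropertySymbols` can still dig the field out), which is exactly the "not quite private, but close" trade-off mentioned above.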
The cost isn't worth the reward.
That's a great benefit, but I'm not sure it's worth the trouble.
That's a really good point. I was about to disagree with you but then I created a thought experiment.
I wonder what the JS landscape would look like if ES6 modules had been introduced as part of ES5 about 8 years ago? I could definitely see how that would make classes far less appealing if we already had a great module system (sure, CJS existed, but browsers didn't support it).
Looking at the timeline of when these features were implemented in all major browsers:
* ES6 Class: implemented 2.5 years ago
* ES6 Modules: implemented 1 month ago
The rise of Java and the OOP revolution isn't that far behind us (2 decades seems like a lot in the tech world, but it's still within a single generation of humans).
You can do almost everything with JSDoc comments in Flow and TS, of course. It's awesome.
ES6+ flavors of JS & Typescript really made me take web programming seriously again.
Something about components just fits the class model well.
Well, I'm really happy that the class statement got into ES6, though.
Now that there is a class statement, it's one less rabbit hole to get trapped in. I actually get more stuff done now that there is one right way to define a class, or class-like object, with a constructor, properties, methods, etc.
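For anyone who missed the pre-ES6 era of competing constructor patterns, the "one right way" now looks roughly like this (TS-flavored sketch; the Point type is just an example):

```typescript
// Constructor, properties, methods, and getters in one declaration --
// no prototype juggling, no Object.defineProperty, no home-grown extend().
class Point {
  constructor(public x: number, public y: number) {}

  distanceTo(other: Point): number {
    return Math.hypot(other.x - this.x, other.y - this.y);
  }

  get magnitude(): number {
    return Math.hypot(this.x, this.y);
  }
}

const p = new Point(3, 4);
```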
Rebuild from scratch but also recreate all the existing functionality in a much better standard library and finally the chain can be broken. But nobody wants to do that.
MVC6 was never released under that name, though, and the changes have been rather minimal from ASP.NET Core v1 to v2, with straightforward migration guides, so it might look messier than it actually is if you were working through all the previews and release candidates.
Nevertheless, Microsoft has a long history of messy v1.0s, with most of the stability coming after v2.0, so you can consider the foundation pretty stable now that it's on v2.1 and beyond.
There are some challenges coming up with design changes to the compiler and C# that might overlap what F# already has but it'll get sorted out.
Great resources for getting started with F# at https://fsharp.org/
My personal preference is generally to install the SDK and use Ionide (http://ionide.io/) with VS Code, as it seems to work most reliably cross-platform.
I have .NET Core, but the whole thing seems to require Mono, and it isn't clear from fsharp.org whether you can do without it.
TypeScript, with JS underneath, is actually quite malleable: you can escape static typing at any point and revert to the plain JS object model when things don't map cleanly onto the type system, and still have types at the boundaries. That makes meta-programming trivial in some cases where it would look like a monstrosity in C#.
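A small sketch of what that escape hatch can look like (the shapes here are invented): typed at the boundaries, dynamic in the middle:

```typescript
interface User {
  id: number;
  name: string;
}

// Typed boundary: callers get full checking here.
function describe(user: User): string {
  return `${user.id}: ${user.name}`;
}

// Dynamic middle: generic key-renaming over plain objects, the kind of
// meta-programming that would need reflection machinery in C#.
function renameKeys(
  obj: Record<string, unknown>,
  mapping: Record<string, string>
): Record<string, unknown> {
  const out: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(obj)) {
    out[mapping[key] ?? key] = value;
  }
  return out;
}

// Assert our way back into the typed world at the boundary.
const user = renameKeys(
  { user_id: 1, user_name: "Ada" },
  { user_id: "id", user_name: "name" }
) as unknown as User;
```

The `as unknown as User` step is the trade-off: the compiler trusts you there, which is exactly the "escape static typing at any point" being described.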
F# is interesting and has a lot of advantages over C#, but few people seem to be willing to invest the time to pick it up in the .NET community.
So I don't really view .NET Core as a superior alternative. I've worked in JVM land; it's more mature, and while Java sucks, there are other languages on top of it as well that are decent to use (Kotlin ~ C#, Scala ~ F#).
Think you need to check yourself mate.
I believe the productivity of more "expressive" languages tends to be undermined by the loss of productivity that occurs when you're compelled to write blog posts or comment on Hacker News about how amazingly productive and expressive your language is.
If I need to waste that time sifting through boilerplate, then I'm pretty upset, because I get less shit done in that time window.
Chatting on forums is a casual brain teaser and keeping up to date on industry stuff.
> [I find that] using a language with limited expressiveness (C#) is not very productive for me.
Like, I'd figure you can be mad productive in any language (even COBOL?) although I'm only completely cosy in a couple. There's no need to be so dismissive of the tools that others use.
Why is C# not expressive? It has the DLR and the `dynamic` keyword, which behaves just like JS typing if that's what you want; it seems like your issue is really with static typing in general. Functional languages are nice, but it seems C#, with its slowly and carefully integrated functional extensions, is actually more productive for most developers.
Think about AutoMapper and then compare it to a TS solution using the spread operator. How much AutoMapper boilerplate do you see in your typical enterprise C# project?
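For comparison, here's roughly what that mapping looks like with spread in TS (the User/UserDto shapes are invented for the example); note there's no mapper configuration at all:

```typescript
interface User {
  id: number;
  name: string;
  passwordHash: string;
}

interface UserDto {
  id: number;
  name: string;
  isAdmin: boolean;
}

// "Mapping" is just destructuring plus spread: no profiles, no registration.
function toDto(user: User, extras: { isAdmin: boolean }): UserDto {
  const { passwordHash, ...safeFields } = user; // drop what we don't want
  return { ...safeFields, ...extras };          // merge in what we do
}

const dto = toDto(
  { id: 1, name: "Ada", passwordHash: "x" },
  { isAdmin: false }
);
```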
And that's not even touching on functional features. Like, you can't even have top-level functions in C#; it's "one class per file" dogma plus multiple wasted boilerplate lines and scrolling. I recently rewrote a C# program in F# - I didn't even modify much in terms of semantics (OK, having discriminated unions and pattern matching was a huge win in one case); just by using higher-level operators and grouping stuff, the line count went down to 1/3 and the code was grouped into logical modules. I could read one module as a unit and understand it in its context, instead of having to browse 20 definition files with random 5-line type definitions. I could achieve similar improvements by rewriting in TS or Python.
C# adds overhead all over the place; people are just so used to it that they don't even see it as useless overhead but as inherent problems they need to solve - like, how many of the popular enterprise patterns are workarounds for language limitations?
When I bring this up, people just assume I'm lazy about writing code - but I don't really care about writing the code out; tools mostly generate the boilerplate anyway. Having to read through that noise is such a productivity drain, because instead of focusing on the issue at hand I'm focusing on filtering out the bloat from the codebase.
I could rewrite your entire comment in reverse about how I find C# highly expressive and readable while dynamic languages or Kotlin (blech) are a mess of inconsistent whack-a-doodle experimentation.
But my opinion is useless.
It's more that C# is static typing done poorly IMO - a relatively limited type system that adds overhead compared to dynamic languages or more expressive static languages.
I agree it makes JS better. I agree it's a good tool for its purpose.
But "fascinating" ?
It's hardly the most elegant scripting language out there (Ruby, Python, Kotlin, and Dart don't have to live with the JS legacy cruft).
It has a very small ecosystem outside of the web.
The syntax is quite verbose for scripting.
It has very few data structures (and an all-in-one one).
Very poor stdlib.
It still inherits important JS warts, like a schizophrenic "this".
Almost no runtime support if you don't transpile it (which makes it hard to debug and means you need specific tooling to build).
And it's by no means the only scripting language with good support for typing (e.g. VS Code has great support for Python, including IntelliSense and type checking).
What's so fascinating about it?
What fascinates me is that we are still stuck with a monopoly on JS for the most important platform in the world.
The typing system is what is special though, especially in how seamless it is in adding strict types alongside pure dynamic objects, but also allowing you to choose pretty much anything in the middle of that spectrum depending on your definitions.
You can have a few strongly-typed properties mixed with others in a generic type that inherits from something else but can only take a few certain shapes. It's unlikely you need all that in most programs, but it's the fact that you can do it which makes it great. In fact, the TypeScript type system is actually Turing complete.
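A sketch of that spectrum in one file (types invented for illustration): fully static fields, a dynamic grab-bag on the same object, and a type that can only take certain shapes:

```typescript
// Strict fields mixed with a dynamic grab-bag on one object type.
interface Config {
  name: string;              // fully static
  port: number;              // fully static
  [extra: string]: unknown;  // everything else allowed, but untyped
}

// A type constrained to a few certain shapes; the compiler checks each branch.
type Shape =
  | { kind: "circle"; radius: number }
  | { kind: "square"; side: number };

function area(s: Shape): number {
  return s.kind === "circle" ? Math.PI * s.radius ** 2 : s.side ** 2;
}

const cfg: Config = { name: "api", port: 8080, anythingGoes: [1, 2, 3] };
const a = area({ kind: "square", side: 3 });
```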
Perhaps this video on Typescript from Build 2018 would help: https://www.youtube.com/watch?v=hDACN-BGvI8
That's pretty much my point.
> The typing system is what is special though, especially in how seamless it is in adding strict types alongside pure dynamic objects, but also allowing you to choose pretty much anything in the middle of that spectrum depending on your definitions.
> You can have a few strong-typed properties mixed with others in a generic type that inherits from something else but can only take a few certain shapes. It's unlikely you need all that in most programs but it's the fact that you can do it which makes it great. In fact, the Typescript type system is actually turing complete.
Apparently you haven't read my comment, because I clearly said it's not special. Other languages do it too.
> Perhaps this video on Typescript from Build 2018 would help: https://www.youtube.com/watch?v=hDACN-BGvI8
Perhaps this article would help: https://www.bernat.tech/the-state-of-type-hints-in-python/
I think we had a generation of ecosystems, with Node, Ruby, and Python, that tried to solve the unapproachable systems around the Java/etc ecosystems and make them more open.
They succeeded, but the next generation seems to have been about solving the plethora of tools that came with those languages. Rust, Go, etc., having first-party tools, are trying to improve on that, and yes, I think Rust is by far the best implementation I've seen.
I'm interested to see what the next generation is.
All the services I've deployed that are built on Rust pull in a kitchen sink of deps.
Granted, I get a static binary as my end result, so maybe it's fine.
To be fair, it's not impossible for some improvement to occur in this process.
For a while, "Burn the Diskpacks!" was a battle cry of the Squeak Smalltalk community. That sort of policy fights bloat, but leaves old users in the lurch. I think that we are now to the point where a language/environment can trim bloat while not abandoning old users. If the language has enough annotation, and has the infrastructure for powerful syntactic transformation tools, then basic library revisions can be accompanied by automated source rewriting tools. We were pretty close to it in Smalltalk, without the annotations.
Neovim is an effort to modernize and remove cruft from Vim, so they get to keep all the good parts and throw out the backward compatibility. If it works out, it can eventually replace Vim, not too different from what Vim did to vi.
I'd like to see similar stuff done to much of the GNU tools. Make, for instance, has to worry about backward compatibility and POSIX compliance, which makes it hard to progress. As of today there have been about 12,000 attempts to replace it with something else, and I find all of them inferior for one reason or another; they've all reinvented the wheel poorly. If someone had taken the fork-and-modernize approach, we might have something better by now.
It doesn't even have to be a "hostile" fork. The same can be done by the developers of the existing tools.
Think about Java, it solved a class of problems that C was unable to address (e.g. unsafe memory, native threads). Thus enabling a new class of programs. But the new class of programs created opportunities for new platforms to solve with the benefit of a clean slate and fresh design having learned from past successes and failures.
There's a disturbingly low level of historical knowledge passed along in programming. Some bits and pieces are encountered in a quality Computer Science curriculum, but usually in rarefied, theoretical form, and inevitably balkanized into dribs and drabs as part of subject-oriented coursework.
New platforms bring exciting and meaningful evolution, often at the cost of what techs like .NET and Java have a few decades' advantage in. It's also interesting to see what Java devs are innovating with themselves; Scala and Kotlin both have good things happening.
Maybe using one large, inter-syntax-friendly world like the JVM will help.
When experience is overlooked for youth, we relearn and reimplement the same libraries repeatedly in every new tech to feed some developers' need to build temples to their greatness.
Still, Fitzgerald's quote comes to mind: "So we beat on, boats against the current, borne back ceaselessly into the past." Technology is held back by reinventing the wheel.
That hole I can credit as giving C# the advantage in that tight niche, and stalling the development of the JVM platform in general.
By the time the rust on the JVM improvements was dusted off, all initiative was lost. Java was playing catch-up to the competition.
IBM gave up on the first counter-proposal, and Red Hat and Google didn't bother to rescue Sun.
So we might even have been left with either Java 6 or being forced to port our applications.
A loop, as it might seem, doesn't mean there is no progress made in between.
Of course, use cases will still evolve, and your initial understanding is always flawed; there's no magic bullet. Designing general-purpose software (or a language or platform!) meant to hit a wide swath of use cases flexibly is _hard_.
And then, yeah, like others have said, you need skilled, experienced, and strong leadership. You need someone (or a small group of people) who can say 'no' or 'not yet' or 'not like this' to features -- but also who can accurately assess what features _do_ need to be there to make the thing effective. And architects/designers who can design the lower levels of abstraction solidly to allow expected use cases to be built on top of them cleanly.
But yeah, no magic bullet, it's just _hard_.
As developer-consumers of software, we have to realize that _maturity_ is something to be valued, and that trading it for immature-but-"clean" is not always the right trade-off -- and not to assume that the immature new shiny "clean" thing will _necessarily_ evolve to do everything you want and be able to stay that way. (On the other hand, just because something is _old_ doesn't _always_ mean it's actually _mature_ or effective. Some old things do need to be put down.) But resist "grass is always greener" evaluation that focuses on what you _don't_ like about the original thing (it's the pain points that come to our attention), forgetting to take into account what it is doing for you effectively.
I've used an excellent one: the Refactoring Browser parser engine in Smalltalk. I've used it to eliminate 2500 of 5000 lambdas used in an in-house ORM with zero errors -- all in a single change request. (Programmers were putting business logic into those lambdas.) Like any power tool, it's not stupid-proof. However, it gives you the full syntactic descriptive power of the language. So if you can distinguish a code rewrite with 100% confidence according to syntactic rules, then you can automate that rewrite with 100% confidence.
Here's where it can go wrong: if your language is too large and complicated, there's a chance you'll run into a corner case that will trip you up. Also, it will always be possible for a given codebase to contain something which is too hard to distinguish, even at runtime. (You can embed arbitrary code in a Refactoring Browser rewrite transformation, so you can even make runtime determinations.)
"Bulletproof" isn't "invulnerable." A vest with an AR500 plate will stop certain bullets hitting you in certain places. It won't protect you from being stabbed in the eye or stepping on a landmine. Despite that, it is still a useful tool.
A large majority of people you’re chiding for not learning from others, don’t even realize those other things exist.
The car industry probably wouldn't be as big, if you had to learn a new tool for every new car.
The reason this is a problem is because web tech is constantly changing, to the point that so many of these projects end up in the scrap heap far faster than other tech. It causes problems with long term service due to compatibility issues with ever changing dependencies.
And sometimes it’s just resume building or intellectual curiosity itching.
There is something exciting about developers using a language in ways it was never designed. Then having the language change to support the changing ecosystem...
(as it happens 1.7 was released recently.)
Can you expand a bit more? Not sure what this means.
And the list goes on and on IMO. What's disappointing is that these were lessons learned a long time ago and now they're being re-learned.
I know.  and  have nothing to do with a package repository you can’t delete things from.
But anyways,  is at least a problem in many other package repositories.  would probably be a problem for many - given legal pressure (vendor your shit, that's the solution).  was a bug, not a design issue - no package management system is immune to bugs.
The one thing Java has is that it uses namespaces, which may help with  (but barely).  certainly has been a problem in PyPI.
Certainly all of this could happen to PyPI. We see it happen with js more, I think, because js happens to be extremely popular so there's a ton of packages for it and it's also much younger (especially node) than others.
He does have it in his slides.
Slide titled: "Regret: package.json", last 2 points:
> Ultimately I included NPM in the Node distribution, which pretty much made it the de facto standard.
> It's unfortunate that there is a centralized (privately controlled even) repository for modules.
Just trying to clarify that he does actually talk about NPM and his regret about it.
I'm not trying to say that Java has no accidental complexity of course, I don't want to open that can of worms :)
This is all just the process of evolution at play. What seems obvious today wasn't yesterday - applies to biology, material science, medicine, engineering, art, music, architecture, design, taxi services, marketing, government, politics, and so why shouldn't it be so in computing?
Sidenote: I love the humility of this video. I remember the days when node was first unleashed. I could not have imagined how it has changed the way we all work. It all seemed so obvious from day one, and here we are today. What a brilliant contribution.
(Amusingly, my iPhone autocorrect replaced "roots" with "Russ". Russ Cox is an engineer who works on Go. :)
We're definitely reinventing the wheel a lot, though.
That's when they go instead with the newer system, which hasn't been around long enough to accumulate criticism, and which is backed by enthusiasts still in the honeymoon period.
They owe their success to these people and so the way that they can pay it back is by using their voice as a tool for improving things.
Here's a concrete example. Your classic node.js or express.js sample app is something fairly simple like a hello world, or an IM server. A more complex sample probably looks something like that venerable nodecellar app from a few years back. In all cases the spiel is, "Hey, look how easy it is to create a web server with node."
Except that I'm looking at my node server source right now - for an honestly fairly simple app containing a handful of pages and a blog - and here's what I have:
- Routing (obviously)
- Cookie and body parsing
- Session management
- MongoDB integration
- Passport.js for authentication with a couple of providers (FB and Twitter)
- File system access
- HTTPS and SPDY/HTTP2 support
- Compression support
- Logging with winston and morgan, including loggly integration
- Referer spam filtering
- Pug templates
- Hexo blog integration
- Path resolution support
- Request validation and redirects
- Static content support
- Stripping headers such as X-Powered-By, and adding other headers such as the all-important X-Clacks-Overhead
- Error handling
There's probably a couple of other items I missed, but you get the idea. It seems like a lot but, as far as I'm concerned, this is express.js app MVP for anything you might want to put into production.
I haven't even mentioned the gulpfile I use to build all this, which targets desktop, mobile, along with embedded versions for a particular mobile app due to launch in the near future, and has become something of a behemoth. Nor have I mentioned that I have Cloudflare to sit in front of this, primarily to deal with the heavy lifting of some of the media files I serve up.
On the face of it, this might feel like "bloat" but it's all necessary to run the application and, like I say, a lot of it is the bare minimum for an MVP web app in node.
 Yes, I know I could/should switch to webpack, but gulp works, and switching to webpack "just because" doesn't justify itself with the value it might add.
When doing a project that takes only a few weeks, I would probably choose a framework that has everything in the box. But if you are building something that is going to be developed over a period of years, the reduction in complexity achieved by building your own can be life saving.
When I look at your list, most of the things fall into the categories of "Pretty easy to implement" or "Don't want at all". However, there is an advantage for not reinventing the wheel if there is no reason to do so. If there is a nice library that gives me what I want and doesn't impose itself too much on the design, I will use it. But the main advantage for not baking it into a big framework is that I can pick and choose what I want.
As an older programmer, I come from an era where libraries and frameworks cost a lot of money. We built stuff by hand because there were not a lot of other choices. These days, though, virtually every library and framework is free software (not only free of charge, but you get source code too!) It's like living in Candy Land, and I'm not about to complain about it :-) However, I think that programmers today reach too quickly for the pre-built and do not understand the long term advantages of bespoke development. Like most things, there is a balance to be maintained.
If you're using 10% of the lib, just implement that part yourself.
If the lib is critical, like openssl, bring it in. Other people have solved the hard problems for you.
But yeah, it's a balance.
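As a concrete example of the "implement the 10% yourself" case: if all you use from a utility-belt library is grouping, owning it is about ten lines (a sketch, not a drop-in replacement for any particular lib):

```typescript
// A hand-rolled groupBy: no dependency, no version churn, easy to audit.
function groupBy<T>(
  items: T[],
  keyOf: (item: T) => string
): Record<string, T[]> {
  const groups: Record<string, T[]> = {};
  for (const item of items) {
    const key = keyOf(item);
    if (!groups[key]) groups[key] = [];
    groups[key].push(item);
  }
  return groups;
}

const byParity = groupBy([1, 2, 3, 4, 5], (n) => (n % 2 === 0 ? "even" : "odd"));
```

For something like OpenSSL, the calculus flips completely, as the comment above says: the hard problems there are exactly the ones you don't want to solve yourself.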
I normally lose those debates though, and the thing reaches a point where the complexity of the code makes it impenetrable.
- Routing is done in a couple of lines of code (a mapping annotation on a controller method).
- Cookie and body parsing - no need to write any code for that; I just declare method parameters and all of the data flies in. Want validation? Just one annotation on a method parameter: @Valid. Custom validators are supported as well.
- Session management. It's just there for me and does the right thing by default. I can replace the storage with custom implementations, but by default no code is required from me.
- MongoDB integration - Spring Data MongoDB, and you only need to define interfaces using a naming convention. The code to access the actual database is generated for you.
- Spring Security supports multiple authentication mechanisms and gives you neat DSL to configure it.
- File system access is a kind of obvious thing.
- HTTPS and HTTP2 support provided by Spring MVC as well.
- Compression support - it's just "server.compression.enabled=true" in your config.
- Logging - slf4j + logback come with Spring Boot, and there are plenty of custom appenders available to put your logs into Logstash/Splunk/whatever.
- Referrer spam filtering - not sure about that one, but CSRF protection comes OOB and is enabled by default.
- Multiple tightly integrated template engines to choose from. Zero configuration code as well.
- Static content comes OOB and is enabled by default; just put your stuff into resources/static.
I mean, yeah, a modern webapp is a complicated thing! So whenever I see somebody trying to do anything "not bloated", it means that I end up writing low-level code that has been written multiple times again and again.
The other day I was trying to code a simple thing in Clojure because I love Lisp. Well, it was just embarrassing. I got to a simple page showing stuff from Postgres, and the boilerplate-to-business-code ratio was at about 70%. Manually configure the connection pool, manually start it, manually prepare your component systems, manually mention all of the dependencies for components, manually configure the template engine, manually enable static resource support in Ring, manually configure and enable session support in Ring. Then we come to authentication, and don't even try to sell me Friend. EVERYTHING is manual. The only good thing was "environ", which did the right thing, but again, with "bloated" Spring Boot it comes OOB and I don't need to configure it!
If you don't use something "bloated" it only means that you're writing code yourself, again and again.
To be more concrete: to the original Spring bean scanner, we were passing in a set of package names, which it would scan. Spring registers those bean definitions in the order that it scans the beans. My custom scanner (which found and registered all the same beans) broke our app: it wouldn't start up anymore due to a circular dependency error. Once I sorted the bean definitions by the original package path inputs, that startup error went away.
I think we are on 4.x.
Extra details: I used the fast-classpath-scanner library. I subclassed the annotated candidate component scanner class (well, something like that) and rewrote a method to load the resources for the string specified, treating the path as an FQCN, not a package path. Then I could feed that class the output from fast-classpath-scanner (which was the list of classes with the annotations). Until I sorted the input by the original package paths, my app wouldn't start. Mind you, the method I overrode simply created bean definitions. But that ordering difference made all the difference.
I can dig up exact class names if you are curious. The scanner of course didn't replicate all of Spring's bean search capabilities - just the ones we were using. But it cut the scan time by 60% (several seconds).
It is opinionated and provides libraries and solutions for almost everything you need to do, BUT it always allows you to use your own if required.
I love Spring Boot and I wish there was something even remotely as good and full featured in other languages.
Even a 'simple' web app is a convoluted mess of shit if you are to run a real world production grade system. I'm so sick of all these 'hello world' toy examples.
It's a shame these tools keep being rewritten because there are definitely good ideas in all of them, but for some reason they can't seem to be unified.
While Webpack is a little dense, it appears to strike the right balance between complexity and customizability (and probably more important for longevity, library buy-in). It doesn't seem like anything on the horizon is going to unseat it anytime soon... certainly not Brunch.
I don't know, I thought the same thing about Browserify.
And now Parcel is here, gaining steam...
I've been out of Node for like 6 months, wth happened! I give up!!
Without the ability to compose multiple small libraries to form the exact solution that we need, we had little choice but to rely on the One True Framework to solve every problem that we will have.
This means if the One True Framework doesn't serve the exact need you have (and it almost definitely won't, there's a combinatorial number of requirements out there), it's time for a rewrite!
The thing is, even with Linux distros, most of the stuff you want is built-in by the distro. Once you start to add your own stuff, it can get really ugly and you have to be really pro to get anything done. It seems like every time I'm working on updating a Linux image I have to do some really bizarre thing where the package manager doesn't even work right and the instructions or some forum have me doing some mind-blowing workarounds I don't even understand.
So I think you are combining two different topics. I am all-in for libraries over frameworks. But the larger, more heavily curated libraries where you only need minimal customization are just objectively better. Having a large, curated standard library != a framework.
I'm lucky enough to have been around the industry for a while. I could probably count a dozen or more things that started out "Like X, only without all the BS!" --- only to end up with just as much BS or more than X ever had.
Reduced complexity is often a rallying cry, but I think the root of the phenomenon is in trying to find one's own social and professional standing in the situation where all the prominent positions are already taken and what little is left requires years of hard labor (complexity, certification, corporate review system, etc).
If this situation upsets you consider the alternatives, they might be worse.
Currently popular music mostly sounds like noise to me, but that's not the point. What are the current generation of musicians supposed to do, be silent and spend their lives listening to the great bands of my youth? It's impossible to match Pink Floyd in the style of Pink Floyd. They need a new style.
Facebook is losing traffic to whatever is the latest trend in social media, not really because people are suddenly paranoid about privacy, but because each generation needs a network where the previous generation is not.
And for as long as humans write programs, there will be a need to invent new languages, not because the old languages were technically inadequate, but because each generation of programmers needs a way to escape the shadow of the previous generation, the way acorns need squirrels to carry them away from the shadow of oak trees.
It's interesting to see which parts of our civilization came down on which side of the divide. The market economy, for example, is a great way for a young enterprising person to find their own footing away from the old (hence startups). Academia, OTOH, went totally the other way (hence grad school).
OTOH, the young always have a fresh perspective, and they usually have good ideas based on the times. They should be listened to and mentored. Very few active older (1995+) folks are left in IT, after the management methods and purposeful purging of the last 10 years, to mentor them. Most of us weren't great at teaching anyway. It was a paycheck.
Tech is ripe for applied group psychology and anthropology. The social, psychological, and anthropological factors are obvious to casual observers -- but completely invisible to the people they affect the most.
There's a reason for it, and perhaps overall it's a good thing....but that still doesn't mean that it can't be accepted and acknowledged as a facet of the community.
I try to explain why we do things the way we do and if they still want to try to change things, I make sure it's easy to go back again in case it fails the usual way.
> originally node.js was presented as a bloat-free alternative to "enterprise languages"
I'm also a long-time user of Dart, so when he brought that up, and compared TypeScript to its shortcomings, I definitely agreed.
It will be interesting to see what comes in the future.
If it were up to me (which I guess it isn't), I'd probably prioritize portability/less complex builds, built-in FFI, a flat module system, and optimizing calls to C from JS.
If you aren't context switching between your backend code and your frontend code (even when both are JS), you're probably incurring technical debt in your architecture to be paid in even greater numbers of dev hours down the road.
When you are writing an all-JS full-stack app, do you really feel like you're only working on a single app, as opposed to two different apps which happen to share the same repo?
Is it? I was working with a system where the server is written in Erlang and the client (and another server) is written in Python. No problems with switching back and forth.
> Yes, that's a cost your brain is paying.
What cost? I said I haven't noticed any.
> new devs [...] then need to learn and understand both Python and Elixir.
No, they don't need to learn even a speck of Elixir.
What you described is a truism: one needs to learn two languages to write in two languages. Yes, that is obviously true. What I'd like to hear is an argument that switching between two languages used in a single system is costly, because I haven't observed that. That is what I'm disputing, not that learning another language has a cost.
There are more things you have to remember. Workflows in both languages. Of course it's more stuff, thus more context. And you have to use both languages constantly to stay fresh in them. The syntax isn't the only problem, just the easiest one.
Is it just as easy to maintain Spanish and English skills than just English?
"I don't notice it" isn't a very strong argument. I bet you don't notice the effects of slight dehydration and your diet and exercise on your output either. But if you were actually experimenting with it, I guarantee you could soon perceive it.
How wouldn't there be a cost to switching between two languages? Normally you could just try it, and then you'd know. Though the prerequisite is a system that is designed, with clearly designated borders between its parts, not a system that has just emerged.
Proving something's non-existence is a little like proving that you're not
a weapons smuggler. How would you expect to even start?
> There are more things you have to remember. Workflows in both languages. Of course it's more stuff, thus more context.
But this is irrelevant to switching between the languages. You have just as
much to remember if you write unrelated things, each in its own language.
> Is it just as easy to maintain Spanish and English skills than just English?
"Just as easy than"? Really? In a thread about languages?
You picked the wrong analogy. It is just as easy to write prose alternating every second paragraph between English and Spanish as it would be with just English. The prerequisite, obviously, is that you know both languages.
> And you have to use both languages constantly to stay fresh in them.
For some value of "constantly". It's not like people forget everything about
a language when they don't use it for a week or a month.
> "I don't notice it" isn't a very strong argument.
Well, at least it's some argument. On your side is only "how wouldn't there
be a cost?", clearly from a position of somebody who doesn't use many
Unless you flush the pages manually, your dirty pages (written files) live on long after your process died. Depending on system and configuration, minutes or even hours can pass before they are flushed to disk.
Also, the simplest HTTP request in express is handled in several ms, and that's A LOT in my opinion.
Points of interest:
Schedule files to be loaded. This will usually load around 1000 files:
Whenever a file is loaded, iterate through the points in it, do the filtering and write the results to the output file:
Wait until all files were loaded and processed:
Wait until the output was written to disk:
What happens is that ~1000 files will be loaded in the background, points are filtered in the main thread and even while some files are still being loaded, we already start writing the results to the output file.
> I tried to process simple CSV files in a very straight forward (but async) way and got reading 200mb CSV and just splitting it to columns (with simple split by comma) takes ~10 seconds.
CSV may be a much bigger challenge since it's ASCII data. Parsing text always tends to take several times longer than binary.
On writing, the double coordinate values are transformed back into a fixed-precision integer format and stored in the output Buffer object.
I'm not generating an intermediate buffer since that does decrease performance a bit. It's directly from input buffer to output buffer. Output buffer is initially allocated with the same size as the input, and before sending it to the output stream it's cut to the actual size.
One thing I've previously learned and which has shown to be still true is that writing individual bytes to a buffer is faster than writing integers.
So originally I did this:
outBuffer.writeInt32LE(ux, outOffset + 0);
Now I do this instead (the two scratch views share one 4-byte buffer):
// do once
let tmpBuffer = new ArrayBuffer(4);
let tmpUint32 = new Uint32Array(tmpBuffer);
let tmpUint8 = new Uint8Array(tmpBuffer);
// do many times
tmpUint32[0] = ux;
outBuffer[outOffset + 0] = tmpUint8[0];
outBuffer[outOffset + 1] = tmpUint8[1];
outBuffer[outOffset + 2] = tmpUint8[2];
outBuffer[outOffset + 3] = tmpUint8[3];
Likewise, you don't need to copy bytes in your last code sample. You can copy uint32s.
Also, the stride from one record to the next can be anything, e.g. 15. An Uint32Array would need a stride of 4 or a multiple of 4 to be useful.
I could try to create 4 Uint32Arrays with byteOffsets 0 to 3, and with a view length of a multiple of 4, then use the one that works with the attribute I'm currently processing. Not sure if that's really going to be faster but who knows.
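One wrinkle there: a Uint32Array view actually requires its byteOffset to be a multiple of 4, so views at byteOffsets 1-3 can't be constructed at all. DataView has no alignment restriction, which makes it a candidate for odd strides like 15; whether it beats the per-byte copy is something to benchmark. A quick sketch:

```javascript
// DataView can write a 32-bit value at any byte offset, aligned or not,
// which a Uint32Array view cannot do (its byteOffset must be a multiple of 4).
const out = Buffer.alloc(32);
const view = new DataView(out.buffer, out.byteOffset, out.byteLength);

const ux = 0xdeadbeef;   // sample value
const outOffset = 7;     // deliberately unaligned offset
view.setUint32(outOffset, ux, true); // true = little-endian, like writeUInt32LE

console.log(out.readUInt32LE(outOffset).toString(16)); // deadbeef
```

Creating the DataView once and reusing it across records keeps the per-write cost to the single `setUint32` call.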
Not to take anything away from Ryan Dahl's ability to reflect and be open about what he considers his design mistakes, but it probably also helps a bit that he walked away for quite a while before coming back. A bit of distance helps in these matters.
edit: forgot Scala's SBT, admittedly a build tool using Maven repos, but still an excellent example of how bad UX in this area can get.
I thought Ryan did a great job of explaining his regrets without giving the impression that Node was a "mistake", is "inferior", or anything so drastic.
They're ill-informed. GPP is correct that, for example, pip is fundamentally inferior to npm, and those that insist on throwing shade at npm on HN should be corrected. They're wrong, and they're insulting a sound, well-maintained project without basis.
Preferably by giving them better ammunition, since I do see NPM as substandard in quite a few ways, which is inexcusable when there do exist examples to learn from (whether as a positive or negative influence).
First, it helps to clarify whether we are talking about npm the client or NPM the repository and ecosystem. Client issues are generally easily resolved, just use a different client. For npm, this could be yarn. For cpan, this could be cpanm, or cpanplus, etc.
If it's indeed the repository we are talking about, there are some obvious things that could be done to greatly improve the NPM module ecosystem. For example, how about automating module tests against different versions of Node, to determine whether a module currently runs cleanly on the current and prior interpreter versions, on the platforms it can run on? How about doing the same for a prior module version, in case you're trying to figure out whether the version you're on has a known problem on the platform combo you're running? Or perhaps you want to know what the documentation and module structure looked like for a module a long time ago, like 20 published versions and over a decade back, because sometimes you run across old code? Or, as an author, the ability to upload a version, even just for testing, and get an automated report a couple of days later about how well it runs on that entire version/architecture matrix, with any problems you might want to look into?
In case you didn't notice the trend, I'm talking about CPAN here, which has been in existence for over two decades, and many of the features I've noted have been around for at least half that time. All in and for a language that most JS devs probably think isn't in use anymore, and on encountering a professional Perl developer would probably think they just encountered a unicorn or dinosaur.
Sure, NPM isn't all that bad compared to some of the examples that were put forth, but the problem is that those examples are a limited subset of what exists. Given the current popularity of JS and the massive corporate interest and sponsorship, I frankly find the current situation somewhat disgusting. The only thing keeping JS from having an amazing module ecosystem is ambition. Sure, NPM might be a sound, well-maintained project (points I think are debatable), but it could be so much more, and that's what we should be talking about, not the almost-annual fuckups they seem content to keep dealing with.
As with much of programming language design and implementation over the last 3+ decades?
below> that could have been mitigated or entirely avoided by surveying best practices
Yes, people who would spend their time working on language designs and implementations, should at least be familiar with the many surveys of best practices. Surveys of repos, type systems, memory layout, parallelism, and so much else. Language choices are intertwined and subtle, and adhocery has enormous downstream costs to the field and to society. The programming language design and implementation wiki exists for that reason. To accessibly distill our collective experience. Not using these resources is negligent - a disregard of our profession's responsibilities to society.
Oh, wait. Our field can't be bothered to create surveys of best practices. Or a wiki. Knowledge is inaccessibly dispersed among balkanized and siloed human communities, assorted academic papers, and scattered code.
Shall we continue to blame pervasive failure on individual language developers and tools? For how many more decades? At what point do we start addressing it as a systemic problem?
:) So I agree with your observation, but suggest the problem extends far beyond package management systems.
That was all I was responding to.
I definitely learned some cool stuff from your comment, and appreciate that, but my point was simply that the all the drive-by FUD that npm gets on HN is unwarranted.
> I frankly find the current situation somewhat disgusting.
This feels so hyperbolic though. The things you mention are cool 'nice-to-haves', to say not having them is 'disgusting' is a huge stretch in my opinion.
What I find somewhat disgusting is the massive amount of mistakes they've made over the years, and the time they've had to take to fix them, that could have been mitigated or entirely avoided by surveying best practices from other package management systems that have gone through the same pains.
2018-05-28 - ERR! 418 I'm a teapot (this is not a joke)
2018-02-21 - Critical Linux filesystem permissions are being changed by latest version
2017-08-01 - Typosquatting package names
(a little obtuse, but moderated package namespaces with
trusted maintainers can mitigate this, and spread load
from levenshtein distance checks.)
2017-11-03 - Visual Studio Code 1.7 overloaded npmjs.org, release reverted
(a 10% increase in NPM load, specifically to 404 pages,
caused NPM to fall over due to naive 404 handling and,
apparently, poor ability to scale. Good thing they
caught it at 10% instead of the 200% it would have
been at full rollout.)
2016-03-29 - changes to npm’s unpublish policy
2014-02-28 - npm’s Self-Signed Certificate is No More
2012-03-08 - npm (Node's package manager) leaks all user password hashes and salts
I mean, I would cut them a little slack if they seemed to have plans for making stuff better and a roadmap and it was just a matter of time, effort and resources they were lacking, but it seems to continuously be a case of them waiting until the shit hits the fan and they're forced to first take a look and see how to fix this new problem they've never envisioned, and then figure out their solution. Sure, it can sound hyperbolic initially, but I think that's just because people haven't really stopped to take stock of what's really going on here, and how it's not really getting better in any useful way. In the midst of emergency fixes is not how you should plan your new features. :/
I'd put it par with rubygems, ahead of pip, gradle, maven, a little bit behind mix, and far behind cargo. Not a bad spot to be by any means.
For the purposes of this discussion, it is useful to note that cargo was written by Yehuda Katz (wycats), who had previously written Bundler, and so actually knew what mistakes he had made before and had experience in this specific area, which apparently (I haven't used it yet, but I have heard lots of good things) let him finally build something truly great.
* The maintainers have pushed several breaking updates by mistake (I'm a teapot recently).
* There have been a few cascading failures due to the ecosystem (leftpad).
* node-gyp (alluded to in the talk) breaks cryptically on install for different operating system/package combinations. It also obscures the actual package contents.
* The lack of package signatures and things like namespace squatting significantly hurt the overall security of npm.
And let's not forget how terrible things were pre-yarn with the nested folder structure of node_modules and no lock file.
Compare that to NuGet where I've literally never had any of these problems.
Exactly reproduce a build at a later date
Part of it is technological (npm didn't have package-lock.json until very recently), part of it is organizational (the npm repository is surprisingly fluid), and part of it is cultural (the JS community likes zillions of tiny constantly-changing libraries). The net result is that I cannot walk away from a JS build for three weeks without something breaking. It breaks all the time. UGH.
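For what it's worth, now that the lockfile exists, the repeatable-install story is at least expressible; a command sketch (assumes npm 5.7+, which added `npm ci`):

```shell
# Generate/refresh package-lock.json alongside package.json
npm install

# Commit the lockfile. Then, on CI or a later checkout, install exactly
# what it pins; npm ci fails loudly if the lockfile and package.json
# disagree, instead of silently resolving new versions.
npm ci
```

This fixes the technological part; the fluid registry and the tiny-library culture are, as noted, separate problems.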
It's still my favorite tech talk, very fun to watch.
Then a few minutes later he says “I thought being able to specify URLs in import statements would be cute...”
Uhh...Houston, we have a problem with this one.
I could go on but you get the idea. A package manager is not simple and requires A LOT of choices. The best one I’ve seen is old CPAN.
Obviously it's not what you should do if you are publishing a library for others to use, but for local use, and the kinds of exploratory scientific computing he was talking about, it sounds perfect.
It sounds like he doesn't want the runtime to handle dependencies at all.
You can always reference urls with version numbers present in them.