Lodash just declared issue bankruptcy and closed every issue and open PR (twitter.com/danielcroe)
240 points by omnibrain on Sept 16, 2023 | 219 comments



I am so conflicted on this! On the one hand, I fucking love it. Who hasn't experienced a backlog grooming session where you know it's impossible to reach the end of the backlog and you just sort of go along with the process of scraping from the top? There's definitely something miserable about that feeling.

On the other hand, it's not like the issues are gone, they're just tagged differently, and if everything is just tags and organization then why not go with the flow rather than exerting control in an attempt to achieve this pseudo-perfect empty issue list. There's gotta be benefit in keeping those notes around and visible, right? Sort of feels like deciding to throw something out of your house while deep in a cleaning spree even though your subconscious is (reasonably) nagging at you to hold onto it because you might need it in the future.

On the whole I think I'm for it, though, if anything just for the cathartic release and presumed re-invigoration towards new issues.


For a bit more context, the creator is doing a full rewrite.

I think it would probably have been cleaner to release the rewrite first, then close all the old issues as deprecated. You could isolate the rewrite branch issues with a version tag. Closing them while the new version isn't done may lead to contributors opening new issues on the old version without realizing that it is no longer supported.

But he's not just ignoring the issues and leaving problems in the code base, the entire project is getting a refresh.

https://twitter.com/jdalton/status/1571863497969119238



Jamie ranted about GNOME devs because he was a few decades too early for the clown show that is frontend Javascript development. Once upon a time the noobs started with PHP and gave it a bad name with their insecure spaghetti code, now they start with React and build lovecraftian towers of abstraction that need to be rewritten from scratch every 6 months. Most of the JS frontend ecosystem is held together by people with 2 years of programming experience.

EDIT: wow, the testicle-in-an-eggcup has gone. Even jwz mellows down with age...


I think it's important to highlight how the myriad of bad ideas in JS, and newbies implementing them, really is no fault of the newbies. It should be obvious, but I wonder whether people reading posts like these interpret them as hostile towards the newbies. It's the technically challenged pseudo-leadership out there that is to blame for most of it. Running head-first into "DX" while, as a whole, still consistently having some of the worst developer experience known to man, and not seeing this: that is a by-product of a largely inexperienced but overvalued senior layer in the tech world that finds big voices either via Twitter or more directly via pseudo-lifestyle dev channels on YouTube.

Some of them flaunt credentials that would fall apart immediately in real conversations but they are the ones that host the conversation so it can't really come up. Not all dev YouTubers are like that, of course, but I'm sure a few come to mind as people are reading this.

On top of the above, I think people aren't considering that software is often a reflection of values[0] and that people disguising theirs as "best practices" does not make them more valid than others. It's perfectly fine to not prematurely pessimize your solutions, and that often requires making choices that some people will see as wrong because they have different values, for example. This is not only fine as a personal value, but many businesses reach a point fairly fast where an ever-growing part of their work is performance oriented, despite what a lot of people tend to think. Solutions that see no real use, or have no plurality of users, tend to reach this point much more slowly (or never) than solutions that do.

The above is also interesting to think about; a lot of people don't value reasonably fast software because they've largely never experienced it. It's not an uncommon reaction to marvel at simple solutions that have no real optimization put into them but simply aren't written completely wastefully from the beginning, because they're so much faster than what people are used to.

[0] Bryan Cantrill - Platform as a Reflection of Values: https://vimeo.com/230142234


Probably not. HN began using rel="nofollow noreferrer" in links.


Oh yeah. This is going to turn out just fine /s No second system syndrome and whatnot.

Here is an idea: write the damn thing under a new name, and abandon the old one / offer a path to migrate.


The old version is not going anywhere.

I suppose the rewritten version will be a drop-in replacement with an identical / 100% compatible interface. (Else it's not a rewrite.)


> The old version is not going anywhere.

The same can't be said about the tickets and PRs for the old version though!


Lodash was 0 value to begin with, now it's going to the realm of negative value. Can't wait to debug compatibility issues of a convenience/utility library 5 layers deep into the dependency chain.


Just guessing here: I don't think it's going to be 100%. Sometimes it's not 100% even between versions of the same lib (breaking changes, major versions).

If it's good enough, people will start using it and the old version will just fade into obsolescence.


Lodash itself started as a 'better' version of underscore.js, and ended up bloated and over-engineered exactly as the prophecy says. If anything, this 3rd system will hit the right balance :)


Lodash started because underscore violated semver by releasing breaking changes as a minor version update.


I like it, it's the "put everything in a giant mailbag, label it, and dump it in a storage locker" approach. If the issues are still relevant, they can be resurfaced.


"...and when resurfaced, they will be stuffed into the 4th mailbag for further 'resurfacing'"


Return To Sender. Oh, wait...


> No FP wrappers. That fad is over. RIP your co-workers if you introduced that headache into your codebase. Definitely not team or human friendly.

Can someone comment on this from the linked tweet? I haven’t been a js dev in a long time. Is functional style dead?


> Is functional style dead?

No, the javascript ecosystem is just full of people who have taken some pattern that occurs frequently in functional programming, such as currying, to itself be functional programming, and thus write wrappers and overly complicated libraries to implement that feature and call it a functional wrapper or the functional version of the API, despite those features having very little to do with functional programming.
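
To make the "wrapper" part concrete, a minimal sketch of the kind of thing meant here (plain JS; names are made up for illustration):

    // a hand-rolled "curry" wrapper for two-argument functions
    const curry2 = f => a => b => f(a, b);
    const add = (a, b) => a + b;
    const addCurried = curry2(add);
    addCurried(1)(2); // 3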

Idiomatic javascript and typescript are full of functional programming, far more so than other major programming languages. Avoiding functional constructs would, in fact, make your code unidiomatic. So the functional style is about as far from dead as it could possibly be.

Additionally, not only has the ecosystem been adopting a more functional style as time passes (especially due to features that have been added such as destructuring and spreading, which heavily encourage a more functional style), some of the core APIs in the standard library that used to make (pure) functional programming a bit more awkward by mutating their arguments now have pure equivalents [0][1] as well.
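
As a concrete example of such a pure equivalent (Array.prototype.toSorted from ES2023; assuming that's the kind of API the links below refer to):

    const xs = [3, 1, 2];
    const ys = xs.toSorted(); // ys is [1, 2, 3]; xs is unchanged
    xs.sort();                // the older API mutates xs in place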

[0]: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe... [1]: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...


> currying

Ugh. It seems like such a combination of shallow, weird, and useless to me, I find it hard not to stop reading any "Intro to FP" article that sings its praises as one of the first things.

The big deal about FP is that you can see more easily what the damn function will do when you call it. A weird and long-winded notation for multiple inputs contributes jack.


But lodash has a curry wrapper? So I'm still not understanding what functional wrappers the author is talking about that are so problematic.


reduce() abuse is a peeve of mine.


What got me to stop using reduce was when I and another sr. held a theoretical "gun" to a jr dev's head and asked him what a block of code involving reduce did, and they didn't have a clue. They were pretty bright, so I just threw in the towel on it being useful to the health of a large codebase after that.

Still love map. But reduce can feel very "code golf" real quick.


So. That. https://blog.pwkf.org/2022/09/18/always-optimize-for-dummies...

"Easy to read" means "easy for others". Current efficiency is usually borrowed on future one.


Which junior devs?


I’m confused. Do you not think that would have been a good opportunity to teach that person what folds are for?

Fold/reduce is fundamental. It’s weird to me that its use would be discouraged or outlawed somewhere.


If your language has a `for` keyword, I would just say use the `for` keyword.


But why?

The language I use has various flavours of `for`. It also has various flavours of map and fold. It also has recursion.

They each do different things. Those differences are meaningful and useful. Why would you limit yourself to just using `for`?


It depends. It isn't the case for Ruby, for example.


How would you abuse reduce()?


MDN even had to make a whole chapter to discourage overuse of it.

https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...


Using reduce to perform a map is a fun way to annoy colleagues.
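
For illustration, a sketch of the anti-pattern (hypothetical `nums` array):

    // reduce doing map's job, rebuilding the accumulator array on every step:
    const doubled = nums.reduce((acc, n) => [...acc, n * 2], []);
    // the same thing, with clear intent:
    const doubled2 = nums.map(n => n * 2);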


Or .map() as a for..of replacement. I always wanted V8/SpiderMonkey developers to make .map()/.reduce() "pure" in the sense of "optimizing away" the expression if its result is not assigned to anything, and see the world burn.


I think .map() as a for...of replacement makes sense for simple cases. newArray = oldArray.map(el => aFunction(el)) is short, sweet and clean.


It's definitely short, but are you doing it for the side effects? Then I find .map() misleading. I'd much prefer a .forEach(), or a for...of.


If you’re mapping for side effects you deserve a good rap on the head.

The main value of "map" is to denote a 1:1 data transformation. Plus this gets very fraught with many languages mapping lazily, and some not guaranteeing the order of operation.


Use it at all, presumably.


I'm probably on the mild side of abusing it (I've never seen another developer on my team use it), but I also include self-describing variables for the values. If the reducer function itself is non-trivial, I'll give it a name, so the final product may look like:

    const idNameMap = itemList.reduce(toIdNameMap)
It should be intuitive that itemList is a list of objects familiar to the context of the code, and the result will be like:

    [{"id1":"Billy"}, etc.


It’s really not clear to me what the transformation is, but from the naming and result it mostly looks like it’s an abuse of reduce and should be a map, possibly feeding into an Object / Map constructor / factory?


> It should be intuitive that itemList is a list of objects

…But how?!

I can't see how anyone could build an intuition for this opaque line and the example of its apparent result. It's confusing enough that `idNameMap` actually refers to a list (or array), and not a map (or object).


I used to always use .forEach but now I use “of” as much as possible.


Map, reduce, and forEach are each for quite different things. When you say you always used to use forEach, are you saying that you used it for pure computations? Or is near enough everything you write effectful?


Yes. In Scala people seem to say it's the idiomatic way to sum a list of numbers, which never felt right to me. It just seems too much code to look at for that.


Summing a list of numbers is the exact kind of use case you should use reduce for. If it feels wrong to you it's just because you have more experience with for loops than reducers.

In languages like JS that have both styles, the place you probably want to use for loops over reducers is for more complex operations where mutation can simplify the code. For example if you have 10,000 elements in an array that you want to aggregate into 1,000 properties on an object (with a non trivial relationship between array elements and object properties), it's probably going to look nicer in a for loop than it is in a reducer with a bunch of spread and ternary operators.
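
A sketch of that kind of case (hypothetical lineItems data, much simpler than the 10,000-element scenario):

    // mutation keeps the aggregation readable:
    const totals = {};
    for (const { category, amount } of lineItems) {
      totals[category] = (totals[category] || 0) + amount;
    }

    // the same logic as a reducer needs a spread and a ternary per element:
    const totals2 = lineItems.reduce(
      (acc, { category, amount }) =>
        ({ ...acc, [category]: acc[category] ? acc[category] + amount : amount }),
      {}
    );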

IMO .reduce calls should probably be one line. If not then it's time to either write a for loop or compose more smaller functions.

Avoiding them entirely is just FUD though. To compare summing an array in JS:

  let sum = 0;
  for (const n of numbers) {
    sum += n;
  }

  const sum = numbers.reduce((acc, n) => acc + n, 0);
The latter is much nicer to read if you're used to both styles. What I particularly like about it is it clearly signals that 'sum' is a variable that we will be using further down. The for loop style leaves ambiguity as to whether 'sum' is going to be used later or if it's just some context for something that's happening inside the loop.

Actually, to throw a hot take in the mix here, I think people should be more accepting of multiple statements across single lines in C-style languages.

  let sum = 0; for (const n of numbers) sum += n;
That would be a perfectly fine way to signal to the reader that "this is a specific line of code to produce a sum value" (much the same as the reduce example), but people seem to have a dogmatic aversion to leaving braces off if/for statements or putting multiple statements on a single line.


I disagree.

The first version is something most programmers could read and understand right away.

I also don't understand why the second version signals use of the variable any better than the classic version, especially if you declare the variable right ahead of it, but that is the classic "C" style.

However, in a real system I would hope people use better names than "sum", which would give some comprehension of what the variable does / exists for.

What is the obsession with one-liners? Perhaps if paper were quite expensive and code were printed out on a regular basis, you could save a few sheets; I don't think one long line is easier to read than several short lines.

I suppose if everyone involved with the project prefers or is used to the second version it is fine.


> I also don't understand why the second version signals use of the variable any better than the classic version, especially if you declare the variable right ahead of it, but that is the classic "C" style.

Because you have no idea if the variable is used further down in the scope. Compare with something like parsing a csv file with support for quotes:

  let rows = [];
  let quoteMode = false;
  for (const c of csvFile) {
    ...
  }
rows is the end result, quoteMode is state only relevant to the process happening within the for loop. There's nothing past variable names to distinguish their intended use. Whereas:

  const rows = parseCsv(csvFile);
makes it totally clear.

The reason something like reduce can make for nicer code is because it lets you do simple operations (probably not parsing a csv lol) inline in a way that makes the code just as clear as a function call would be, without adding a bunch of extra layers of abstraction (meaning more scrolling and flicking between tabs).

Really it's not about the reduce function at all, it's about the "const foo =". Once you see that, you can completely disregard the rest of the line if you've already read it and/or are confident you know what it's doing. It acts as a refresher in a way that a for loop doesn't. After you get past the initial shock of seeing a scary one-liner (which gets less and less scary as you see them more), it makes the next 20 times you read it much more pleasant.


I just expect it to be my list.sum() or sum(mylist) and to trust that the implementation under the hood is reasonable.


It definitely is the idiomatic way. How would you do it otherwise? Recursion?


in Scala the idiomatic way to sum is the sum method on collections: List(1,2,3,4).sum

Although this is indeed implemented with reduce. But is it really too much code? List(1,2,3).reduce(_ + _)


What percentage of the now closed bugs still extant in the current version will be reimplemented in the rewrite?


Given the history of rewrites, I would estimate between 150-225%.


Requirements are hard, but ... you don't need bug reports to implement bug-free requirements. So the only value the bugs would have is if they are for behaviors that should be documented as requirements.


63.19%


The CADT model continues to demonstrate its relevance.


As a user, I find such a “cleanup” problematic if it means closing the issue. If the issue still exists in the product, it should remain open, for documentation, and so that users running into it can easily find it and add to it. I find it more honest if a project acknowledges that the issue exists by leaving it open.

As a developer, I’d rather use a filter to hide older non-critical issues, if I’m bothered about seeing the amount of issues.


Closed doesn't mean resolved, it means closed. Closing issues just shifts them to another tab, doesn't remove them entirely. What you propose, a filter to hide these, is exactly what has been done just through the official open/closed filters.


This is not accurate. When searching for issues, users won’t find closed issues by default. When issues are linked, closed issues are rendered differently (struck through). You can’t comment on them anymore without reopening them, and reopening them is often discouraged or not enabled at all. It is a drastically different state.


Either you are thinking of other project management tools or assuming that some of the "stale issue locking" and automated issue management tools that some larger projects use are default.

On GitHub, it is true that the Issues search shows only open issues by default, but I think users are quite aware that they may need to search for closed issues, since it's unlikely they are on the absolute latest version of a package, especially when debugging problems in production. Additionally, some projects close issues once the mainline has addressed them even if the fix isn't in a released version.

GitHub does not render closed issues as struck through, and it does not by default lock conversations on closed issues.

EDIT: I am not taking a side on whether or not it's a good idea to do this sort of "bankruptcy" mass close.


> Additionally, some projects close issues once the mainline has addressed them even if the fix isn't in a released version.

And that is a general issue with the maintainers not understanding what they're doing, or not understanding how issue trackers work.

If you close an issue that is still present in the current release, then you're doing it wrong - with one exception. If it is in fact functioning as intended and the issue was just a misunderstanding, then explaining that and closing the issue makes sense. (But even then you might want to update the docs or helptext or something to prevent future misunderstanding, if it's confusing enough.)

Closing an issue just because you don't feel like looking at it today just means that someone else will open a new issue tomorrow to report the same thing. And any history, workarounds, steps to replicate, etc. in the old issue will be lost/disconnected. That's not doing anyone a favor.


> > Additionally, some projects close issues once the mainline has addressed them even if the fix isn't in a released version.

> And that is a general issue with the maintainers not understanding what they're doing, or not understanding how issue trackers work.

> If you close an issue that is still present in the current release, then you're doing it wrong - with one exception. If it is in fact functioning as intended and the issue was just a misunderstanding, then explaining that and closing the issue makes sense. (But even then you might want to update the docs or helptext or something to prevent future misunderstanding, if it's confusing enough.)

The concept of an issue tracker was not bequeathed to us from on high. There is no single "right" way to do it, nor can you (context free) tell someone they're doing it "wrong".

Some people use issue trackers to inventorize bugs in current releases. Some people use them as a dev's todo list. If you do the latter, closing the issue once you've implemented the fix is perfectly reasonable.

> Closing an issue just because you don't feel like looking at it today just means that someone else will open a new issue tomorrow to report the same thing. And any history, workarounds, steps to replicate, etc. in the old issue will be lost/disconnected. That's not doing anyone a favor.

It feels like it's doing the developer a favour.


In my experience issues are usually closed with an associated PR, and the PR that fixes them gets merged into main/master.

I've never actually seen a project only close issues when they make a release. Rather they make a new release when there are enough resolved issues or new feature PRs. Though of course they can do whatever they want. There are a lot of projects where it is known that the latest git main should be used and the latest actual binary release is from years ago.


I keep issues open until a release resolves them for all my open source projects. It helps to prevent duplicates, helps me maintain good release notes, and ensures the open issue list is a more honest description of the software you get when you fetch it via your package manager.


> On GitHub, it is true that the Issues search shows only open issues by default, but I think users are quite aware that they may need to search for closed issues

I think users are quite unaware, given how we have no evidence they're quite aware, and the normal thought process is "closed == resolved" (fixed, wontfix, etc)

A normal thought process would be to close individual bugs as you confirm they aren't present in the new version – the end user already bore the burden of writing a bug report, and you owe it to them to actually determine if the issue is resolved before closing, even if the resolution is "wontfix".

Closing bug reports without actually caring about whether they were resolved is giving the finger to your users. Why even have bug reports at that point? As far as users are concerned, you'll just close any new ones for some new reason anyways, based on your track record.

Now: new version, all bugs closed. Next: new name, all bugs closed. Then: new logo, all bugs closed. After all, you close whatever bugs you want, whenever you want, for whatever arbitrary reason you want. And why not? It's your repo. Why does it matter if the bugs are fixed or not when you close 'em? You got a new thing! Close all bugs!


> and you owe it to them

Stop right there. It's an open source project. You owe the users absolutely nothing.


> You owe the users absolutely nothing.

Stop right there. When you ask people for feedback, be it 1-on-1s or bug reports, ignoring the feedback is ruder than never having asked in the first place.

If you decided to ask for feedback, and they're kind enough to take time out of their day and spend their efforts to help you out in the way you asked them to help you out, then yes, you DO owe it to the provider to not then tell them to bugger off with said feedback.

If you were going to do that, you should have disabled bug reports from users, instead of asking users to help you by providing feedback. Here is a helpful tutorial on doing that on GitHub: [0]

[0]: https://docs.github.com/en/repositories/managing-your-reposi...


If you perform marketing and/or evangelism for your project, it's not at all obvious that you owe your users nothing.


The act of marketing software doesn't invalidate the license, which often explicitly states that the software comes as-is and free from warranty / support.


Offering an as-is disclaimer doesn't rid someone of all social, moral, or legal responsibility.

Saying things that conflict with that disclaimer does indeed chip away at the effect the disclaimer has on your responsibility.


Sure you do. Anyone asking for feedback owes a response to said feedback.


Allowing feedback != Asking for feedback


opening your feedback system to users == asking users for feedback


I agree with most of this, but I do think that all but the worst of the users (who admittedly are much more visible than the better ones) do in fact understand how to navigate closed issues.


Wait what? Since when can’t you respond on a closed issue without reopening it? I’ve partaken in lengthy discussions on closed issues without them reopening. Obviously you can’t reply to locked issues of course, but that’s a separate thing.


Most of that is not really true. Where are closed issues rendered struck through? It's up to the repo to lock closed issues or not, and it doesn't appear this repo did that.


In my project a ticket that is open / closed / tagged timbuktu means whatever I want it to mean.

Personally I am not going to assume that people are competent enough to check the open issues but too lazy to check the closed issues, or simply too dumb to understand a comment like "closed because I am not going to fix this".


Closed does mean resolved. One way or another. If it's not resolved, it's still open. If the ticket was improperly marked closed, that just means someone else has to create a new ticket for the unresolved issue.


"wontfix" is a perfectly good issue resolution.


I started closing issues unceremoniously with a maybelater tag.

If I won't be working on it, if no one is working on it, I won't leave the bug open.

Either someone shows up and decides to put in the legwork, or it gets closed, even if it doesn't cross the threshold of wontfix (i.e. I won't accept a fix).


Closed doesn't have to mean resolved and the tickets would still be available either way. Closed is IMO ideal for these tickets as ultimately it means that they aren't being looked at.


Lots of projects on GitHub (not sure about lodash) automatically lock closed issues so you can't comment on it anymore. That closes an avenue for those that are impacted by the issue to discuss fixes, workarounds and alternatives.


Closed implies that a decision has been made that the issue is somehow not worthy of consideration anymore. That’s what bugs me as a user.

Whether any given ticket is being looked at can change over time. Closing it is a way to drastically reduce the likelihood that it is being looked at. As a user, I see no good reason why that should be done, in particular when it is applied across the board without considering the individual issue.


Closing means it's not worth of consideration at this time, it can always be reopened.

As a user you obviously see no good reason for that. As a maintainer, having more bugs open than will ever be worked on (e.g. because no one will put in the legwork) serves no good purpose.

PS: individually considering each of 100s of issues takes time and mind share that can better be invested elsewhere.


But they should be looked at. They aren't because we don't have infinite time every day to look at issues.

If those issues are still a problem nobody will find them, they'll make new issues and lose all the previous history, unless the UI is designed to support automatically suggesting reopening an old issue instead.


It's not necessarily true that they 'should' be looked at. Most old projects that follow the 'never close without verification' philosophy end up, eventually, with a majority of open issues not actually reflecting the current state of the project.

A middle-ground might be to leave open the issues with recent activity, with the idea being that at least the ones that are affecting people the most are not swept under the rug.


> Most old projects that follow the 'never close without verification' philosophy end up, eventually, with a majority of open issues not actually reflecting the current state of the project.

There's a crucial question here that I find not enough people ask themselves: does that actually matter? And if yes, why? What, specifically, is the material problem caused by this situation?

It often feels like people are just chasing 'inbox zero', under the assumption that "0 open issues == good", without any actual material problem being solved in the process.

(Github probably isn't helping here, with their undeservedly prominent placement of the open issue count driving this sort of behaviour...)


I'm also conflicted. On one hand I reach for it every time I'm back in JavaScript land. On the other I think that slightly less than half of this library should be in the standard library for Christ's sake.


Isn't this a task that large language models should excel at? Summarizing tickets.


Yeah, it’s why Basecamp doesn’t keep a backlog. Anything important will come up again.


Only until users notice, and decide that reporting bugs isn’t a good use of time.


As jwz put it

> This is, I think, the most common way for my bug reports to open source software projects to ever become closed. I report bugs; they go unread for a year, sometimes two; and then (surprise!) that module is rewritten from scratch—and the new maintainer can’t be bothered to check whether his new version has actually solved any of the known problems that existed in the previous version.


Wouldn’t it be more reasonable for projects with few maintainers, or even run solo, to have the individuals who reported a bug each spend 10-15 minutes validating whether the issue still exists, instead of the sole maintainer having to spend days if not weeks doing the validation?

Especially considering that the reporter is likely better attuned to observing the bug if it’s a bit more fickle or complex than the maintainer who may or may not be able to reproduce it on their end to begin with?

This might shift a bit when there’s a bigger team involved or when it’s a companion project to a commercial service.

What I tend to see with many (F)OSS projects is that very few are willing to roll up their sleeves, while simultaneously spending a significant amount of time making it known how essential the project is to them, making demands, and giving copious suggestions.

The vast majority just create new issues to make their wishlist known and/or supply a very vague bug report.

A subset of them might bother to submit a decent bug report and that’s about the extent their willingness to contribute goes.

Personally I don’t have the character to maintain a project simply due to not having the will to handle most of the comments I often see in a diplomatic manner.

That said, I do always try to track down the cause of an issue and submit a PR to fix the issue I’m reporting in an effort to do my part.


And a variation on that: the issue was reported against one major release, but a new major release (just incremental changes, not rewrite) is coming out, so close all issues against the old release, because the problem might no longer exist.


Or simply:

> report issue

> issue is automatically closed 6 months later due to lack of activity

...with absolutely no change to or comment on the issue.


Make a PR with some tests that should pass, but are instead marked skipped. That way, when the rebuild comes around, there's an easy path for the rebuild to see how it's doing, to see if things are at all improved.

If you care about it, you should put a test on it.
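
A minimal sketch of the idea, assuming a Jest-style runner (the specific test case is hypothetical):

    // Skipped so it documents the known issue without failing CI;
    // the rewrite can unskip it to check whether the behavior improved.
    it.skip('chunk() handles a size larger than the array', () => {
      expect(_.chunk([1, 2], 5)).toEqual([[1, 2]]);
    });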


How would that help with jwz’s issue?

He’s specifically talking about projects that just reimplement all the bugs from scratch in a new, incompatible code base. Such teams certainly don’t port passing unit tests, let alone skipped ones!

For the full article, search for: jwz cadt


I’m probably not enough of an expert on their code to write that unit test. And I don’t have much faith in a team who blindly closes bugs to unskip tests and upgrade them to the new API.


It's like that with security findings, too.


Also my experience for closed source software projects.


This is bullshit. I've submitted hundreds of bugs to open source projects. This situation happens but it is extremely rare. Much less common than the bug being fixed, ignored, or closed by bloody stalebot.


John-David Dalton, the lodash author, wrote [this last year][1]:

> For the lodash rewrite I’m declaring tech debt bankruptcy. Starting from scratch with TypeScript and Rollup. No FP wrappers. That fad is over. RIP your co-workers if you introduced that headache into your codebase. Definitely not team or human friendly.

Don’t know if he’s sticking to this 100% but seems pretty close.

[1]: https://twitter.com/jdalton/status/1571863497969119238


> That fad is over. RIP your co-workers if you introduced that headache into your codebase. Definitely not team or human friendly.

Is he not the original author?

This is phrased like someone else added the complexity he's decrying. If he's the one that introduced it, then instead of talking about it introspectively or as something he learned from, he's talking about it like he develops projects according to fads and it's time to move on to the next one now that this one has ended.


lodash/fp is an optional distribution of lodash that did what the core library did, but did so in a more flexible, powerful, composeable way that makes it easier to construct powerful functions. it was separate from the core, but based heavily on it. https://github.com/lodash/lodash/wiki/FP-Guide

at the time, nothing was settled. we were in a pioneering mode of building; we didn't know what people would find useful or what the future would hold. there were a lot of different ideas floating around, and lodash was trying to stay the same while also offering a port to this barely-subtly-different paradigm, to see what value might be found there. saying that he "introduced" it feels like a crude reduction to me; he allowed people the option they asked for.

i personally think fp - in particular "pointsfree" fp - has huge downsides for understandability. but fp in general is also a much more succinct and capable way of expressing things, and multiple times a week i run into situations where auto-currying or reversed args would make the code i write much cleaner & not damage code comprehension.

rather than call fp a fad, & insult the author for ever letting it in, i think there's room to say that it's sad that js had to stay on the lowest common denominator. the future was unable to be changed, the old ways stuck. we lost some really good opportunity & capabilities. that said, i still think the pointsfree style is hugely damaging & responsible for greatly reducing the chances we had to improve. instead, we're not "moving on", we're going back to square 1, to the only thing we've ever known or done. that makes me a little sad, to have the pioneers pack up & move back into the city.

what really scares me is the attitude that every failed pioneering expedition is a "fad" and that we shouldn't ever try things. lodash-fp was a harmless small token offering to possibility, and should be respected, whatever your view on fp.


What were FP wrappers and what was the fad?


One of the big issues with the fp javascript trend was that coding patterns which work well in languages like Clojure or Haskell got a kind of "let's shoehorn this into js" treatment. There was a tonne of blogspam on "how to write functional javascript" that followed this pattern of currying all the time, in a language where it just doesn't result in particularly readable code. I'm glad it's falling out of fashion.


One of my subtle peeves is people who loved doing A Thing in one language feel like they have to cram it into the next language they move to, whether or not it works. Python is becoming a victim of this weird habit.


"...the determined Real Programmer can write Fortran programs in any language."


And you know I originally made that mistake! Only with Pascal. I was hoping for functions and procedures in Python, but I eventually had to learn that Python isn't about that. I had to take the language on its own terms.


Pascal was a peak programming language IMO, everything went downhill from there ...


What's bad and happening to Python? If it's type hints, they're an invaluable addition for large codebases IMO, and they're entirely optional unless you set up tooling to enforce them.


I will never make the tactical mistake of pointing to a single thing. Too easy to bikeshed.

No, I just keep pointing back to the idea that there should be one obvious way to do "it," whatever it is you need. As people vote for their favorite whatever from another language, the number of options of how to do something increases, drastically. The reason for a "language of mostly idioms" (maybe not to "Shaka, when the walls fell" level) is that we must read and maintain our code.

I am a Perl refugee. It had the opposite philosophy: many ways to skin a cat. The result was a write-once, read-only-in-abject-terror language. You never knew if someone decided it was Code Golf Day and they wanted to try to cram in a dozen things into a single string of executable line noise.

I think Python is (slowly) abandoning some of the PEP 20 precepts one hardly-objectionable feature at a time.


> I will never make the tactical mistake of pointing to a single thing. Too easy to bikeshed.

If you won't say what you're talking about then please stop commenting here.


I think you missed the point re: bikeshedding


The way that gp said "tactical" indicates that their interest is in winning personal battles rather than promoting interesting community discussion.


I was hoping to forestall pointless discussions.

Any single feature recently added would find an advocate and a defender. Therefore, selecting one of the bunch is a mistake because it does not focus on the actual problem: the pack of features, plural, getting added in.

It is like someone dumping a ton of sand in your driveway. "Surely you cannot object to this grain of sand. Or that one. Or this one over here." The problem is in the aggregate.


I would say the most noticeable version of this happened a while ago, when clearly a lot of Java programmers were working on the standard library and you wound up with a lot of heavily class-and-inheritance-based code which really did not need to be done that way in Python.


"Auto-curried function style (reversed arg) wrappers"

https://twitter.com/jdalton/status/1571870137690591236


Calling these "FP wrappers" is doing quite a disservice to functional programming, which by no means necessitates that functions are curried, and which is basically the idiomatic way of writing javascript / typescript these days. You don't need (and probably shouldn't use) wrappers of any kind to write perfectly ordinary FP-style code in javascript / typescript. I'd hope people wouldn't associate such wrappers with FP, because what they accomplish is something relatively orthogonal to FP.


Absolutely. I use Haskell as my main language for personal projects, and I like FP a lot — but I’ve seen some really horrible stuff marketed as ‘FP’ lately, especially in languages like JavaScript.


And what is that?


https://github.com/lodash/lodash/wiki/FP-Guide

Basically flipping around the args in a function and returning a "curried" function call. So instead of

    _.map(["1"], parseInt)
it's the other way around,

    _.map(parseInt, ["1"])
That way you can do something like

    const parseArrayFunc = _.map(parseInt)
    const parsed = parseArrayFunc(["1"])
This pattern is called currying in functional programming.


Partial application makes sense to me as a useful computer science concept. Reasoning about functions when they always have 1-arity makes proofs possible.

In software though, it just seems like code golf for people who can’t or won’t write a function as a way of sharing code between call sites:

  def serialize(fn, data, path):
    …

  yamlize = partial(
    serialize,
    yaml.dump
  )

  jsonize = partial(
    serialize,
    json.dumps
  )
Presumably some of these people wake up in a cold sweat about the shame of typing “partial(serialize, …)” twice and do this:

  def serialize(fn, data, path):
    …

  a = lambda f: partial(serialize, f)
  yamlize = a(yaml.dump)
  jsonize = a(json.dumps)
A good example of Don’t Repeat Yourself mutating into Never Repeat Yourself with ill effect, because the developer has self flagellated so much that the following code is deemed to have too much repetition in it:

  def write(data, path):
    …

  def write_yaml(data, path):
    write(yaml.dump(data), path)

  def write_json(data, path):
    write(json.dumps(data), path)


The complaint, I would think, being with the machinery to make _.map return a function or a value depending on the number of args, not with the flipping/arg order stuff?


Yeah I’d think so too. FP languages like Haskell or F# have partial application as part of the language itself, and you get it for free just by ordering the parameters that way.


Specifically I don’t know, though I do know what currying is. This auto-currying thing sounds like a reaction against some parts of JavaScript where you can get burned by default arguments.

A curried function is like a partial application for each argument. If mul:=(a,b)=>a*b then currying would be being able to say double:=mul(2). Currying everything and requiring all function calls to have one parameter per call — six:=mul(2)(3) — might have some benefit in avoiding bugs.
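
In plain JS terms (a sketch; the curried variant is named mulC here to keep the two apart):

    const mul = (a, b) => a * b;  // uncurried
    const mulC = a => b => a * b; // curried: one parameter per call
    const double = mulC(2);       // partial application falls out for free
    const six = mulC(2)(3);       // 6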

Generally speaking though my hunch is that the lodash author grew tired of the project’s focus on meta programming: noodling around with JavaScript using programming paradigms that hinder rather than help its users. Dropping the noodling and focusing on features is a “back to basics” moment, if you will.


As an illustration, I wrote a Sokoban game in that style (no mutation, no side effects, partial application everywhere). See https://github.com/Michael-Zinn/fpsokobanjs/blob/master/game...


Functional programming wrappers? I recently discovered Ramda JS as an alternative to Lodash / Underscore and it's surprisingly fresh to write in. It can get a bit complicated sometimes, but that's when GPT-4 comes to the rescue...


IME there was a fad (that thankfully never picked up) where people were talking about making giant unreadable piles of curry functions and calling it "finally, some good functional js".

They looked like write-only code to me, and at high risk of assaulting the GC (aka slower than lodash).


I’m truly under-educated on JS issues. What is wrong with FP in this context?


Nothing really, but besides not working too well with typings, it's just quite uncommon. I love it personally but strictly use it on personal projects; this is really not something one should introduce to a project that multiple people are working on.


Well done!

Looking at the first few PRs in the list, you can see things like:

- Adding a word in a comment.

- Adding config files to self-promote dev services.

- Switching from using var to let.

- Changing well-established behaviours of core functions.

- Removing semicolons.

- ...

I'm sure most of those were opened with good intentions of improving the library, but at some point for the maintainer they just become spam (or worse, a growing burden that makes you feel guiltier and guiltier for not giving it attention).

Celebrities hire bodyguards and fly first class (or private) to avoid the constant stream of attention and keep sane. I wonder what measures could be taken by celebrity OSS projects.


Personally, as an open source maintainer, I like the typo/wording-fix/automated-refactor PRs the most. There's almost no effort needed from my side to review them, so I almost always merge those very quickly. It's the PRs that implement huge changes that take the most time to review/discuss, and thus those are the ones I put off looking closely at.


The submitter creating the var -> let PRs (one PR per file...) was also doing this in other projects, one file per PR and all, and would've broken some of their legacy IE(!) users.

https://github.com/MithrilJS/mithril.js/pull/2880#pullreques...

That's a particularly obnoxious bot. Didn't even follow their workflow...

I suppose it depends on scale, and the team size.

And GitHub issues are 80% support forums, 20% bugs; 5% of the bugs come with reproduction test cases, if you're lucky.


Yeah, okay, of course you can take it too far. But in the general case I would not want to discourage people from opening "trivial" PRs, since those are the PRs that cost me the least time to manage while still improving the project (even if it's by a small amount).


There was a trend for a while for people to open trivial PRs against popular projects, I believe in an attempt to pad their resumes.


They can keep padding their resume if it means little fixes actually happen. Even typos and little translations, they affect the general reputability of the software. Sounds like a fair trade, especially as most hiring companies can just look at PRs and see what was actually changed very easily.


This is still a trend unfortunately.


It's called Hacktoberfest.


Hacktoberfest is coming


> Switching from using var to let.

One particular GH user submitted multiple PRs doing only that. Across multiple JS projects...

...feels like GH profile padding.


Not so much padding the profile as making the github contribution graph more solidly green.


The bigger news is that Lodash is migrating from Node.js to Bun: https://github.com/lodash/lodash/commit/97d4a2fe193a66f5f96d...


Woah.

I was starting to get excited about migrating my packages to Bun, but hesitating because I was afraid of compatibility or whether Bun is really here to stay or not - I have been slightly burnt by "Modern Yarn"; but honestly, seeing lodash make the leap makes me want to consider it more seriously now.


I’m a yarn burn victim as well, and it has made me a lot more apprehensive about fundamental decisions like these. They seem so promising, and your research seems to check all/most of the boxes... then you get Yarn 2/3.


Yarn berry has been amazing for me, what problems are you facing with it?


I think if you start from scratch with modern yarn, you’ll generally have an alright time. If you migrate old Yarn with third party workspaces to modern Yarn with first party workspaces, I think it could be a lot more frustrating. This might be better now due to improved documentation and bug patches. A few years ago it was quite frustrating.


Hmm. If I recall workspace migration was pretty straightforward for me, I thought you were going to bring up PnP, which was not well supported in the beginning. It was frustrating having to change so much of the overall workflow in just the second major version update, and this is exacerbated by their SEO issues, as yarn classic docs still show up before the new docs in Google searches.


Yes, PnP threw a real stick in things. Did you migrate to workspaces while using TypeScript? We had major issues with TS, eslint, and I believe prettier all cooperating with each other and other tooling. It was a nightmare for a week or two. Oh, and React Native was a total dead end for a while. Certain libraries in that ecosystem completely shit the bed with workspaces. Man, it's all coming back. Admittedly, React Native's ecosystem was an equal or greater contributor to that suffering. Even on my team at the time, we resented React Native far more than yarn for all of those struggles. We needed to patch a lot of libraries and write way too many pull requests to repositories that were heavily used yet bizarrely under-maintained and unresponsive.


I stay away from TypeScript where I can, preferring JSDoc, and the PnP fiasco was a great example of why. TypeScript's lack of support for the standard node module resolution algorithm is the reason things don't play well. AFAIK there are still issues there. I also avoid the React ecosystem for similar reasons.

Overall, I was blessed not to have to migrate any work TS/yarn repos but my own personal stack was hell for a while. Really glad things have mostly shaken out because yarn is definitely best in class now and is a joy to use. Too bad we're probably all going to end up on bun anyway.


For me I can’t imagine using anything other than Yarn 3 for Javascript dependency management


I think if you start there it’s potentially great. Going through the early migration from classic to modern was where the burn occurred in my experience.


I was using Yarn 1 before, and it was a nightmare. Slow and painful to package applications with lots of dependencies and to deploy.

So yes I did migrate to Yarn 2+, and it made CI/CD much easier and faster.


What's wrong with yarn 2+ (berry)?


My issue is the substantial shift in the API, major bugs and poor documentation in the initial workspaces implementation, then more subjectively, I didn’t like the PnP solution though it seems to work more reliably now.

I worked on a very large monorepo that was using yarn and the headaches that occurred from migrating to workspaces and newer versions of Yarn were absolutely brutal. In many ways things were better and other package managers technically couldn’t offer the same features, but there’s a reason packages like Lerna/Nx/Turborepo were used to accomplish similar things despite it being possible with yarn. It was extremely cumbersome, didn’t work intuitively with TypeScript, felt like a house of cards at times, etc.

I understand it’s better now, but getting hit by that transition was a burn if I ever saw one.


Update: Welp, isn't really working well on windows yet so I'll have to wait anyway.


Huh, that is big news. Especially considering that despite the v1.0 release recently it has performance problems on Windows.


I resist telling open source developers how to run their projects, because I am one myself, and get annoyed when other people do that to me.

But oof, if I were a user who spent some non-trivial time filing an issue & helping troubleshoot, or working on a fix or new feature and submitting a pull request, I'd be feeling pretty discouraged right now.


Most users spend trivial time filing issues, however. Most bug reports are crap.


Well, parent wrote about a user who took his time and put in effort.

No one wants to lose those, so it is sad for both sides.


In my opinion, if a bug report is crap, it is perfectly acceptable to politely say so with a canned response and then close the issue a week later if better information is not forthcoming. Declaring 'issue bankruptcy' shows that you do not care about your users time and tells me that I should not trust your project.


The issues are closed, not lost. People can probably raise them again later if they are still pertinent.



Issue bankruptcy is real. Speaking from personal experience, at some point maintaining OSS is incompatible with living in the real world. The work is free and often thankless (but not always). The problems are complex and compete with real world responsibilities, like work, family, rest. People get pissy and regularly want to argue with you or want you to read the docs for them. It also carries enormous responsibility if your OSS is popular.

I am anticipating a huge collapse in the OSS ecosystem as the people who drive them say "fuck it" and walk away for more important things.


That I can understand. These days I'm almost starting to think of nontrivial personal projects as more an addiction or self harm than real projects.

The OSS ecosystem is insanely inefficient though. There's lodash, underscore, and plenty of others. They don't all need to exist. There are also lots of libraries that only do one thing, which are mostly a subset of something like this.

Most of this stuff will be used with minifiers and tree shaking, and unused features will be removed, we don't need lightweight libraries when the heavy ones are easily optimized and mostly made of separate parts, so the dev effort mostly scales linearly.

I think OSS has plenty of manpower, even if everyone decided to only spend a quarter of the time they currently spend, if it weren't for the fact programmers like elegance and simplicity more than anything, and constantly want to rewrite to make things just a little bit better.


Lodash is a great library. It's something I wind up using a little bit of on almost every project I work on.

However, as JavaScript has become better and better, I use Lodash less and less. Every time I catch myself using it, I make sure to go see if there's a built-in for what I'm trying to do. I also find myself commenting on PRs with links to "you don't need Lodash for this" quite often.
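
A typical case (one hypothetical example; `_.uniq` versus a built-in):

    const unique = _.uniq(items);        // Lodash
    const unique2 = [...new Set(items)]; // native equivalent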

I still hope this isn't a sign they are stepping away from the project.


It's definitely convenient, but I think invariably I would prefer to just roll my own versions of its utils if it came down to it.


Why?


Because the surface area of lodash is so large and I need just a small fraction of it. And when I do need it, it's easy to write a simple transparent version of the needed util rather than invoke a mystical black box. I admit this is opinionated, and I have managed to work with lodash and keep my mouth shut about it.


Thanks for sharing - I disagree with the approach, but understand it.

Or rather, I feel that can work for individual coding, or possibly very small teams - but in the end I feel it usually works out to be unintentionally anti-team behavior.

It trades "I need to learn and understand this black box of behavior" for "I have this code that I know and can easily reason about". Meanwhile, for the entire rest of your team, it goes from "there is this common pattern that most or all 20 (or however many) of us know" to "19 of us now need to learn this non-standard behavior/implementation, which at least one person knows well."

Essentially everyone else has to learn more code so that one person doesn't. Agreed, it's usually not the most complex of code, but it's usually buggier, and it shifts the cost / burden to everyone else. The opposite of economies of scale and leveraging prior fixed costs of learning.

But again, I don't know your situation and will readily admit my opinion above doesn't in any way make it factually a better approach. Just my opinion on it, same as yours.


It works perfectly fine. You just copy and paste the code you need. If someone else needs it, they search the code base first, which is normal in large code bases. There are tons of different libraries that provide deep equality checks, for instance. When I work in a code base, I need to search for existing solutions in the code base before adding something new. It doesn't matter if the functionality is provided by a first- or third-party library.


I'm not sure if you're aware but lodash has individual packages for each of their functions.


I searched my monorepo and could find only two lodash functions installed: lodash.startcase and lodash.unescape.

It's nice to be able to install individual functions like these, which I don't prefer to write myself.


You can import individual lodash functions. I use lodash but there’s only 5 individual lodash functions in my dependencies.


Not super relevant for lodash in particular, but another reason why someone might want to do this is dependency hell. At least with code you write yourself, you know that the author(s) won't keep changing the API (very common in the javascript ecosystem, unfortunately), forcing you to either waste time constantly changing your own code to match the newer APIs, or stick to a single version for a long time. That old version may become incompatible with newer versions of other dependencies, your build system, your language, or your environment, which becomes an issue when there is something else you want to update that the old dependency doesn't support. So in some cases it might be easier and more maintainable to just write your own version of whatever small feature of the dependency you might otherwise have used, but knowing when this will be the case is a skill of its own.


Fewer dependencies make maintaining the project easier.


Popular libraries have been battle tested in prod, and my teammates probably have prior experience with them.


There are more than a few popular js libs that are pretty trash IMO, though I would not count lodash among them. Battle tested can often just mean teams use them for better or worse, and I can often see in the code how bad frameworks cause juniors to contort their solutions in all the wrong ways to make them adhere to a framework, instead of untying the knot and doing it the right way. Sometimes it is the framework being used incorrectly, and the devs should invest time to learn the API better. But often it's better to just learn how to do the solution sans framework rather than add to the stack of bespoke APIs to learn. And junior devs can become dependent on frameworks for every little thing, to the point where they will always reach for one even if the choices are poor.


How? That’s more internal code with bugs.


Some internal code with bugs is way better than lots of external code with other people's bugs.


Pretty much disagree on every front. Particularly for a major library.

There's a very good chance those "other people" have thought about the problem they're trying to solve way more than you have.


We are discussing lodash here. It's a javascript library with utility functions.

The link we are discussing states they have just closed 400 bugs, declaring "bankruptcy".

If you, like many people using lodash, are using 10 util functions from it to iterate arrays or split strings, I'm sure you're better off just maintaining your own functions.
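
For example, rough stand-ins for a couple of common ones take a few lines each (a sketch that skips lodash's edge-case handling):

  // Rough homegrown stand-ins for _.uniq and _.chunk
  const uniq = (arr) => [...new Set(arr)];

  const chunk = (arr, size) => {
    const out = [];
    for (let i = 0; i < arr.length; i += size) {
      out.push(arr.slice(i, i + size));
    }
    return out;
  };

  uniq([1, 1, 2]);      // => [1, 2]
  chunk([1, 2, 3], 2);  // => [[1, 2], [3]]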

And I say this as a user of prototype.js, another utility library, 18 years old, in a very old project. That lib is dead and unmaintained, and it's going to take me a long time to remove. So now I've inherited all their bugs and code, and it's mine; likewise, if you use the old lodash, the "bankruptcy" bugs are now yours.


You don’t think this learned helplessness is depressing?

Whether or not a team decides to own some portion of code is a complex topic. Sometimes it makes sense to leverage something external, and sometimes it makes sense to write it yourself.

In my business, we had an [ostensibly senior] guy insist we use an external technology because it would be too hard for us to own that piece ourselves, despite it being a core part of the product. Technical due diligence on the external technology showed that it didn't actually work as advertised. Six-figure price tag and apparently millions invested for a thing that doesn't work (and "doesn't work" in this case meant "corrupted financial data").

Whether or not a team owns some technology is highly context dependent, and your comment comes across as reductive.


Maybe start explaining why not, and then ask why someone would? I think it would create a more interesting conversation.


Very much agreed. The amount of mileage we get from spread syntax (literally the ...) alone has been amazing. https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe... Iterator helpers are shipping soon, and that'll be a huge help (async iterator helpers will be delayed for a while). https://github.com/tc39/proposal-iterator-helpers

In the olden days, I feel like the codebases I worked on needed .apply() multiple times a week to figure out some creative way of invoking functions. https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe... That's all gone now; I'd take even odds that 50% of my team knows .call and .apply.

Chrome 117 is shipping Object.groupBy() and that's gonna be a huge help in eliminating a lot of the last places we end up using lodash. https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...
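
For example (a sketch; Object.groupBy assumes Chrome 117+ or an equally recent runtime):

  const orders = [
    { id: 1, status: 'open' },
    { id: 2, status: 'closed' },
    { id: 3, status: 'open' },
  ];

  // Spread instead of _.assign / _.clone
  const withDefaults = { status: 'open', ...orders[1] };

  // Object.groupBy instead of _.groupBy
  const byStatus = Object.groupBy(orders, (o) => o.status);
  // => { open: [orders[0], orders[2]], closed: [orders[1]] }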


I haven't used lodash or underscore.js in 8 years. I just don't see what it has that can't be accomplished easily with map / filter / find, etc.
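
E.g. the common calls map over pretty directly (my rough equivalents, not exact lodash semantics):

  const users = [{ id: 7, name: 'Ada', active: true }];

  users.map(u => u.name);       // ~ _.map(users, 'name')
  users.filter(u => u.active);  // ~ _.filter(users, { active: true })
  users.find(u => u.id === 7);  // ~ _.find(users, { id: 7 })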


I wish some of the you-might-not-need-lodash websites would include the approximate file-size savings with each featured function. Some of them can be 5kb for a single function.


Issue trackers have two overlaid uses. First, they’re a way for maintainers to keep track of work they have to do, and second, they’re a way for the larger community (including users) to keep track of defects in the software. Declaring “issue bankruptcy” makes sense for the first use, but for the second, it’s erasing valuable information about the issues existing in the current version.


I wonder if for some very popular open source libraries, charging a small fee to report an issue might be some kind of viable business model, or at least a way to prevent too many issues. You might say this will result in people not reporting legitimate issues, but as we can see in this case it’s not the end of the world.


I think it would be cool if any user could add bounties to any issues and the maintainers could sort by highest bounty and then complete those first.


Relevant recent tweet from lodash's creator: https://x.com/jdalton/status/1701015897702572335?t=YLClO1zWa...


What is issue bankruptcy? Is it declaring that you don't have the resources to look at the issues?

It seems much better to tag them like Lodash has done rather than let open issues pile up like many other low- or no-resource projects do.


In IRL bankruptcy you declare you can't possibly pay back your creditors and lose "most" non-trivial assets, which are liquidated off to recoup a few pennies for those creditors while you rebuild your life from 0.

Lodash just decided to rebuild from 0 - literally, actually, since it's a rewrite, but especially in the bug tracker sense of the word.


Have you heard of email bankruptcy? That's when you empty your inbox unread and start over again.


There’s more discussion in this issue:

https://github.com/lodash/lodash/issues/5719

Frankly, I commend the author. If you're maintaining one of the most used open source packages, it must be overwhelming to get so much feedback. Realistically, it would take a team of 4 people working full time to manage a package like that, and it seems like it's just one guy's side project.


A move somewhat reminiscent of the GNOME 2 move that prompted jwz to coin the "CADT" term.

His essay seems to never get old.


The problem with this is that the issue search defaults to a filter of "open". So when I hit some odd behavior and search the issue DB for clues, I would like to also see older not-fixed and never-will-be-fixed issues.


I love this, and wish other projects would follow suit and not be afraid to say "no" to people asking for changes/fixes.

The obligations that the industry places on open source projects maintained by unpaid volunteers are unfair and unrealistic.

If someone actually cares about specific changes/PRs, they should fork the library and implement or apply the appropriate patches/changes.

A "no" is IMO infinitely more useful than an indefinite "maybe" (what most projects do, because they are afraid to say no to anyone) when it comes to these things.


I hadn’t heard of lodash. From Wikipedia:

> Lodash is a JavaScript library that helps programmers write more concise and maintainable JavaScript.

I love the sweet, sweet irony! (“Blind leading the blind”, and all that.)


There was a time when it (or underscore.js, its predecessor) was a useful collection of Array methods, but since then almost everything of value in underscore has been added to the Array prototype itself, so I don’t really see a point in using it anymore. The devs know this too, so now they’re just bored and rewriting for no particular reason.


They could do what the FastAPI project decided to do: close all issues and reopen them as "Discussions". Low open-issue count, but hard to get a sense of what is and isn't working.


I think everyone in the comments here is missing the point.

Lodash is a dead project and has not done a release for 3 years.

All the PRs and issues were for a version of the codebase that has not existed for a long long time. There is no working version of the codebase anymore.

I will be surprised if lodash ever comes back in any meaningful way.


I wonder if this would be a thing if everyone who created an issue had to reproduce it via a test.


While I don't think open issues are a problem, I very much prefer such events of bankruptcy to a slow death by stalebot, which happens quite often when the maintainers start to lack time or motivation.


Everyone gets a feature!


select all open issues,

Tag: won't fix, close.

Restart cycle.


What steps are being taken to avoid a Perl 6/Rakudo-style misstep?


I don't understand why all projects don't do this as regular maintenance. Old bug reports are almost always deadweight. Perhaps keep them in an archive if you must.


Why don't they just use the maintainer-never-looks-at-bugs + autoclose-after-30-days-of-inactivity approach, like all the civilized repos?


Let the one who throws the first stone at the "closed / won't fix" manager be the person who has to fix it themselves.


zero interest rate phenomenon


Is this a meme post or is there a real connection here?


Continued fallout: the stress and lack of tolerance for continued support is likely related to the more difficult macroeconomic environment.

All we have are correlations, though.


I never liked lodash, even when it was new. These days I just laugh when people tell me how great it is.

Useful? Sure. But if you care about performance and simplicity, just spend a day or two writing your own versions of the things that you need. Or, if rolling your own is not to your liking, go with ramda.

Edit: downvote all you want, but at least have a look at the lodash internals just in case. What an overcomplicated mess. It's old, really old.


Are there functions where the performance is significantly worse than what you could write on your own?

Not a rhetorical question. I don't use lodash, but as a primarily non-JS dev I've referred to the source at times when I'm writing JS, as a reference for the idiomatic way to do simple things.

I’ve found some weird things, like isEmpty is technically O(n), but this turns out to be a limitation of JavaScript rather than a lodash issue.


I don't think isEmpty is O(n) except in the case where the object is a prototype[1] - I assume that's one of those weird JS edge cases - otherwise it does what you expect, which is to iterate with a for-in loop and return on the first iteration, so it is O(1).

[1]: https://github.com/lodash/lodash/blob/4.17.15/lodash.js#L114...
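
The shape of the check is essentially this (a simplified sketch, not lodash's exact code):

  function isEmptyObject(obj) {
    // for-in stops as soon as we return, so a non-empty
    // plain object exits after a single iteration
    for (const key in obj) {
      if (Object.prototype.hasOwnProperty.call(obj, key)) {
        return false;
      }
    }
    return true;
  }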


Surprisingly, even if you return at the first iteration, it’s O(n) on V8 just to construct the iterator.


There are a couple of weird things in my experience, like the old implementation of zip, but by and large it was mostly either restrictions of JS itself or of the browser implementations.

Lodash was and still is a good standard / reference for these simple operations, imo.


Looking at the internals is often why I choose to use it. It catches edge cases I might not think of, and the performance tradeoffs have been negligible to none.

I don't use it for everything, and definitely try to use native and/or narrower scope methods when it makes sense, but it's saved me a lot of time and headache throughout the years.


> at least have a look at lodash internals just in case.

I think that's the issue. The author seems to agree with you, and is rewriting it.

I don't use JS or Lodash, so I have no feedback on the particulars of this instance.

However, I am in the final phase of a "Declare bankruptcy, hit reset" rewrite of the app I'm developing, and I'm really, really glad that I did it.

I did encounter many of the same issues as I proceeded with the rewrite, but I was prepared, and I feel I handled them far more elegantly than the original did.

But that was a luxury that I had, as we are not doing the MVP thing. If you are dealing with a shipped app, it's quite a challenge to do the bankruptcy thing.


He’s rewriting from scratch in TypeScript.



