Why do we need modules at all? (2011) (erlang.org)



Whatever you think of the idea, I think the reasoning here is interesting and instructive.

1. Start by questioning a long-held assumption ("Do we need modules? Why? Is it because we stored functions in files?" Etc...)

2. Consider alternatives / what ifs. ("What if we used a database instead of files?")

3. Think through implications of the alternatives ("How would we identify functions?")

IMHO, this is a powerful technique for generating new ideas. Step 4, not explicitly performed here, might be to take ideas generated from steps 2 & 3 and apply them to other domains. ("Ok, perhaps a database of functions wouldn't be great for code, but perhaps it (or something similar) would be great for some other domain...")


Joe was always good at thinking differently. Talking to him was such a treat; that combination of irreverence and experience meant a conversation was always educational.


http://blog.datomic.com/2012/10/codeq.html

I think codeq 2 is being worked on


and voila, serverless is born


Serverless is about running functions; this is about distributing their code. I agree that there is some overlap, but different problems are being solved.


This is one of those ideas that sounds very tempting at the small to medium scale but will definitely fall over at the large scale. Zooko's Triangle is one obvious problem as the namespace gets full.

It also looks like this is only suitable for functional languages, and possibly just non-OO languages?

What about the type signatures of these functions? What do they look like? How well can different types be substituted? Do we have to version the types as well?

How do we handle testing in this scenario? Every version of every function? How about combinations of functions?

How about deprecation? This is hard enough to do with standard library functions; how long has gets() been deprecated, for example?


Consider that if you control a large codebase, the need for "decentralisation" (as in Zooko's triangle) is not there. You can just rename things.

When integrating across maintenance boundaries, renaming isn't possible but a simple prefix is almost as good as a namespace.


At the point where the article talks about "the open source Key-Value database of all functions", you can't rename things.


The name shouldn't matter. What I mean is, we should only care about the output produced by a given input. The keys could simply be unique hashes but we also associate a full description of the interface, tests, and text describing how it should be used. We would just need tooling that makes it easy to search, run the tests with our data, and locally alias the function to something we understand.
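A minimal sketch of what that could look like (TypeScript; the key derivation and metadata shape are my assumptions, not part of any proposal):

  import { createHash } from "node:crypto";

  // The "official" name is just a hash of the function's source text.
  function keyOf(src: string): string {
    return createHash("sha256").update(src).digest("hex");
  }

  const src = "(x: number): number => x * 2";
  const key = keyOf(src);

  // Tooling attaches the human-facing parts as metadata under that key,
  // including a per-project local alias.
  const meta = {
    key,
    signature: "(number) => number",
    doc: "Doubles its argument.",
    alias: "double",
    tests: ["double(2) === 4"],
  };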


You are right, I'm off on a tangent here.


What if the key is module.fn?


On the topic of "where do I put this function", for "library/utility" functions, I really like the Hack Standard Library approach: Organize namespaces by the types of return values (or arguments). So we have Vec/Keyset/Dict namespaces for all library functions which return vec/keyset/dict, and the C (Collection) namespace for functions which take any collection and return something else (either an element or a number). I'm sure this can be extended, and I'm always toying with the idea of replacing lodash with a library that follows this convention. Here the objective is really discoverability - you usually know the types of return value/argument, while you often don't remember the function name.
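A rough sketch of the convention outside Hack (TypeScript here; the names are illustrative, not the actual Hack Standard Library API):

  // Namespace by return type: Vec.* returns arrays, Dict.* returns records,
  // C.* takes any collection and returns a scalar.
  namespace Vec {
    export function filter<T>(xs: T[], p: (x: T) => boolean): T[] {
      return xs.filter(p);
    }
  }

  namespace Dict {
    export function fromKeys<V>(keys: string[], f: (k: string) => V): Record<string, V> {
      return Object.fromEntries(keys.map(k => [k, f(k)] as [string, V]));
    }
  }

  namespace C {
    export function count<T>(xs: Iterable<T>): number {
      let n = 0;
      for (const _ of xs) n++;
      return n;
    }
  }

You don't remember the function's name, but you know you want a dict back, so you start typing Dict. and let completion do the rest.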


Also, maybe mark which arguments are essential and which are auxiliary (with defaults): functions receiving Lists of Foos are not really List-related, but Foo-related.

I always wanted a code completion tool which I could ask: hey, I have these two values, say, one field and one pango layout, but forgot what to do next. And it would figure out that I want

  pango_layout_set_text(
    layout,
    gtk_entry_get_text(entry),
    -1);
among the other suggestions. Soo cool.


I really like the idea of a no module system.

If you look at the /usr/bin directory of a Linux system there aren't any namespaces. There are 1,000+ binaries thrown into one folder, each with a distinct name, and things work nicely.

Imagine if you tried to group those binaries into one or more folder-driven namespaces. Where would things go? The amount of bikeshedding would be endless and I don't think it would be a better solution in the end even if you managed to pull it off.

All of Joe's points in that post are spot on.


>If you look at the /usr/bin directory of a Linux system there aren't any namespaces. There's 1,000+ binaries thrown into 1 folder, each with a distinct name and things work nicely.

That's the last example of nice design I would bring up, and possibly the worst argument in favor of no modules/namespacing.

A flat /usr/bin/... with 1000+ binaries is a bad solution UNIX ended up with because back when UNIX was created there were just a few dozen binaries, so it was "good enough".


> ...back when UNIX was created there were just a few dozen binaries, so it was "good enough".

What's actually wrong with it though? Why isn't it still "good enough"? It's hard to read the list, but Linux has plenty of tools to fix that particular problem. I don't really see the issue.


It's all a big dump, discoverability is bad.

Name clashes.

Version clashes. "All binaries inside a folder" needs renames to allow for a different version (even if it's a different minor version).

It's just the binaries, which was OK back in the "program = single binary" days (before "man", even), but now it splits the program from the different assets it uses (man pages, default configuration, image assets, etc.). This in turn makes deleting/moving more difficult, and adds all kinds of baggage to package managers.

>It's hard to read the list, but Linux has plenty of tools to fix that particular problem.

That's a description of the problem to me. Needing "plenty of tools" to fix an initial bad commitment.


> It's all a big dump, discoverability is bad.

I see it as easier to discover.

Imagine if you had to go to /usr/bin/output/user to find the whoami binary.

If you did, who is responsible for making those directories and what happens when you have a binary that could really belong in 5 different places?


This is one of the many problems Guix solves on a slightly lower level than most package managers, because binaries live inside Guix packages (/usr/bin is simply populated by absolute symlinks). Of course package managers also do this, but Guix kind of lets you symlink to a (unique) package, rather than `install`.


For stuff I compile from scratch I just use a symlink-based package manager I found online:

https://zolk3ri.name/cgit/zpkg/

It seems very minimal but supports my requirements. It makes sense to me, because everything gets installed (usually via make install) to ~/.local/pkg/foo-1.0/ and the like, and then symlinks are created to, say, ~/.local/{bin,etc,include,lib,man,share,var}. According to the source code, it is configurable via environment variables. It seems to share similarities with GNU's stow.


For one, I think the example doesn't really fit, as Linux binaries are much closer to modules than they are to plain functions: each of those binaries has numerous command line flags to change its behavior quite radically. Functions as you encounter them in a programming language are much simpler and more focused.

The other issue is that the approach is extremely inflexible: if you want to replace /usr/bin/program with a new version, everybody on the system is forced to use the new version, even if the new version might break stuff. Traditional `/usr/bin` doesn't provide a solution for this. You can of course work around that by using `/opt` or installing things in your home directory, but at that point you are just using the file system as a namespace.


Very good point.

I think the solution to this problem (module or not) will be the same as the solution to the shared lib problem.

npm is a dumpster fire, but the larger projects are usually fine (the /bin equivalent). The worst problem is that every large project pulls in different versions of whatever libs they wanted to use (which the singleton util lib proposed in the article would solve).


> What's actually wrong with it though? Why isn't it still "good enough"?

Anyone who has had to try to manage security permissions on a big flat list of files knows it's like holding jello with rubber bands.

As another example people might be familiar with: having to manage a big PHP or Ruby codebase where everything is separated into models, views, and controllers at a hierarchical level _above_ the business function. It's a nightmare. The useful business functionality is interleaved everywhere, which makes it very hard to pull codebases apart so that different teams can work on them.

With respect to Joe's comment here:

> Bad: It's very difficult to decide which module to put an individual function in. Break encapsulation (see later)

As much as I like Joe's thought process, that "difficult" part is otherwise known as "the real work". All systems which scale in complexity--as opposed to the simple problem scaled in performance--need strong boundaries. They also all have a hierarchical component to them.


This was the hardest thing to get used to in Linux coming from Windows. In Windows an installed program lives in Program Files and has a shortcut that starts it from its home folder. That's very different from a global install, with possible namespace collisions, that can be run from any folder.

I still don't fully agree with it but it makes some things easier. The Windows way also made multiple version installs possible.


> In Windows an installed program lives in Program Files and has a shortcut that starts it from its home folder.

And command line tools have that folder added to the PATH, and Windows has a real and ongoing problem with PATH getting too long for the OS to handle when you have enough of them.

Windows is GUI optimized with shell as an afterthought, Linux is vice versa.


FWIW you get the same on Linux by installing stuff in /usr/whatever/wherever, /usr/local/whatever/wherever or /opt (which is the closest to Program Files-like place) and making symlinks to the "global" binaries (those meant to be accessible via the shell) while keeping everything else in the application-specific folder.

It isn't uncommon to see binaries inside /usr/bin being symlinks to binaries in other directories, even if you stick with distribution-only packages.


You can have multiple versions of a program installed in Linux. Most package managers won't help you very much with that, but it's about as easy as Windows if the programs are interpreted or statically linked or compile easily with your system libraries.

NixOS and distri are notable package managers that do make installing different versions of packages pretty easy.


But a unix system has a module system: the filesystem and its folders. The fact that there are a number of "functions" that are in the same "default namespace" is just a convenience: if needed two binaries with the same name can coexist.

Also I will add that in fact the system binaries are already split: at minimum /usr/bin and /usr/sbin (but also, historically, /bin and /sbin, although I think in recent Linux systems the last two are just symlinks to the former, or the other way around).


Quick: I have a Linux box here that I use for some Scheme and some CLR development. What does `csi` run? chicken-scheme's interpreter or Microsoft's C# REPL?


Run `csi --help` and find out?


And hope it won't delete your working directory.

Seriously though, considering the CLI software I use occasionally on my Linux machine, for any given program one - and only one - of the following will tell you its version: -v, -version, --version, version. I can never guess in advance which one is the right one to use.


There are namespaces, you just don't see them. That's why there's python and python3, cmake and cmake2 (IIRC) etc.

With functions it's worse. In a flat namespace you will inevitably run into a problem of two people coming up with the same name.


> There are namespaces, you just don't see them. That's why there's python and python3, cmake and cmake2 (IIRC) etc.

IMO python and python3 is a very welcome feature, not a negative side effect of no namespaces. At call time it's super explicit, so you know exactly what version you're using.

But in most module-driven programming languages you might import the foo function at the top of your file and then use foo in 5 different places within that file. However, you spend 99% of your time working with the code in the file, not glancing at imports, so now you're left wondering not only where foo is coming from, but who provided foo. Is it from the standard library, your own code base, or a third-party author? Suddenly you need to keep all of this in your head and it sucks.

Phoenix (a web framework for Elixir) has been taking steps to remove a lot of loose function imports so you know what module they are coming from at call time, but that's because Elixir as a language has modules. But even still, explicitness where you're using it is so much better in the long run for maintenance, even if it involves typing a few more characters.
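For illustration, the two styles in TypeScript terms ("some-lib" and foo are hypothetical):

  // Bare import: five uses of foo() later, its origin is invisible at the call site.
  import { foo } from "some-lib";
  foo();

  // Qualified import: the origin travels with every call.
  import * as someLib from "some-lib";
  someLib.foo();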


As happens way too often, I think the solution should be a good IDE. There is nothing stopping your IDE from completing "foo" as "module1.foo" and offering "module2.foo" as well if it exists. Alternatively, the IDE could just display "module1.foo" instead of foo, because that is what the import dictates.

As always with dynamic things and editing code for them, this will be more difficult to get right when imports are dynamic.


> There are namespaces, you just don't see them. That's why there's python and python3, cmake and cmake2 (IIRC) etc.

That's a versioning issue; it would happen with namespaces too. You'll have to disambiguate somehow, and if a version number is the natural way, then that's the natural way regardless of whether the name is flat or namespaced.

Keeping the next incompatible version as a separate thing (eg python vs python3, instead of continuing python but with semvers saying that 3.0 breaks 2.x compatibility) is actually a good idea. It leads to fragmentation, sure, but it makes dealing with the different versions easier IMHO and incompatible versions may as well be totally different things anyway.

> With functions it's worse. In a flat namespace you will inevitably run into a problem of two people coming up with the same name.

Yes, completely agree!


Just spit-balling here... I think a proposal like this can benefit from making some assumptions about tooling: the database of functions and their metadata would logically be tightly integrated into your editor. The official name of the function could simply be a hash of its contents; the serialized AST, then hashed, could be another option for matching exactly syntactically equivalent code. Then human-friendly names could be automatically searchable and tacked on in the code via tooling which presents the metadata in-line but does not become part of the source.


You could use something like uuids for naming functions and attach meta data (including e.g. human readable names and/or namespaces).
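Something like this sketch, say in TypeScript (the registry shape is an assumption):

  import { randomUUID } from "node:crypto";

  type Meta = { name: string; namespace?: string; doc?: string };

  // The "real" name is an opaque UUID; everything human-readable is metadata.
  const registry = new Map<string, { fn: (...args: any[]) => any; meta: Meta }>();

  function register(fn: (...args: any[]) => any, meta: Meta): string {
    const id = randomUUID();
    registry.set(id, { fn, meta });
    return id;
  }

  const trimId = register((s: string) => s.trim(), { name: "trim", namespace: "string" });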


> uuids for naming functions

Microsoft COM would like a word. {20D04FE0-3AEA-1069-A2D8-08002B30309D}


People on HN keep reinventing COM. I swear every other week we have discussions like the above in at least one thread.


Yup. There's a lot of factors at play. We're a very ahistorical field; lots of folks are self-taught and therefore reinventing everything is normal to them; the dev universe is heavily siloed into "Web+UNIX" versus "Microsoft" versus some now moribund siloes (Websphere! Solaris! Alpha!); Microsoft have almost no community compared to the FLOSS world (although .net is changing this a bit); and finally the simple fact that COM and you and I are very, very old.

I'm not even sure that reinventing COM would be a bad idea - you can definitely do some great things with it, and the specific implementation details of COM are crufty. But doing so deliberately rather than accidentally would be better, with a review of benefits and problems of COM approaches.


COM has been reinvented, as UWP. :)

As for the community, maybe not on SV, but there are plenty of Microsoft communities and meeting groups.


Tell them it's called 'Core Foundation Plugin' and point at the apple docs, they'll start loving it.

https://developer.apple.com/documentation/corefoundation/cfp...


It's "Component Object Model" for anyone out there that almost gave up after googling "COM".


> and things work nicely.

no they don't


I’m a big fan of Joe...but I also remember PHP before it had namespaces...and it was a train wreck. IMO all of the improvements in the PHP world have happened because they added them.

I can see it for core libraries, but outside of that I think namespaces are necessary for the greater language ecosystem.


I was there in the PHP 4.x days too during the early 2000s, and you know, I really liked it. It just made things so simple to understand and work with. Look up a function, see a bunch of examples in the docs and use it right away without having to worry about imports and lugging around a ton of baggage.

I think the main reason PHP got such a bad rap for flat functions is because they implemented it poorly. So many things were extremely inconsistent when it came to naming conventions and argument orders that it became near impossible to remember them in a systematic way. But couldn't that problem be addressed today since we're aware that naming consistency is an important programming language feature?

> I can see it for core libraries, but outside of that I think namespaces are necessary for the greater language ecosystem.

Yeah I could get on board with that. Docker sort of does this now with Docker images. At the UI level (the tooling we use in our day to day like the CLI) we don't need to access "elixir" or "python" images with a namespace (Docker adds it for us automatically), but we are free to create our own brightball/elixir or nickjj/elixir images to avoid conflicts.

But in a programming world, I would like to type brightball_elixir at call time in my code instead of importing elixir from brightball, because with the latter I have no idea if elixir is coming from you, me, or the standard library unless I move away from the code and look at the import.


I'm mainly talking about the framework world at that time. They all reinvented EVERYTHING because they couldn't just easily include a shared database library, templating library, etc. They all reproduced their own with their own names to get around it.

That type of stuff. The core language was about the same regardless, but the ecosystem improved dramatically.


Ah ok. When it came to using PHP back then I didn't even use third party frameworks. I just cherry picked things like database libraries and pulled them into my code base.


You should re-read the article. The main point is that it will solve frameworks reinventing everything (not exactly what the author aimed at; this I am extrapolating). All utility methods will live in the proposed repository in the sky with all methods. Frameworks will just contain their higher-level logic, until that becomes common enough to use standard types in the util repository in the sky.


> I’m a big fan of Joe...but I also remember PHP before it had namespaces.

To be picky: module and namespace are different concepts.

C++ has namespaces and (soon) will have namespaces AND modules.

Namespaces can be seen in C++ as a pure hardcoded prefix for your functions and are a perfectly valid concept with Joe's idea of a "giant KV store for functions".


Well, with 1000 names you still don't get (many) name collisions, so no modules/namespaces is fine. At some point you will find yourself introducing smurf namespacing, and then I would consider modules/namespaces the better solution.


If you think about it, smurf namespacing is much simpler, and thus more elegant, than a "real" namespace system.
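I.e. something like this (illustrative):

  // Smurf naming: the namespace is baked into every identifier.
  function smurf_connect(): void {}
  function smurf_send(msg: string): void {}

  // A "real" namespace factors the prefix out, at the cost of extra machinery.
  namespace Smurf {
    export function connect(): void {}
    export function send(msg: string): void {}
  }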


In one sense, yes, but in another no. The main problem is that one can get really long names.


Depending on your philosophy, that may be a feature to you. I tend to like this effect, because it forces the programmer to avoid building deeply nested structures, which is too easy to do when you abuse modules/namespaces.


That is a valid point. There is a tendency for things to go the Java way.


Because long names in Java meant avoiding deep nesting?


When people criticize NPM for having millions of downloads for single-function packages, this is exactly the letter I always think of.

When Joe Armstrong proposed this, I thought, "Hm, that's an interesting idea." But I think NPM has shown that it isn't such a great idea in practice.


In Joe Armstrong's concept, the standard library still exists, it's just organized differently (i.e. it's possible to switch out parts with high granularity, but that's not the default use case). NPM's stance is "there is no standard library, just use something and hope for the best".

I.e. the release would still contain a curated, officially supported set of functions that a) work together and b) are reasonably sure to not execute miners, not display ads and not post your environment variables to pastebin.


Assuming "package" is not just another name for "module" and that it goes away once "installed" (i do not know about NPM), what would be the difference between 1000 single function packages and a single 1000 function package?


1000x as much metadata, and with small packages, you get more metadata than real code.


For what it’s worth, many of us are quite happy writing and using small NPM packages with clear responsibilities.

Maybe it’s self evident to you that this is a horrible thing, but it’s not to me.

But if you care to share, I would be happy to learn about the horrible things that will befall me, before they happen!


I think this idea makes a lot of sense.

I recently published a tiny library on NPM for changing the case of strings (snake_case, etc.). It felt so small and silly that I was tempted to add some other string manipulation functions, to make it a more "serious" library. But it struck me that this would just be silly. How often do you need a whole collection of string manipulation functions? Never! It would just add to the size of dist folders everywhere, with no gain whatsoever. I suspect this applies in many situations.
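For scale, such a library is essentially one function (an illustrative sketch, not the published package):

  // Convert camelCase / PascalCase to snake_case.
  function snakeCase(s: string): string {
    return s.replace(/([a-z0-9])([A-Z])/g, "$1_$2").toLowerCase();
  }

  snakeCase("camelCaseString"); // "camel_case_string"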


" How often do you need a whole collection of string manipulation functions? Never! "

Always! The Javascript world lacks a good standard library or something like STL that covers most of what you need during normal development. Instead you have thousands of little packages of varying quality. I would much prefer a few big libraries with more power and consistency.


There's no way I'd ever import a tiny library like that. Verifying that a library does what I expect it to do, uses an appropriate license for my project, isn't likely to break my project in the future, etc., is a fair bit of work, and it only increases as more tiny packages are added. The worrying over increasing the size of the dist folder is a symptom of using a bad package manager. I do believe that packages shouldn't be bloated, and should mainly focus on a single goal, but that goal should be at least somewhat broad in scope. Manipulating strings is a good objective for a package, but changing the case of strings seems too small to be worth it.


> How often do you need a whole collection of string manipulation functions? Never! It would just add to the size of dist folders everywhere, with no gain whatsoever.

I mean, that's where proper tree shaking voids this argument. When you include lodash (or, more precisely, lodash-es), only the used functions end up in the final bundle with a modern bundler.
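E.g. with an ES-module build (sketch):

  // Only snakeCase (plus its internal helpers) survives bundling; a modern
  // bundler (Rollup, esbuild, webpack) tree-shakes the rest of lodash-es away.
  import { snakeCase } from "lodash-es";

  console.log(snakeCase("HelloWorld")); // "hello_world"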


> How often do you need a whole collection of string manipulation functions?

All the time. It’s called a standard library and most sane languages have one.


> How often do you need a whole collection of string manipulation functions? Never!

Not initially, but as your code grows, you start to call out to more and more of those string manipulation functions... and if each of them is its own separate package, you get an inconsistent mess at the interface level - just like PHP used to be, except PHP didn't consist of individually downloadable functions with much more metadata than actual code.



The proposed key-value function database mentioned in the article sounds like the recently launched Function Repository in the Wolfram Language...

https://resources.wolframcloud.com/FunctionRepository/


What about module-level programming? Has the author ever heard of functors? How about information hiding and coupling data structures with their functions? I need just a glance at the ML world to find lots of valid use cases for modules, way more than just "provide a compilation unit structure".


He wrote this as a rumination to an Erlang audience about the Erlang language. It wasn’t really necessary to explore every angle about other languages’ use of modules.

He does discuss information hiding in an Erlang context, so he didn’t completely ignore your concern.

I’m not sure why you felt the need to (tonally) attack him.


I'm sure Joe Armstrong is familiar with functors.


Then why is he completely ignoring this topic in his argument?


The whole thread has interesting discussions. Keep in mind it's 8 years old: http://erlang.org/pipermail/erlang-questions/2011-May/thread...


I love lexically scoped modules. It's like global functions, but without the complexity. Node.js got this right, where modules are like an extension of the standard lib (the Node.js framework); too bad it's too slow. But in compiled languages, like D for example, there is no performance penalty! Some programmers make fun of small pure functions/modules... then they cut their own code up into tiny pieces that all depend on each other, without any reusability.


I think the Unison language goes exactly in that direction

http://unisonweb.org/


> all functions have unique distinct names

> To avoid all mental anguish when I need a small function that should be somewhere else and isn't I stick it in a module elib1_misc.erl.

I can totally relate. It doesn't reduce collisions significantly if a function is named Foo::Bar() instead of Foo_Bar(). But the latter has the advantage that we are free to move the function (avoiding initial decision paralysis, and later it's easier to clean up), and being required to reference this function by its fully qualified name leads to much more readable code (IMO).

What I've been doing lately is I just make descriptive function names. I don't even use a prefix, but may still include type names in the function name. Example, slightly long name: copy_from_StreamsChain_to_Textnode(). That's totally fine in terms of avoiding (the FUD concerning) collisions and it flows very naturally.

Just like with Objects/methods, not doing namespaces leads to a significant reduction in decision paralysis (time consuming taxonomic concerns - "where should I put...?") for me.
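In TypeScript terms the style looks something like this (names and types illustrative):

  // The "namespace" lives in the name itself, so every call site is
  // fully qualified by construction.
  type StreamsChain = { chunks: string[] };
  type TextNode = { text: string };

  function copy_from_StreamsChain_to_TextNode(src: StreamsChain, dst: TextNode): void {
    dst.text = src.chunks.join("");
  }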


The converse question to "where should I put" is "where could I find". Namespaces and objects make it easier to find related functionality so you don't have to reinvent it. Without good techniques for locating relevant useful functions, you're just writing write-only code. And the technique of "if I was going to write this, where would I put it" doesn't even work when I'm trying to find my keys, let alone among all the developers of the world.


That's an interesting thought. But consider

- If "where should I put" is unclear, "where should I find" is equally unclear.

- Code should be mostly "local" anyway (what's the word again?)

- Text search exists, and IDEs nowadays even do fuzzy search.

- I didn't say we should make a total mess. If there is an obvious organizational strategy, just go for it! But namespaces are not that important; you can still get most of the organizational advantages of namespaces by just grouping in files and maybe prefixing.


“Where could I find” is indeed an important question, but it is not obvious to me that hierarchical namespaces are the best solution. Couldn’t something like a list of tags attached to every function/type/whatever be at least as effective?


I think the end result of something like this is call-by-meaning. Alan Kay and co. have published several papers on this subject, but in a nutshell the code looks for a function matching a signature and return type, then calls it.

Obviously there are issues with this approach if fully automatic (how many sum functions take two ints and return an int?), but there's no reason why we can't use a sig/return to filter on, i.e. the same function can be in 2 modules if filtering by sig or by return type.

With typing similar to Haskell we can be even more strict - the sig/return is a contract, and we can have multiple types like userInputInt, randomInt which can differentiate similar inputs.

With good typing and behavioural provability we may be able to move towards a call-by-meaning system.
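A toy sketch of the lookup side (TypeScript; representing signatures as strings, which is exactly where the "how many sum functions?" ambiguity lives):

  type Entry = { args: string[]; ret: string; fn: (...xs: any[]) => any };

  const db: Entry[] = [
    { args: ["number", "number"], ret: "number", fn: (a, b) => a + b },
  ];

  // Look up a function by its contract rather than by its name.
  function callByMeaning(args: string[], ret: string, ...xs: any[]): any {
    const match = db.find(e => e.ret === ret && e.args.join() === args.join());
    if (!match) throw new Error("no function satisfies that contract");
    return match.fn(...xs);
  }

  callByMeaning(["number", "number"], "number", 2, 3); // 5 -- but which "sum" did we get?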


An interesting thing about XML namespaces is that they are basically a way to autonomously create globally unique names that are still human-readable and writable. (As opposed to GUIDs, for example.) Like `foo:bar`, where `foo` is a local alias for a long and unique string, often a URN. (Of course, this wasn't an original idea of XML.)
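Roughly like this sketch (TypeScript; the URN and the join convention are illustrative):

  // A short local alias expands to a long, globally unique name.
  const ns: Record<string, string> = { foo: "urn:example:com.example.foo" };

  function expand(qname: string): string {
    const [prefix, local] = qname.split(":");
    return ns[prefix] + "#" + local;
  }

  expand("foo:bar"); // "urn:example:com.example.foo#bar"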


Table-Oriented-Programming (TOP) philosophy is that while it's good to apply categories to snippets of code, one shouldn't be forced to have one and only one category per snippet. The hierarchical and file-based view of grouping or categorizing code is seen as obsolete under TOP's way of looking at things.

We use RDBMS to manage large quantities of physical objects, so why not also use them to manage large quantities of code snippets, such as event handlers. Early databases were hierarchical also, but over time hierarchies ran out of steam. TOP believes eventually the same lesson will be learned about code management.

Take MVC for example. Some code maintenance tasks are by entity and not just M or V or C. For instance, adding a new column to a given table. So if one could run a little query to list links to all modules/files/snippets for a given entity, then entity-based tasks are easier.
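As a sketch, that query might look like this (TypeScript standing in for the RDBMS; the schema is hypothetical):

  // Snippets are rows tagged by entity and layer, not files in exactly one directory.
  type Snippet = {
    id: number;
    entity: string;
    layer: "model" | "view" | "controller";
    source: string;
  };

  declare const snippets: Snippet[];

  // "Everything touching the customer entity", across all of M, V and C:
  const customerCode = snippets.filter(s => s.entity === "customer");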


Hello Julia - and its unreasonably effective multiple dispatch system.

https://www.youtube.com/watch?v=kc9HwsxE1OY

f(x1,x2){} < x1.f(x2){x1} < f(x1,x2){x1,x2} (the braces marking which arguments participate in dispatch: none, the receiver only, or all of them)

But modules are useful for organising code and choosing what to bother the compiler with.


Nope. Not convinced this is a good idea, or even a good thought-provoker.

Organization of code at a larger granularity than individual functions is something that's really useful and important in most non-trivial applications.

Topicality and meaningfulness of organization is something that should be valued. Removing or skipping it because Joe can't be bothered separating his String, Text File and Process funcs would be foolishness, resulting only in a new form of spaghetti.


“Global key-value store of functions with their metadata” sounds a lot like Hoogle to me. As for getting rid of modules and replacing them with namespaces... I’m not sure I see what is gained. You still have function names plus an additional identifier.


We already have this in JavaScript: isArray, is-even, left-pad, etc.

And the (searchable) database is called npm.


So the "database" is the new module? So an entity can share its whole database or nothing? Everything smaller than the whole database seems a module to me.


How about a global K/V store for data structures?


Anyone have a copy of his elib1_misc.erl?


Isn't a major point that package registries and/or modules don't need a review board, thus eliminating gatekeeping?


The gatekeeping can be a feature; centralising security review is definitely worthwhile.


I read the review board as an optional extension to the idea.

The database / infrastructure could be permissionless and you could still define curated subsets of functions in addition to that.


I feel the same way about scope


explain?


Why do we need anything at all?


Please stop the insanity called software “engineering”


That's a great idea! Let's try and deploy this 'left-pad' function that everyone can use...

But seriously, I know his description is a bit different, but I think NPM has shown that such a model is really difficult to maintain.


Functions in a global persistent database would solve the "dependency suddenly disappearing" problem.


“Global persistence” is a lie. The second copyright law or malware become involved, practice separates from theory. Look at the Hackage rollback, for instance. (https://hackage.haskell.org/package/Facebook-Password-Hacker... <- downloads what used to be a Hackage package to your computer. Now, it's a text file with the word "Gone")


BitTorrent, blockchains


I don't think it's just the persistence that's the problem. NPM is now persistent (I think). But it's the fact that you can upgrade the functions. So, maybe if the references were immutable, it'd make more sense. But then it's the fact that you've got a bunch of one-off functions which are a security nightmare to audit, not to mention accidentally slurping in several MB of functions due to the dependency graph.




