I just built a small web application framework that I can build some basic personal apps on top of. Just like the author's, it's "shitty". It lacks features, doesn't handle edge cases, isn't particularly secure or performant.
None of that matters. It's locked up behind HTTP authentication and it's for my use only. It has exactly the features I desire, and none that I don't. I understand it top to bottom. It was fun to write and extend. It serves my needs perfectly. I didn't have to fight with, or conform to someone else's notion of how software should be built.
Go into any experienced woodworker's shop and you can divide it into maybe 20% of the tools that see the vast majority of the action: table saws, band saws, milling machines, proper hammers. But then there is a halo of 60% or more that is little helpful things: doodads, mallets, jigs, often even shelving and work surfaces. She uses a bunch of these on any build she undertakes, and she made them herself. "Why would I stumble around with someone else's mallet when it's about the same cost and about half a day's work with a router to make my own, one with exactly the weight and shape that I like? Pays for itself."
We are so far from this ideal. All of our software breaks because we imported left-pad and somebody unpublished left-pad. It wasn't even a world-changing, perfectly V8-on-x64-tuned performant version of left-pad, but we gotta "not reinvent the wheel."
I don't think we do it for no reason; there is something about software itself right now that is amateurish. We're still writing subroutines working on data structures; our brains are FORTRAN even if we call it Java, and whatever that is makes it really hard to decouple the tools that we are using to build software from the final outcome that gets built. The woodworker has a sense of the "shape" of the artifact; we routinely build the equivalent of gorgeous wooden tables that start spewing garbage onto their shiny surfaces after three weeks unattended. "Oh, I didn't realize that the built-in trash chutes could get clogged." Wait, what? "Yeah, we allocate more wood at the bottom of the table to make up for the wood that we remove at the top of the table every night so that the surface is always clean." Your user doesn't clean the table? "Hahaha, you're silly, users could not possibly have the technical knowledge to keep anything clean, they are not trained in sponges, and I especially could not trust them with something as beautiful and delicate as this table." The problem has a beautiful sort of fractal quality to it: every zoom-in is more WTF.
The problem is that many things a developer considers tools (as in: "the best tool for the job") are actually components.
The equivalent of a woodworker's tool for a software developer would be an editor. Only you or your team might care about which particular brand of sawing machine or editor you use. And they are not an integral part of the end result.
A component for a woodworker would be a hinge or drawer rails. You would probably not appreciate it if your woodworker uses hinges that they made themselves. They are not worth the time to build. They're probably not very reliable. And if they fail they might not be easily replaced or repaired except by the person that made them. So you would only use custom components if the part you need is otherwise unavailable or does not meet specific needs.
Are you a woodworker and making a cabinet for yourself? Go ahead and make your own hinges, but for paid projects you probably should rely on standard components as much as possible; only doing custom woodwork or software engineering where it adds value.
Hinges - like everything else on CarpentryHub - are either in a state of permanent development or abandoned. So you design your project for Hinge 13.2, but then you come back a month later - just before you open your Etsy store - and discover that Hinge x.n has now become HingeOS, which is a breaking change.
It also has its own HingeAPI, which is poorly documented, somewhere.
You were used to Hinge's ScrewHole 5.3. But that's now buried inside HingeAPI, which calculates your project budget and lifetime carbon footprint for you, but no longer accepts external screwhole positions, because it works out its own values from the complex project specification you have to supply - defined in DirkLang, which is a hot new carpentry specification language only ten people know, and which replaces HammerLang, which was fifty-five years old and very popular but had a lot of frankly questionable design choices.
And the values HingeAPI returns aren't mutable, because mutability is bad practice.
I too look forward to the 2021 YC batch
To go back to the static website: I really like the Wiki format (current favorite is http://pmwiki.org ). I picked it for three reasons: 1) It's a very powerful wiki. 2) I can convert the site into static pages, which lets me build in the wiki format, then publish out. 3) There is a cookbook (a tool another user wrote) to make PDFs: you can select the pages in page order and produce a PDF. To that base I've added my custom items.
PmWiki in its own right is simple (a router), but allows for cookbooks (custom extensions) that others can add. One of the interesting things is that the current lead developer doesn't really add features to the core; they instead create cookbooks to add on. And the core is pretty soft: if you don't like the markup, you can change it.
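The cookbook idea (a small core that only routes, with all features registered from the outside) can be sketched in a few lines. This is my own Python illustration of the pattern, not PmWiki's actual recipe API; the names `cookbook` and `route` are invented for the sketch:

```python
# A "soft core": it only dispatches actions to registered cookbooks.
cookbooks = {}

def cookbook(name):
    """Decorator: register a handler under an action name,
    the way PmWiki recipes hook into the core."""
    def register(fn):
        cookbooks[name] = fn
        return fn
    return register

@cookbook("pdf")
def export_pdf(pages):
    # Stand-in for the PDF cookbook: pretend to render selected pages.
    return f"PDF with {len(pages)} pages"

def route(action, *args):
    # The core never grows; new features live entirely in cookbooks.
    handler = cookbooks.get(action)
    return handler(*args) if handler else "404"
```

The point of the design is that the lead developer's own features go through the same `cookbook` door as everyone else's, so the core stays small and replaceable.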
A long way to go to say that I can see writing software like the OP article. But if you do, look at a core tool and see if you can build around it. @Molf uses hinges as an example. You can do a lot around a stock hinge (paint, bevel the edges, add "hand crafted tool marks"), a process that is easier than starting from ground zero.
To absolutely torture the analogy: say you have a general hinge that you can attach to any door. And that's fantastic, but it's a lot more complex than it needs to be (it being generalised necessitates this) and it isn't actually completely optimal in most cases (it just works ok, ish). You can use Super Hinge on all your paid projects, and it will work, it's just that it's often better to make a hinge that exactly fits the required specifications.
There are, for sure, economic factors behind choices in how software is built.
Back to the analogy and actual tools: what employer is going to pay you to write your own editor to use to write the software the employer wants? It's not just that they are mean and choose not to pay you for this; it's not a cost they could sustain (for all but the biggest employers, I guess. Google, sure, can have an editor team. Still, they can't let every individual programmer build their own editor).
You can also think of lower-level components, like nails and glue, which are even more general and standardized than something like a hinge. Maybe these are like standard-library functions or builtins for a language?
You can reinvent the wheel as many times as you want. If you do that at work, however, your colleagues, and your replacement after you leave the job, likely won't think too kindly of you.
> The IKEA effect is a cognitive bias in which consumers place a disproportionately high value on products they partially created. The name refers to Swedish manufacturer and furniture retailer IKEA, which sells many items of furniture that require assembly. A 2011 study found that subjects were willing to pay 63% more for furniture they had assembled themselves, than for equivalent pre-assembled items.
I think this effect also applies to developers. They tend to value self-written applications subconsciously higher than "pre-assembled" applications.
I don't think this is quite the same as IKEA, though. If I buy an IKEA table, I can't really adapt it unless I'm an advanced woodworker with proper tools. If I'm just an ordinary guy with a screwdriver, I can only follow the script to build that table.
I think the IKEA analogue is someone building their vim environment from others' plugins by following guides. They don't know C, so they can't hack vim; they don't know vim's configuration language, so they can't hack the plugins.
Therefore, I think the IKEA effect holds true in software development, because you rely on "pre-assembled" items, but I agree that you have a much larger degree of freedom in how you build your application in e.g. C compared to an IKEA build. Maybe this higher degree of freedom even strengthens the effect.
- Frameworks and tools are almost always ill-fitting. You have many features you don't need, you lack many features you need. You have dependencies you don't want, code that you don't know, models that might not match your own (and might not be explained well enough, or might not even be consistent with themselves anyway).
- Frameworks and tools are often leaky. They don't solve problems perfectly anyway. They substitute the problem of having to develop a custom solution with the problem of figuring out how to use the tool effectively, plus any problems derived from the shortcomings of the tool itself, which are often harder to solve in a tool you didn't develop yourself. The better you want to do something, the more disappointed you become with leaky solutions. And other people's leaks are far more uncomfortable than your own.
- Some people just want to have control and know what they are doing, even if they need to put much more work into it. Not productive, but more consistent.
- Writing your own tools deepens your understanding of the landscape. This is particularly significant in a world where we have hundreds of frameworks and tools, but very few coherent explanations of the context they work on and the problems they really try to solve. We have reached the point where if someone wants to make a non-trivial website, we tell them to learn React instead of sending them to learn how the internet and websites work so they can choose a tool for their needs afterwards. And this piles up and giving coherent explanations of any landscape becomes harder because no landscape is coherent anymore.
- Related to the previous point, there's a trend that emerges where the more dependent we become on frameworks and tools, the more the problems in a given scenario are ignored or solved by adding more abstraction layers and fancier features to the tools, instead of working on the root underlying issues, and the problems become fuzzier. Working on the actual problems yourself helps you see how much cruft and dirt there is behind even well-established technologies... and some of us are masochists and want to know about all that sh*t, because if it remains hidden there's little chance to fix it.
How about compilers?
That's not what I'm trying to do (compare them), but people who like to write things from scratch often blame frameworks, e.g. saying they are leaky or don't solve problems perfectly anyway.
My question is why they think they can rely on interpreters or compilers, because they also have limitations, bugs, CVEs, leaky abstractions and other known problems. They also form an unknown variable, because most haven't read the source of the compiler or interpreter they use.
You shouldn't reinvent Django if other members of your team already know Django, as you're imposing a learning cost in doing so.
You rather obviously shouldn't reinvent the Linux kernel, as doing so is an enormous undertaking. It's probably a huge waste of resources, your solution will probably be far inferior, and you're likely to fail anyway.
You shouldn't reimplement TLS, as you will almost certainly do so incorrectly, and it would take a great deal of effort. It's a waste of resources and will worsen your cybersecurity risk profile.
Why should you not? You only learn by developing. Precautions follow from that, such as not running it in production. But how else do you learn if you do not build?
If you're interested in learning about secure sockets and the rest, you should reimplement TLS, because that will teach you the inner workings of TLS.
The vibe "you shouldn't" crushes the development side of things.
> "Shouldn't" is the wrong word to use. Why should you not?
I'll assume that here you're not referring to reimplementing purely as a learning exercise.
I already said why: you will almost certainly do so incorrectly, and this will worsen your cybersecurity risk profile. For serious work, you should use a mature battle-tested implementation like everyone else. The slightest error in a cryptography codebase can lead to severe security vulnerabilities. This is well understood and much has been written on the topic of don't roll your own crypto, e.g. https://security.stackexchange.com/a/18198/
> The vibe "you shouldn't" crushes the development side of things.
No, it's solid cybersecurity advice. It's irresponsible to make use of either an amateur cryptographic scheme, or an amateur implementation of a cryptographic scheme, in a serious codebase. Reimplementing TLS purely as a learning exercise is of course still to be encouraged.
More generally, there is no problem of a "you shouldn't" culture in the software development world. Nuclear engineers and bridge engineers are taught "you shouldn't..."; software developers are taught "just go for it".
If I create a mature, well-thought-out piece of encryption, there is no irresponsibility in wanting to use it in production. Yes, I should have it validated, and if that validation comes back showing something thoughtless, buggy, and disastrous, then using it would be irresponsible of me. One could argue it's irresponsible to run it without validation too, which I would agree with.
I still don't believe that telling those who create something "you should not be allowed in production, ever; only run battle-proven systems" is the right approach. Create something, validate it, and then run it.
Besides, battle-proven systems had to be run in production in the first place, and those battle-tested systems still have vulnerabilities themselves, e.g. Heartbleed. Mbed TLS/PolarSSL, someone's reimplementation of TLS, was immune to that one. Validation is the key that's required.
"You should create your own implementation of TLS, but you should not run it in production, as that would be irresponsible until you have validation." That, not just a blanket "you shouldn't", end of story, is what should be passed on to developers. If you feel your work is good enough, then pay the price for validation.
Besides what makes an expert if you don't make it for yourself?
> What makes an expert if you don't make it for yourself?
You study the field, like any other field. This doesn't collide with what I've said.
In this context there needs to be a fairly bright line between learning, and producing real-world cryptographic systems. It might be instructive to have engineering students build an airbag system, but you don't then put it in your car.
> If I create a mature, well thought piece of encryption
Unless you're a professional cryptographer, someone like Bruce Schneier, Tanja Lange, or Filippo Valsorda, it's best to assume that you haven't created a well thought out cryptographic solution. See Schneier's Law.  If you have a PhD related to cryptography, and/or a history of employment as a cryptography specialist at a major technology company, then you may have a solid enough grasp of the field to be taken seriously, but short of that, you should leave cryptography to the experts.
It's really hard to get the theory just right, and it's also really hard to get the implementation just right. Fortunately there are existing out-of-the-box solutions that do all the things we want: secure channels, secure file encryption, authentication, etc.
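To make the "out-of-the-box" point concrete: message authentication, for example, already ships in most standard libraries. A minimal Python sketch using the stdlib's `hmac` module, including the constant-time comparison that hand-rolled implementations routinely forget:

```python
import hashlib
import hmac
import secrets

key = secrets.token_bytes(32)          # random 256-bit key
msg = b"transfer 100 coins to alice"

# Compute a MAC with the vetted stdlib implementation.
tag = hmac.new(key, msg, hashlib.sha256).digest()

def verify(key: bytes, msg: bytes, tag: bytes) -> bool:
    expected = hmac.new(key, msg, hashlib.sha256).digest()
    # compare_digest runs in constant time; a naive `tag == expected`
    # comparison can leak a timing side channel to an attacker.
    return hmac.compare_digest(expected, tag)
```

Even in this tiny example there's a subtle trap (the comparison), which is exactly why the advice is to lean on the audited implementations.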
> Yes, I should have it validated
We have a validation process: standards bodies. For instance, in the TLS 1.3 standard, they introduced the requirement for supporting the x25519 algorithm. That algorithm was developed by a team of professional cryptographers, not by a well-meaning dabbler, and it has been subject to careful scrutiny by the cryptographic community.
After standardisation, we see the algorithm implemented in the few trustworthy TLS libraries (e.g. OpenSSL and Google's Tink), which we then adopt for use in the real world.
Serious organisations do not play around with this stuff, they only use trusted standard algorithms. Microsoft/Apple/Amazon/Google have crypto teams who are qualified to write their own implementations of the standard algorithms. The rest of us then use those implementations. Microsoft's Active Directory is backed by the standard Kerberos crypto protocol, for instance.
> I still don't believe that telling those who create something "you should not be allowed in production, ever; only run battle-proven systems" is the right approach. Create something, validate it, and then run it.
We agree in a sense, it's just that the bar for considering it battle proven is set very, very high, and for good reason. Developing crypto isn't like styling a webpage with CSS. It's technically challenging to do correctly, it's difficult to know if you've done it correctly, and the consequences of getting it wrong are severe.
> Besides, battle-proven systems had to be run in production in the first place, and those battle-tested systems still have vulnerabilities themselves, e.g. Heartbleed. Mbed TLS/PolarSSL, someone's reimplementation of TLS, was immune to that one. Validation is the key that's required.
I don't know what 'validation' is meant to mean here. If we had a way to easily detect such issues, we would use it. Again, it's extremely difficult to get this stuff just right. The smallest defect can have terrible consequences.
This applies even when we're doing everything correctly. As you say, we see issues even in major implementations. That doesn't mean that using amateur crypto code is a good idea. It isn't. Every cryptographer agrees that it's a terrible idea to do that. Sometimes aeronautical engineers build dangerous aircraft, but that doesn't mean we let amateurs have a go.
> If you feel your work is good enough, then pay the price for validation.
That isn't how it works. Cryptography is an academic discipline, it makes advances through slow-moving academic publishing and standards bodies, not by paying for a code-review. If you really want to make a contribution to the field, you'll need to make a career of it.
Again though, in a sense there's little need. Much of the best crypto software in the world is Free and Open Source.
If you want to implement TLS as an exercise, or make a neat cryptographic 'toy' program of some sort, then great, but don't gamble anything of value on it (user data, say).
My mind is primarily focused on the practical viewpoint of an experienced systems operator/admin rather than a cryptologist.
Let's build our own, make it work, push it live, and see what crumbles. I am aware that TLS and the like are all very specialist, to the level that you wouldn't give a Ferrari owner the keys to a jet fighter. What sparked the flame for me is that everything is "Free and Open Source" until it comes to crypto, which then becomes a touchy area of "leave it to the experts", though with full reason it makes sense in a way. Maybe because I know little on the matter; that is what makes it a large problem.
Learn something new everyday. Thank you for your time on the matter.
A good attitude for a weekend project, but not a good attitude for cybersecurity.
> "Free and Open Source" until it comes to crypto, which then becomes a touchy area of "leave it to the experts", though with full reason it makes sense in a way.
It's still very beneficial for it to be Free and Open Source. It can be studied by hobbyists and students, it can be audited and analyzed by anyone at all, or even by automated systems.
Glad it's been helpful.
I'd much rather inherit a project that contains 'sprintf("%010d", i)', or equivalent, than one with yet another dependency. Especially when repeated a thousand times over the course of an entire project.
Every dependency is one whose possible return types, failure modes, and upgrade paths I must understand. Every function is one that I must read. The needle must be quite clearly on the benefits side for it to be worth it.
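To make the sprintf example concrete: zero-padding a number (the left-pad case from earlier in the thread) is a one-liner in most languages' standard formatting, with no dependency needed. A Python sketch:

```python
i = 42

# Equivalent of C's sprintf("%010d", i): pad to 10 digits with zeros.
padded = f"{i:010d}"
print(padded)  # → "0000000042"

# The same result, three other stdlib ways:
assert "%010d" % i == padded
assert "{:010d}".format(i) == padded
assert str(i).zfill(10) == padded
```

Four interchangeable spellings, all in the standard library: exactly the "nails and glue" class of component that rarely justifies an external package.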
I've encountered ~100% custom codebases that are unmaintainable, and I've encountered ~100% custom codebases that are extremely maintainable, even for a new person joining a team. And we're not talking about leftPad-esque libs; entire frameworks made in-house.
It's rare, and there are reasons to avoid it (locking yourself into your own tech, not benefiting from community innovation, not being "on trend" with common dev conventions). But ease of maintenance is not one of those reasons. If it's hard to maintain, that's on you.
I do fully agree that programmers should strive to understand every layer, and building your own tools is a great way to get experience that helps with that. Yes, left-pad is an unnecessarily granular abstraction. However, in each domain we inevitably depend on other software, and the breadth of that graph is part of why you can accomplish so much with so few people writing software.
I think that over time all software starts accreting a bunch of features that most people don't want or need, and strange decisions from the past come back to haunt it.
Sometimes an existing wheel design is perfect for our use case, or at least good enough that creating a new one isn't worth the effort. Sometimes a tweaked wheel is what we need. Sometimes a radical departure from established wheel design is required to make a new machine possible.
Build vs buy is an eternal question, and it's reasonable to examine and reexamine it for any project. But "reinvent the wheel" is just hyperbole.
Your complaint about the phrase seems to be that it doesn't well describe the process of adjusting another design to work better for a specific use-case. But that's not what the phrase is trying to describe, it's literally describing the problem where people reinvent that which absolutely does not need to be reinvented.
You greatly overestimate people's knowledge of wheels.
In the case of rewriting an application feature for feature in another language, that may or may not be a good idea. Maybe we're switching from an interpreted language to a compiled language because we need better performance. Maybe we're replacing a huge pile of impenetrable bash with a well-organized Python app. Maybe we're taking advantage of a library that's only available in another language. Maybe there's a good reason for the rewrite but it's so much work that it's still not worth it. Maybe it's just engineers wanting to use the latest toys because it'll look good on their resumes.
It's totally legit to question the need for a given project. But the phrase "reinvent the wheel" is just scornful insult. It adds nothing to the discussion, and in fact makes it harder to discuss trade-offs.
Because, let's face it, most software out there (especially most software built by individual hobbyists) is ... let's say not great.
I realize that this is a cultural thing. In the US for example they tend to favor soft language more. They would call it "beta", "early access" or "personal project".
As a European I tend to see it more as... well, "shitty".
And that is fine. It is in fact refreshing not to have to filter out the marketing buzzwords like "innovative" or "minimalistic". The title immediately put me in a good disposition towards the author. It feels honest.
When I peek into your average open source project (made, mostly, by hobbyists) I see good to great quality. Whenever any private code gets leaked, or on your average company, or just inspecting websites, you see the horrors. Just my experience here :)
I'm a Spaniard; I do believe our websites might be at the bottom of the quality scale in Europe, though, so my experience might differ from other "European"s :)
In general I agree with you, open source tends to have better quality than proprietary software.
I will point out however that Open Source sees a lot of contribution from non-hobbyists too. That's my case, in fact: my salary for the last 10 years came from working on open source projects.
But it works, and it has rare features I need, features that AFAIK most of the famous ones do not have and could not easily add (well, at least when I checked).
That said, while it started 100% hand-written, I did add stuff over the years. For example, I had my own templating system but switched to 'handlebars' at some point. I don't actually remember if there was a good reason or not. A long time ago I wanted to add RSS/Atom support; sure, that should be easy to do on my own (I've done it in the past), but the library I looked into made it easy without too many dependencies. Most recently, I added jsdom to process a few more automated transformations.
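For reference, the kind of hand-rolled templating being swapped for 'handlebars' can be tiny; Python's stdlib `string.Template` does the same variable substitution in a few lines. This is a generic sketch of the pattern, not the commenter's actual code:

```python
from string import Template

# A minimal "templating system": substitute $variables into a page shell.
page = Template("<h1>$title</h1>\n<p>$body</p>")

html = page.substitute(title="My shitty blog", body="It works for me.")
print(html)  # → "<h1>My shitty blog</h1>\n<p>It works for me.</p>"
```

Which is roughly the trade-off in the comment: the home-grown version stays this small and fully understood, while a library buys features (partials, helpers, escaping) at the cost of a dependency.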
The word "shitty" really does describe the code, and I'm embarrassed to show it, really, but it works and I have better things to do than refactor it to make it clean.
I would just use "personal".