You can reinvent the wheel as many times as you want. If you do that at work, however, your colleagues (and your replacement after you've left) likely won't think too kindly of you.
> The IKEA effect is a cognitive bias in which consumers place a disproportionately high value on products they partially created. The name refers to Swedish manufacturer and furniture retailer IKEA, which sells many items of furniture that require assembly. A 2011 study found that subjects were willing to pay 63% more for furniture they had assembled themselves, than for equivalent pre-assembled items.
I think this effect also applies to developers. They tend to subconsciously value self-written applications more highly than "pre-assembled" applications.
I don't think this is quite the same as IKEA, though. If I buy an IKEA table, I can't really adapt it unless I'm an advanced woodworker with proper tools. If I'm just an ordinary guy with a screwdriver, I can only follow the script to build that table.
I think the IKEA analogue is someone building his vim environment from other people's plugins by following guides. He doesn't know C, so he can't hack vim itself; he doesn't know vim's configuration language, so he can't hack the plugins.
Therefore, I think the IKEA effect holds true in software development, because you rely on "pre-assembled" items. But I agree that you have a much larger degree of freedom in how you build your application in e.g. C compared to an IKEA build. Maybe this higher degree of freedom even strengthens the effect.
- Frameworks and tools are almost always ill-fitting. They have many features you don't need and lack many features you do need. You get dependencies you don't want, code you don't know, and models that might not match your own (and might not be explained well enough, or might not even be internally consistent).
- Frameworks and tools are often leaky. They don't solve problems perfectly anyway. They replace the problem of developing a custom solution with the problem of figuring out how to use the tool effectively, plus any problems derived from the shortcomings of the tool itself, which are often harder to solve in a tool you didn't develop yourself. The better you want to do something, the more disappointed you become with leaky solutions. And other people's leaks are far more uncomfortable than your own.
- Some people just want to have control and know what they are doing, even if they need to put much more work into it. Less productive, but more consistent.
- Writing your own tools deepens your understanding of the landscape. This is particularly significant in a world where we have hundreds of frameworks and tools, but very few coherent explanations of the context they operate in and the problems they actually try to solve. We have reached the point where, if someone wants to make a non-trivial website, we tell them to learn React instead of teaching them how the internet and websites work so they can choose a tool for their needs afterwards. And this piles up: giving a coherent explanation of any landscape becomes harder, because no landscape is coherent anymore.
- Related to the previous point, there's a certain trend where the more dependent we become on frameworks and tools, the more the problems in a given scenario are ignored, or solved by adding more abstraction layers and fancier features to the tools instead of working on the root underlying issues, and the problems become fuzzier. Working on the actual problems yourself helps you see how much cruft and dirt there is behind even well-established technologies... and some of us are masochists and want to know about all that sh*t, because if it remains hidden there's little chance to fix it.
How about compilers?
That's not the comparison I'm trying to make, but people who like to write things from scratch often blame frameworks: they think frameworks are leaky, or don't solve problems perfectly anyway.
My question is why they think they can rely on interpreters or compilers, which also have limitations, bugs, CVEs, leaky abstractions, and other known problems. They also form an unknown variable, because most people haven't read the source of the compiler or interpreter they use.
You shouldn't reinvent Django if other members of your team already know Django, as you're imposing a learning cost in doing so.
You rather obviously shouldn't reinvent the Linux kernel, as doing so is an enormous undertaking. It's probably a huge waste of resources, your solution will probably be far inferior, and you're likely to fail anyway.
You shouldn't reimplement TLS, as you will almost certainly do so incorrectly, and it would take a great deal of effort. It's a waste of resources and will worsen your cybersecurity risk profile.
Why should you not? You only learn by developing. Precautions follow with that, of course, like not running it in production. But how else do you learn?
If you're interested in learning about secure sockets and the rest, you should reimplement TLS, because that will teach you the inner workings of TLS.
The vibe "you shouldn't" crushes the development side of things.
> "Shouldn't" is the wrong word to use. Why should you not?
I'll assume that here you're not referring to reimplementing purely as a learning exercise.
I already said why: you will almost certainly do so incorrectly, and this will worsen your cybersecurity risk profile. For serious work, you should use a mature battle-tested implementation like everyone else. The slightest error in a cryptography codebase can lead to severe security vulnerabilities. This is well understood and much has been written on the topic of don't roll your own crypto, e.g. https://security.stackexchange.com/a/18198/
> The vibe "you shouldn't" crushes the development side of things.
No, it's solid cybersecurity advice. It's irresponsible to make use of either an amateur cryptographic scheme, or an amateur implementation of a cryptographic scheme, in a serious codebase. Reimplementing TLS purely as a learning exercise is of course still to be encouraged.
More generally, there is no "you shouldn't" culture posing a problem in the software development world. Nuclear engineers and bridge engineers are taught "you shouldn't..." but software developers are taught "just go for it".
If I create a mature, well-thought-out piece of encryption, there is no irresponsibility in wanting to use it in production. Yes, I should have it validated, and if that validation comes back showing something thoughtless, buggy, and disastrous, then running it would be irresponsible of me. One could argue it's irresponsible to run it without validation too, which I would agree with.
I still don't believe that telling those who create something "you should never be allowed in production, only run the battle-proven" is the right approach. Create something, validate it, and then run it.
Besides, battle-proven systems had to be run in production for the first time at some point, and those battle-tested systems still have vulnerabilities themselves, e.g. Heartbleed. Mbed TLS/PolarSSL, someone's reimplementation of TLS, was immune to that one. Validation is the key that's required.
"You should create your own implementation of TLS but you should should not run it in production as that would be irresponsible until you have validation". Not just to blanket "you should t" end of. Above is what should be passed to developers. If you feel your work is good enough, then pay the price for validation.
Besides, what makes an expert if you don't make it for yourself?
> What makes an expert if you don't make it for yourself?
You study the field, like any other field. This doesn't collide with what I've said.
In this context there needs to be a fairly bright line between learning, and producing real-world cryptographic systems. It might be instructive to have engineering students build an airbag system, but you don't then put it in your car.
> If I create a mature, well-thought-out piece of encryption
Unless you're a professional cryptographer, someone like Bruce Schneier, Tanja Lange, or Filippo Valsorda, it's best to assume that you haven't created a well thought out cryptographic solution. See Schneier's Law.  If you have a PhD related to cryptography, and/or a history of employment as a cryptography specialist at a major technology company, then you may have a solid enough grasp of the field to be taken seriously, but short of that, you should leave cryptography to the experts.
It's really hard to get the theory just right, and it's also really hard to get the implementation just right. Fortunately there are existing out-of-the-box solutions that do all the things we want: secure channels, secure file encryption, authentication, etc.
> Yes, I should have it validated
We have a validation process: standards bodies. For instance, in the TLS 1.3 standard, they introduced the requirement for supporting the x25519 algorithm. That algorithm was developed by a team of professional cryptographers, not by a well-meaning dabbler, and it has been subject to careful scrutiny by the cryptographic community.
After standardisation, we see the algorithm implemented in the few trustworthy crypto libraries (e.g. OpenSSL and Google's Tink), which we then adopt for use in the real world.
Serious organisations do not play around with this stuff, they only use trusted standard algorithms. Microsoft/Apple/Amazon/Google have crypto teams who are qualified to write their own implementations of the standard algorithms. The rest of us then use those implementations. Microsoft's Active Directory is backed by the standard Kerberos crypto protocol, for instance.
> I still don't believe that telling those who create something "you should never be allowed in production, only run the battle-proven" is the right approach. Create something, validate it, and then run it.
We agree in a sense, it's just that the bar for considering it battle proven is set very, very high, and for good reason. Developing crypto isn't like styling a webpage with CSS. It's technically challenging to do correctly, it's difficult to know if you've done it correctly, and the consequences of getting it wrong are severe.
> Besides, battle-proven systems had to be run in production in the first place, and those battle-tested systems still have vulnerabilities themselves, e.g. Heartbleed. Mbed TLS/PolarSSL, someone's reimplementation of TLS, was immune to that one. Validation is the key that's required.
I don't know what 'validation' is meant to mean here. If we had a way to easily detect such issues, we would use it. Again, it's extremely difficult to get this stuff just right. The smallest defect can have terrible consequences.
This applies even when we're doing everything correctly. As you say, we see issues even in major implementations. That doesn't mean that using amateur crypto code is a good idea. It isn't. Every cryptographer agrees that it's a terrible idea to do that. Sometimes aeronautical engineers build dangerous aircraft, but that doesn't mean we let amateurs have a go.
> If you feel your work is good enough, then pay the price for validation.
That isn't how it works. Cryptography is an academic discipline, it makes advances through slow-moving academic publishing and standards bodies, not by paying for a code-review. If you really want to make a contribution to the field, you'll need to make a career of it.
Again though, in a sense there's little need. Much of the best crypto software in the world is Free and Open Source.
If you want to implement TLS as an exercise, or make a neat cryptographic 'toy' program of some sort, then great, but don't gamble anything of value on it (user data, say).
My mind is primarily focused on the practical viewpoint of an experienced systems operator/admin rather than a cryptologist.
Let's build our own, make it work, push it live, and see what crumbles. I am aware that TLS and the like are all very specialist, to the level that you wouldn't give a Ferrari owner the keys to a jet fighter. What sparked the flame for me is that everything is "Free and Open Source" until it comes to crypto, which then becomes a touchy area of "leave it to the experts", though for good reason that makes sense in a way. Maybe the fact that I know little on the matter is what makes it a large problem.
You learn something new every day. Thank you for your time on the matter.
A good attitude for a weekend project, but not a good attitude for cybersecurity.
> "Free and Opensource" until it comes to crypto which then becomes a touch area of "leave it for the experts" but with full reason makes sense in a way.
It's still very beneficial for it to be Free and Open Source. It can be studied by hobbyists and students, it can be audited and analyzed by anyone at all, or even by automated systems.
Glad it's been helpful.
I'd much rather inherit a project that contains 'sprintf("%010d", i)', or its equivalent, than one with yet another dependency, especially when this is repeated a thousand times over the course of an entire project.
For every dependency, I must understand all the possible return types, failure modes, and upgrade paths; every function is one I must read. The needle must be quite clearly on the benefits side for it to be worth it.
I've encountered ~100% custom codebases that are unmaintainable, and I've encountered ~100% custom codebases that are extremely maintainable, even for a new person joining a team. And we're not talking about leftPad-esque libs; entire frameworks made in-house.
It's rare, and there are reasons to avoid it (locking yourself into your own tech, not benefiting from community innovation, not being "on trend" with common dev conventions). But ease of maintenance is not one of those reasons. If it's hard to maintain, that's on you.