
"don't reinvent the wheel" is specifically for projects which other people will be forced to maintain.

You can reinvent the wheel as many times as you want. However, if you do that at work, your colleagues, and your replacement after you leave the job, likely won't think too kindly of you.




I think that developers, despite knowing about "don't reinvent the wheel", will do it nonetheless. I think psychological factors play a role here, like the "IKEA effect":

> The IKEA effect is a cognitive bias in which consumers place a disproportionately high value on products they partially created. The name refers to Swedish manufacturer and furniture retailer IKEA, which sells many items of furniture that require assembly. A 2011 study found that subjects were willing to pay 63% more for furniture they had assembled themselves, than for equivalent pre-assembled items.

https://en.wikipedia.org/wiki/IKEA_effect

I think this effect also applies to developers. They tend to subconsciously value self-written applications more highly than "pre-assembled" applications.


I value self-written applications consciously. Because I know everything about them, I can quickly change everything, adapt it as I need, fix any issues. That's not true for other applications and might be impossible for proprietary ones.

I don't think this is quite the same with IKEA, though. If I buy an IKEA table, I can't really adapt it unless I'm an advanced woodworker with proper tools. If I'm just an ordinary guy with a screwdriver, I can only follow the script to build that table.

I think the IKEA analogue is someone building his vim environment from others' plugins by following guides. He does not know C, so he can't hack vim; he does not know vim's configuration language, so he can't hack plugins.


As a developer you will always use tools and rely on applications others have written. If you write your program in C, then you use a C compiler like gcc, and most developers have not read the whole source code of gcc. The same is true for the C libraries you use, including the standard library: you rely on other developers' code you have never seen.

Therefore, I think the IKEA effect holds true in software development, because you rely on "pre-assembled" items. But I agree that you have a much larger degree of freedom in how you build your application in, e.g., C compared to an IKEA build. Maybe this higher degree of freedom even strengthens the effect.


While I don't disagree, there might be many other issues:

- Frameworks and tools are almost always ill-fitting. They have many features you don't need, and lack many features you do need. You get dependencies you don't want, code that you don't know, and models that might not match your own (and might not be explained well enough, or might not even be internally consistent anyway).

- Frameworks and tools are often leaky. They don't solve problems perfectly anyway. They replace the problem of developing a custom solution with the problem of figuring out how to use the tool effectively, plus any problems derived from the shortcomings of the tool itself, which are often harder to solve in a tool you didn't develop yourself. The better you want to do something, the more disappointed you become with leaky solutions. And other people's leaks are far more uncomfortable than your own.

- Some people just want to have control and know what they are doing, even if they need to put much more work into it. Not productive, but more consistent.

- Writing your own tools deepens your understanding of the landscape. This is particularly significant in a world where we have hundreds of frameworks and tools, but very few coherent explanations of the context they operate in and the problems they really try to solve. We have reached the point where, if someone wants to make a non-trivial website, we tell them to learn React instead of sending them to learn how the internet and websites work so they can choose a tool for their needs afterwards. And this piles up: giving a coherent explanation of any landscape becomes harder, because no landscape is coherent anymore.

- Related to the previous point, there's a trend where the more dependent we become on frameworks and tools, the more the problems in a given scenario are ignored, or solved by adding more abstraction layers and fancier features to the tools instead of working on the root underlying issues, and the problems become fuzzier. Working on the actual problems yourself helps you see how much cruft and dirt there is behind even well-established technologies... and some of us are masochists and want to know about all that sh*t, because if it remains hidden there's little chance to fix it.


> Frameworks and tools are often leaky. They don't solve problems perfectly anyway.

How about compilers?


You seem very focused on languages and compilers. Honestly, those are the basic tools we use as programmers, and they are designed to be Turing complete, comfortable to use, and very flexible. Yet we still argue a lot about them. They are a "tool" that we use constantly, so we are very concerned and particular about them. But yeah, you can't compare frameworks to compilers or programming languages. They are in very different categories.


> But yeah, you can't compare frameworks to compilers or programming languages. They are in very different categories.

That's not what I'm trying to do (compare them). But people who like to write things from scratch often blame frameworks, e.g. for being leaky or for not solving problems perfectly anyway.

My question is why they think they can rely on interpreters or compilers, because those also have limitations, bugs, CVEs, leaky abstractions, and other known problems. They also form an unknown variable, because most developers haven't read the source of the compiler or interpreter they use.


What is the likelihood of finding a compiler bug vs. a bug in a framework (or a missing method, or an undesired side effect, etc.)? I suspect that for most types of software work, the latter is many orders of magnitude more likely. Software developers can go their whole career without encountering a compiler bug, and can basically assume it is flawless even as they read blog posts and changelogs informing them that it is not. But on every single project they will likely encounter an issue with their framework that irks them. Maybe this is just my niche of software, though (web-based, enterprise, SaaS-type software). I could imagine those working on system software in lower-level languages fighting the compiler much more.


There have been thousands of issues/bugs over the years in gcc, glibc, and the kernel, and there are thousands to come, just like in every big piece of software. I don't think there is any evidence that compilers and interpreters are an exception and generally have fewer bugs than any other piece of software. It is also very much possible to use a non-compiler/interpreter piece of software for years and never encounter a bug.


True, but it is a balance. If you import packages for things that can be done in one small function, you force a lot of dependencies on your colleagues, which can be worse.


I think this depends on the specifics. It's not always just about maintainability.

You shouldn't reinvent Django if other members of your team already know Django, as you're imposing a learning cost in doing so.

You rather obviously shouldn't reinvent the Linux kernel, as doing so is an enormous undertaking. It's probably a huge waste of resources, your solution will probably be far inferior, and you're likely to fail anyway.

You shouldn't reimplement TLS, as you will almost certainly do so incorrectly, and it would take a great deal of effort. It's a waste of resources and will worsen your cybersecurity risk profile.


I disagree. Say that it's not recommended to reimplement TLS, fine; but "shouldn't" is the wrong word to use.

Why should you not? You only learn by developing. Precautions come with that, like not running it in production. But how else do you learn, if not by doing it?

If you're interested in learning about secure sockets and the rest, you should reimplement TLS, because that will teach you the inner workings of TLS.

The vibe "you shouldn't" crushes the development side of things.


Sure, reimplementing TLS is a great way to learn the protocol. We were discussing reimplementing things with an eye to then using that implementation for serious work, but yes, reimplementing as a learning exercise is an exception, provided you never use your toy implementation in production.

> "Shouldn't" is the wrong word to use. Why should you not?

I'll assume that here you're not referring to reimplementing purely as a learning exercise.

I already said why: you will almost certainly do so incorrectly, and this will worsen your cybersecurity risk profile. For serious work, you should use a mature battle-tested implementation like everyone else. The slightest error in a cryptography codebase can lead to severe security vulnerabilities. This is well understood and much has been written on the topic of don't roll your own crypto, e.g. https://security.stackexchange.com/a/18198/

> The vibe "you shouldn't" crushes the development side of things.

No, it's solid cybersecurity advice. It's irresponsible to make use of either an amateur cryptographic scheme, or an amateur implementation of a cryptographic scheme, in a serious codebase. Reimplementing TLS purely as a learning exercise is of course still to be encouraged.

More generally, there is no problem in the software development world of a culture of "you shouldn't". Nuclear engineers and bridge engineers are taught "you shouldn't..." but software developers are taught "just go for it".


Agreeable remarks. I was coming at it from the educational viewpoint, but it gets thrown back to the educational side just the same. I ask how else do you move to production if you don't take your educational work to the next level?

If I create a mature, well-thought-out piece of encryption, there is no irresponsibility in wanting to use it in production. Yes, I should have it validated, and if that validation comes back showing something thoughtless, buggy, and disastrous, then using it would be irresponsible of me. One could argue it's irresponsible to run it without validation too, which I would agree with.

I still believe that telling those who create something "it should never be allowed in production; only run battle-proven code" is the wrong approach. Create something, validate it, and then run it.

Besides, battle-proven systems had to be run in production in the first place, and those battle-tested systems still have vulnerabilities themselves, e.g. Heartbleed. Mbed TLS/PolarSSL, a reimplementation of TLS that someone wrote, was immune to that one. Validation is the key that's required.

"You should create your own implementation of TLS but you should should not run it in production as that would be irresponsible until you have validation". Not just to blanket "you should t" end of. Above is what should be passed to developers. If you feel your work is good enough, then pay the price for validation.

Besides, what makes an expert if you don't make it for yourself?


> I ask how else do you move to production if you don't take your educational work to the next level?

> What makes an expert if you don't make it for yourself?

You study the field, like any other field. This doesn't collide with what I've said.

In this context there needs to be a fairly bright line between learning, and producing real-world cryptographic systems. It might be instructive to have engineering students build an airbag system, but you don't then put it in your car.

> If I create a mature, well-thought-out piece of encryption

Unless you're a professional cryptographer, someone like Bruce Schneier, Tanja Lange, or Filippo Valsorda, it's best to assume that you haven't created a well thought out cryptographic solution. See Schneier's Law. [0] If you have a PhD related to cryptography, and/or a history of employment as a cryptography specialist at a major technology company, then you may have a solid enough grasp of the field to be taken seriously, but short of that, you should leave cryptography to the experts.

It's really hard to get the theory just right, and it's also really hard to get the implementation just right. Fortunately there are existing out-of-the-box solutions that do all the things we want: secure channels, secure file encryption, authentication, etc.

> Yes, I should have it validated

We have a validation process: standards bodies. For instance, in the TLS 1.3 standard, they introduced the requirement for supporting the x25519 algorithm. That algorithm was developed by a team of professional cryptographers, not by a well-meaning dabbler, and it has been subject to careful scrutiny by the cryptographic community.

After standardisation, we see the algorithm implemented in the few trustworthy TLS libraries (e.g. OpenSSL and Google's Tink), which we then adopt for use in the real world.

Serious organisations do not play around with this stuff, they only use trusted standard algorithms. Microsoft/Apple/Amazon/Google have crypto teams who are qualified to write their own implementations of the standard algorithms. The rest of us then use those implementations. Microsoft's Active Directory is backed by the standard Kerberos crypto protocol, for instance.

> I still believe that telling those who create something "it should never be allowed in production; only run battle-proven code" is the wrong approach. Create something, validate it, and then run it.

We agree in a sense, it's just that the bar for considering it battle proven is set very, very high, and for good reason. Developing crypto isn't like styling a webpage with CSS. It's technically challenging to do correctly, it's difficult to know if you've done it correctly, and the consequences of getting it wrong are severe.

> Besides, battle-proven systems had to be run in production in the first place, and those battle-tested systems still have vulnerabilities themselves, e.g. Heartbleed. Mbed TLS/PolarSSL, a reimplementation of TLS that someone wrote, was immune to that one. Validation is the key that's required.

I don't know what 'validation' is meant to mean here. If we had a way to easily detect such issues, we would use it. Again, it's extremely difficult to get this stuff just right. The smallest defect can have terrible consequences.

This applies even when we're doing everything correctly. As you say, we see issues even in major implementations. That doesn't mean that using amateur crypto code is a good idea. It isn't. Every cryptographer agrees that it's a terrible idea to do that. Sometimes aeronautical engineers build dangerous aircraft, but that doesn't mean we let amateurs have a go.

> If you feel your work is good enough, then pay the price for validation.

That isn't how it works. Cryptography is an academic discipline, it makes advances through slow-moving academic publishing and standards bodies, not by paying for a code-review. If you really want to make a contribution to the field, you'll need to make a career of it.

Again though, in a sense there's little need. Much of the best crypto software in the world is Free and Open Source.

If you want to implement TLS as an exercise, or make a neat cryptographic 'toy' program of some sort, then great, but don't gamble anything of value on it (user data, say).

[0] https://www.schneier.com/blog/archives/2011/04/schneiers_law...


It is most interesting, especially with regards to other teams.

My mind is primarily focused on the practical viewpoint of an experienced systems operator/admin rather than a cryptologist.

Let's build our own, make it work, push it live, and see what crumbles. I am aware that TLS and the like are all very specialist, to the point that you wouldn't give a Ferrari owner the keys to a jet fighter. What did spark the flame for me is that it's all "Free and Open Source" until it comes to crypto, which then becomes a touchy area of "leave it to the experts", though with full reason it makes sense in a way. Maybe because I know little on the matter, that is what makes it a large problem.

Learn something new everyday. Thank you for your time on the matter.


> Let's build our own, make it work, push it live, and see what crumbles.

A good attitude for a weekend project, but not a good attitude for cybersecurity.

> "Free and Opensource" until it comes to crypto which then becomes a touch area of "leave it for the experts" but with full reason makes sense in a way.

It's still very beneficial for it to be Free and Open Source. It can be studied by hobbyists and students, it can be audited and analyzed by anyone at all, or even by automated systems.

Glad it's been helpful.


That analogy only stretches so far.

I'd much rather inherit a project that contains 'sprintf("%010d", i)', or equivalent, than one with yet another dependency. Especially when repeated a thousand times over the course of an entire project.

Every dependency is one whose possible return types, failure modes, and upgrade paths I must understand. Every function is one that I must read. The needle must point quite clearly to the benefits side for a dependency to be worth it.


If your colleagues and replacements can't maintain the code you wrote, the fact that you didn't use 3rd-party code instead is not the problem.


Addendum: just to clarify, installing 3rd-party library dependencies is a good idea. There are many good reasons to do so. This just isn't one of them.

I've encountered ~100% custom codebases that are unmaintainable, and I've encountered ~100% custom codebases that are extremely maintainable, even for a new person joining a team. And we're not talking about leftPad-esque libs; entire frameworks made in-house.

It's rare, and there are reasons to avoid it (locking yourself into your own tech, not benefiting from community innovation, not being "on trend" with common dev conventions). But ease of maintenance is not one of those reasons. If it's hard to maintain, that's on you.


Well, in that case you should document your code extensively. Document it in the source code (but please do not have your files at a 90% comment, 10% code ratio), and document it elsewhere too, so your code will not be filled with documentation. I truly hate going through code that has more documentation than code. I often just use a tool that moves all the comments to a separate file.


Maybe it shouldn't be "don't reinvent the wheel", but rather "reinvent the wheel well"



