
I completely understand the desire to build something practical that's your own.

I just built a small web application framework that I can build some basic personal apps on top of. Just like the author's, it's "shitty". It lacks features, doesn't handle edge cases, isn't particularly secure or performant.

None of that matters. It's locked up behind HTTP authentication and it's for my use only. It has exactly the features I desire, and none that I don't. I understand it top to bottom. It was fun to write and extend. It serves my needs perfectly. I didn't have to fight with, or conform to someone else's notion of how software should be built.

I wish I could upvote this multiple times.

You go into any experienced woodworker's shop, and you can divide it into maybe 20% of the tools that see the vast majority of the action—table saws, band saws, milling machines, proper hammers. But then there is a halo of 60% or more of little helpful things—doodads, mallets, jigs, often even shelving and work surfaces—such that she uses a bunch of these on any build she undertakes, and she made them herself. "Why would I stumble around with someone else's mallet when it's about the same cost and about half a day's work to machine my own mallet on a router, with exactly the weight and shape that I like it to have? Pays for itself."

We are so far from this ideal. All of our software breaks because we imported left-pad and somebody unpublished left-pad. It wasn't even a world-changing, perfectly V8-on-x64-tuned performant version of left-pad, but we gotta "not reinvent the wheel."

I don't think we do it for no reason; there is something about software itself right now that is amateurish. We’re still writing subroutines working on data structures; our brains are FORTRAN even if we call it Java, and whatever that is makes it really hard to decouple the tools that we are using to build software from the final outcome that has been built. The woodworker has a sense of the “shape” of the artifact; we routinely build the equivalent of gorgeous wooden tables that start spewing garbage onto their shiny surfaces after three weeks unattended. “Oh, I didn’t realize that the built-in trash chutes could get clogged.” Wait—what? “Yeah, we allocate more wood at the bottom of the table to make up for the wood that we remove at the top of the table every night so that the surface is always clean.” Your user doesn't clean the table? “Hahaha, you’re silly, users could not possibly have the technical knowledge to keep anything clean, they are not trained in sponges, and I especially could not trust them with something as beautiful and delicate as this table.” The problem has a beautiful sort of fractal quality to it; every zoom-in is more WTF.

All analogies fall down at some point, but the woodworker analogy you make is just not correct, in my opinion.

The problem is that many things a developer considers tools (as in: "the best tool for the job") are actually components.

The equivalent of a woodworker's tool for a software developer would be an editor. Only you or your team might care about which particular brand of sawing machine or editor you use. And they are not an integral part of the end result.

A component for a woodworker would be a hinge or drawer rails. You would probably not appreciate it if your woodworker uses hinges that they made themselves. They are not worth the time to build. They're probably not very reliable. And if they fail they might not be easily replaced or repaired except by the person that made them. So you would only use custom components if the part you need is otherwise unavailable or does not meet specific needs.

Are you a woodworker and making a cabinet for yourself? Go ahead and make your own hinges, but for paid projects you probably should rely on standard components as much as possible; only doing custom woodwork or software engineering where it adds value.

For software you're more likely to get a hinge from CarpentryHub.

Hinges - like everything else on CarpentryHub - are either in a state of permanent development or abandoned. So you design your project for Hinge 13.2, but then you come back a month later - just before you open your Etsy store - and discover that Hinge x.n has now become HingeOS, which is a breaking change.

It also has its own HingeAPI, which is poorly documented, somewhere.

You were used to Hinge's ScrewHole 5.3. But that's now buried inside HingeAPI, which calculates your project budget and lifetime carbon footprint for you, but no longer accepts external screwhole positions, because it works out its own values from the complex project specification you have to supply - defined in DirkLang, a hot new carpentry specification language only ten people know, which replaces HammerLang, which was fifty-five years old and very popular but had a lot of frankly questionable design choices.

And the values HingeAPI returns aren't mutable, because mutability is bad practice.

> CarpentryHub

I too look forward to the 2021 YC batch

As a woodworker I often build things that make my current set of tools work better. The router I own let me make a router table that fits in my workspace and has the features I need; now the router can be used in other ways. Want to cut circles? Make a circle cutter for my router. Dovetails? A jig to make those with my router. All of these tools are add-ons to a pretty basic tool, and they are set up for me.

To go back to the static website, I really like the wiki format (my current favorite is http://pmwiki.org ). I picked it for three reasons: 1) it's a very powerful wiki; 2) I can convert the site into static pages, which lets me build in the wiki format, then publish out; 3) there is a cookbook (a tool another user wrote) to make PDFs: you can select the pages in page order and produce a PDF. To that base I've added my custom items.

Pmwiki in its own right is simple (a router), but allows for cookbooks (custom extensions) that others can add. One of the interesting things is that the current lead developer doesn't really add features to the core; they too create cookbooks to add on. And the core is pretty soft: if you don't like the markup, you can change it.

A long way of saying that I can see writing software like the OP article. But you should look at a core tool and see if you can build around it. @Molf uses hinges as an example. You can do a lot around a stock hinge (paint it, bevel the edges, add "hand crafted tool marks"), a process that is easier than starting from ground zero.

I think the analogy is tortuous, but parent has a fair point -- it's just that left pad is maybe a bad example. If we were to go with components over tools, I'd say very simple building blocks are very useful as components. It's more complex dependencies where the issue lies.

To absolutely torture the analogy: say you have a general hinge that you can attach to any door. And that's fantastic, but it's a lot more complex than it needs to be (it being generalised necessitates this) and it isn't actually completely optimal in most cases (it just works ok, ish). You can use Super Hinge on all your paid projects, and it will work, it's just that it's often better to make a hinge that exactly fits the required specifications.

The other thing this makes me think -- custom-built carpentry is currently very expensive, for real. If you also want all the components to be custom-designed and bespoke, only the wealthiest will be able to afford it.

There are certainly economic factors behind choices in how software is built.

Back to the analogy and actual tools -- what employer is going to pay you to write your own editor to use to write the software the employer wants? It's not just that they are mean and choose not to pay you for this; it's not a cost they could sustain (for all but the biggest employers, I guess. Google, sure, can have an editor team. But even they can't let every individual programmer build their own editor).

On the other hand, most woodworkers I know (unlike, say, construction workers) try to use joinery when possible. The joinery is usually custom-built by necessity, often assisted by a mix of home-built or purchased jigs. The one exception is dowels, which are standard and very generalized. I think the software analogy might be design patterns.

You can also think of lower-level components, like nails and glue, which are even more general and standardized than something like a hinge; maybe these are like the standard library or builtins for a language?

Edit: dropped connection, double post

"don't reinvent the wheel" is specifically for projects which other people will be forced to maintain.

You can reinvent the wheel as many times as you want. If you do that at work, however, your colleagues and your replacement after you leave the job likely won't think too kindly of you.

I think that developers, albeit knowing about "don't reinvent the wheel", will do it nonetheless. I think psychological factors play a role here, like the "IKEA effect":

> The IKEA effect is a cognitive bias in which consumers place a disproportionately high value on products they partially created. The name refers to Swedish manufacturer and furniture retailer IKEA, which sells many items of furniture that require assembly. A 2011 study found that subjects were willing to pay 63% more for furniture they had assembled themselves, than for equivalent pre-assembled items.


I think this effect also applies to developers. They tend to value self-written applications subconsciously higher than "pre-assembled" applications.

I value self-written applications consciously. Because I know everything about them, I can quickly change everything, adapt it as I need, fix any issues. That's not true for other applications and might be impossible for proprietary ones.

I don't think that this is the same with IKEA, though. If I bought an IKEA table, I can't really adapt it unless I'm an advanced woodworker with proper tools. If I'm just an ordinary guy with a screwdriver, I can only follow the script to build that table.

I think that the IKEA analogue is someone building his vim environment from others' plugins by following guides. He does not know C, so he can't hack vim; he does not know vim's configuration language, so he can't hack plugins.

As a developer you will always use tools and rely on applications others have written. If you write your program in C, then you use a C compiler like gcc, and most developers have not read the whole source code of gcc. The same is true for the C library you use, including the standard library: you rely on other developers' code you have never seen.

Therefore, I think the IKEA effect holds true in software development, because you rely on "pre-assembled" items, but I agree that you have a much larger degree of freedom in how you build your application in e.g. C compared to an IKEA build. Maybe this higher degree of freedom even strengthens the effect.

While I don't disagree, there might be many other issues:

- Frameworks and tools are almost always ill-fitting. You have many features you don't need, you lack many features you need. You have dependencies you don't want, code that you don't know, models that might not match your own (and might not be explained well enough, or might not even be consistent with themselves anyway).

- Frameworks and tools are often leaky. They don't solve problems perfectly anyway. They substitute the problem of having to develop a custom solution with the problem of having to figure out how to use the tool effectively, plus any problems derived from the shortcomings of the tool itself, which are often harder to solve in a tool you didn't develop yourself. The better you want to do something, the more disappointed you become with leaky solutions. And other people's leaks are far more uncomfortable than your own.

- Some people just want to have control and know what they are doing, even if they need to put much more work on it. Not productive, but more consistent.

- Writing your own tools deepens your understanding of the landscape. This is particularly significant in a world where we have hundreds of frameworks and tools, but very few coherent explanations of the context they work on and the problems they really try to solve. We have reached the point where if someone wants to make a non-trivial website, we tell them to learn React instead of sending them to learn how the internet and websites work so they can choose a tool for their needs afterwards. And this piles up and giving coherent explanations of any landscape becomes harder because no landscape is coherent anymore.

- Related to the previous, there's a certain trend that emerges where the more dependent we become on frameworks and tools, the more the problems in a given scenario are ignored or solved by adding more abstraction layers and fancier features on tools, instead of working on the root underlying issues, and the problems become fuzzier. Working on the actual problems by yourself helps you see how much cruft and dirt there is behind even well-established technologies... and some of us are masochists and want to know about all that sh*t, because if it remains hidden there's little chance to fix it.

> Frameworks and tools are often leaky. They don't solve problems perfectly anyway.

How about compilers?

You seem very focused on languages and compilers. Honestly, those are the basic tools we use as programmers, and they are designed to be Turing complete, comfortable to use and very flexible. Yet we still discuss them a lot. They are a "tool" that we use constantly, so we are very concerned and particular about them. But yeah, you can't compare frameworks to compilers or programming languages. They are in very different categories.

> But yeah, you can't compare frameworks to compilers or programming languages. They are in very different categories.

That's not what I'm trying to do (to compare them), but people who like to write things from scratch often blame frameworks e.g. they think they are leaky or don't solve problems perfectly anyway.

My question is why they think they can rely on interpreters or compilers, because they also have limitations, bugs, CVEs, leaky abstractions and other known problems. They also form an unknown variable, because most haven't read the source of the compiler or interpreter they use.

What is the likelihood of finding a compiler bug vs a bug in a framework (or a missing method, or an undesired side-effect, etc.)? I suspect that for most types of software work it's many orders of magnitude higher for the latter. Software developers can go their whole career without encountering a compiler bug and can basically assume the compiler is flawless, even as they read blog posts and change-logs informing them that it is not. But they will likely encounter an issue with their framework that irks them on every single project. Maybe this is just my niche of software though (web-based, enterprise, SaaS-type software). I could imagine those working on system software in lower-level languages fighting the compiler much more.

There have been thousands of issues/bugs over the years in gcc, glibc and the kernel, and there are thousands to come, the same as in every big piece of software. I don't think there is any evidence that compilers and interpreters are an exception and generally have fewer bugs than any other piece of software. It is also very much possible to use non-compiler/interpreter software for years and never encounter a bug.

True, but it is a balance. If you import packages for things that can be done in one small function, you force a lot of dependencies on your users, which can be worse.

I think this depends on the specifics. It's not always just about maintainability.

You shouldn't reinvent Django if other members of your team already know Django, as you're imposing a learning cost in doing so.

You rather obviously shouldn't reinvent the Linux kernel, as doing so is an enormous undertaking. It's probably a huge waste of resources, your solution will probably be far inferior, and you're likely to fail anyway.

You shouldn't reimplement TLS, as you will almost certainly do so incorrectly, and it would take a great deal of effort. It's a waste of resources and will worsen your cybersecurity risk profile.

I disagree. Say rather that it's not recommended to reimplement TLS; "shouldn't" is the wrong word to use.

Why should you not? You only learn by developing. Precautions follow, such as not running it in production. But how else do you learn if you do not?

If you're interested in learning about secure sockets and the rest, you should reimplement TLS, because that will teach you the inner workings of TLS.

The vibe "you shouldn't" crushes the development side of things.

Sure, reimplementing TLS is a great way to learn the protocol. We were discussing reimplementing things with an eye to then using that implementation for serious work, but yes, reimplementing as a learning exercise is an exception, provided you never use your toy implementation in production.

> "Shouldn't" is the wrong word to use. Why should you not?

I'll assume that here you're not referring to reimplementing purely as a learning exercise.

I already said why: you will almost certainly do so incorrectly, and this will worsen your cybersecurity risk profile. For serious work, you should use a mature battle-tested implementation like everyone else. The slightest error in a cryptography codebase can lead to severe security vulnerabilities. This is well understood and much has been written on the topic of don't roll your own crypto, e.g. https://security.stackexchange.com/a/18198/

> The vibe "you shouldn't" crushes the development side of things.

No, it's solid cybersecurity advice. It's irresponsible to make use of either an amateur cryptographic scheme, or an amateur implementation of a cryptographic scheme, in a serious codebase. Reimplementing TLS purely as a learning exercise is of course still to be encouraged.

More generally, there is no culture of you shouldn't posing a problem in the software development world. Nuclear engineers and bridge engineers are taught you shouldn't... but software developers are taught just go for it.

Agreeable remarks. I was coming at it from the educational viewpoint, but it gets thrown back to the educational side just the same. I ask how else do you move to production if you don't take your educational work to the next level?

If I create a mature, well-thought-out piece of encryption, there is no irresponsibility in wanting to use it in production. Yes, I should have it validated, and if that validation comes back showing something thoughtless, buggy and disastrous, then it would be irresponsible of me. And one could argue it's irresponsible to run it without validation too, with which I would agree.

I still believe that telling those who create something "you should not be allowed in production, ever; only run battle-proven systems" is the wrong approach. Create something, validate it, and then run it.

Besides, battle-proven systems had to be run in production in the first place, and those battle-tested systems still have vulnerabilities themselves, e.g. Heartbleed - to which Mbed TLS/PolarSSL, someone's reimplementation of TLS, was immune. Validation is the key that's required.

"You should create your own implementation of TLS, but you should not run it in production, as that would be irresponsible, until you have validation." Not just a blanket "you shouldn't", end of. The above is what should be passed to developers. If you feel your work is good enough, then pay the price for validation.

Besides, what makes an expert if you don't make it for yourself?

> I ask how else do you move to production if you don't take your educational work to the next level?

> What makes an expert if you don't make it for yourself?

You study the field, like any other field. This doesn't collide with what I've said.

In this context there needs to be a fairly bright line between learning, and producing real-world cryptographic systems. It might be instructive to have engineering students build an airbag system, but you don't then put it in your car.

> If I create a mature, well thought piece of encryption

Unless you're a professional cryptographer, someone like Bruce Schneier, Tanja Lange, or Filippo Valsorda, it's best to assume that you haven't created a well thought out cryptographic solution. See Schneier's Law. [0] If you have a PhD related to cryptography, and/or a history of employment as a cryptography specialist at a major technology company, then you may have a solid enough grasp of the field to be taken seriously, but short of that, you should leave cryptography to the experts.

It's really hard to get the theory just right, and it's also really hard to get the implementation just right. Fortunately there are existing out-of-the-box solutions that do all the things we want: secure channels, secure file encryption, authentication, etc.

> Yes, I should have it validated

We have a validation process: standards bodies. For instance, in the TLS 1.3 standard, they introduced the requirement for supporting the x25519 algorithm. That algorithm was developed by a team of professional cryptographers, not by a well-meaning dabbler, and it has been subject to careful scrutiny by the cryptographic community.

After standardisation, we see the algorithm implemented in the few trustworthy TLS libraries (e.g. OpenSSL and Google's Tink), which we then adopt for use in the real world.

Serious organisations do not play around with this stuff, they only use trusted standard algorithms. Microsoft/Apple/Amazon/Google have crypto teams who are qualified to write their own implementations of the standard algorithms. The rest of us then use those implementations. Microsoft's Active Directory is backed by the standard Kerberos crypto protocol, for instance.

> I still disbelieve that those who create something being told: "you should not be allowed in production ever" and only run "battle-proven" is the wrong approach. Create something, validate it and then run it.

We agree in a sense, it's just that the bar for considering it battle proven is set very, very high, and for good reason. Developing crypto isn't like styling a webpage with CSS. It's technically challenging to do correctly, it's difficult to know if you've done it correctly, and the consequences of getting it wrong are severe.

> Besides battle-proven systems had to be ran in production in the first place and that those battle-tested systems still have vulnerabilities themselves, ie: HeartBleed. Again which was immune with Mbed TLS/PolarSSL that someone had reimplemented TLS. Validation is the key that's required.

I don't know what 'validation' is meant to mean here. If we had a way to easily detect such issues, we would use it. Again, it's extremely difficult to get this stuff just right. The smallest defect can have terrible consequences.

This applies even when we're doing everything correctly. As you say, we see issues even in major implementations. That doesn't mean that using amateur crypto code is a good idea. It isn't. Every cryptographer agrees that it's a terrible idea to do that. Sometimes aeronautical engineers build dangerous aircraft, but that doesn't mean we let amateurs have a go.

> If you feel your work is good enough, then pay the price for validation.

That isn't how it works. Cryptography is an academic discipline, it makes advances through slow-moving academic publishing and standards bodies, not by paying for a code-review. If you really want to make a contribution to the field, you'll need to make a career of it.

Again though, in a sense there's little need. Much of the best crypto software in the world is Free and Open Source.

If you want to implement TLS as an exercise, or make a neat cryptographic 'toy' program of some sort, then great, but don't gamble anything of value on it (user data, say).

[0] https://www.schneier.com/blog/archives/2011/04/schneiers_law...

It is most interesting, especially with regards to other teams.

My mind is primarily focused on the practical viewpoint of an experienced systems operator/admin rather than a cryptologist.

Let's build our own, make it work, push it live and see what crumbles. I am aware that TLS and the like are all very specialist, to the level that you wouldn't give a Ferrari owner the keys to a jet fighter. What did spark the flame for me is that everything is "Free and Open Source" until it comes to crypto, which then becomes a touchy area of "leave it to the experts", though with full reason it makes sense in a way. Maybe the fact that I know little on the matter is what makes it a large problem.

Learn something new everyday. Thank you for your time on the matter.

> Let's build our own, make it work, push it live and see what crumbles.

A good attitude for a weekend project, but not a good attitude for cybersecurity.

> everything is "Free and Open Source" until it comes to crypto, which then becomes a touchy area of "leave it to the experts", though with full reason it makes sense in a way.

It's still very beneficial for it to be Free and Open Source. It can be studied by hobbyists and students, it can be audited and analyzed by anyone at all, or even by automated systems.

Glad it's been helpful.

That analogy only stretches so far.

I'd much rather inherit a project that contains 'sprintf("%010d", i)', or equivalent, than one with yet another dependency. Especially when repeated a thousand times over the course of an entire project.

Every dependency is something whose possible return types, failure modes, and upgrade paths I must understand in full. Every function is one that I must read. The needle must be quite clearly on the benefits side in order to be worth it.
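To make the comparison concrete: the entire useful surface of the left-pad package is a few lines you could own yourself. This is a sketch, not the actual npm package's source; the `leftPad` name and signature here are just an illustration of the kind of helper being discussed.

```javascript
// Hypothetical dependency-free stand-in for left-pad: prepend `ch`
// to `str` until the result is at least `len` characters long.
function leftPad(str, len, ch = " ") {
  str = String(str);
  while (str.length < len) {
    str = ch + str;
  }
  return str;
}

// Roughly what C's sprintf("%010d", i) does for i = 42:
console.log(leftPad(42, 10, "0")); // "0000000042"
```

These days JavaScript even ships this natively as `String.prototype.padStart` (ES2017), which makes the trade-off even more lopsided in favor of no dependency.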

If your colleagues and replacements can't maintain the code you wrote, the fact that you didn't use 3rd-party code instead is not the problem.

Addendum: just to clarify, installing 3rd-party library dependencies is a good idea. There are many good reasons to do so. This just isn't one of them.

I've encountered ~100% custom codebases that are unmaintainable, and I've encountered ~100% custom codebases that are extremely maintainable, even for a new person joining a team. And we're not talking about leftPad-esque libs; entire frameworks made in-house.

It's rare, and there are reasons to avoid it (locking yourself into your own tech, not benefiting from community innovation, not being "on trend" with common dev conventions). But ease of maintenance is not one of those reasons. If it's hard to maintain, that's on you.

Well, in that case you should document your code extensively. Document the code in the source code (but please do not have your files in a 90% comment 10% code ratio), and document it elsewhere, too, so your code will not be filled with documentation. I truly hate going through code that has more documentation than code. I often just use a tool that moves all the comments to a separate file.

Maybe it shouldn't be "don't reinvent the wheel", but rather "reinvent the wheel well"

A woodworker would have more off the shelf tools if they were equivalently cheap/good/fast/efficient as a custom tool. Physical objects can’t be so freely copied as a software library.

I do agree fully that programmers should strive to understand every layer, and building your own tools is a great way to get experience that helps with that. Yes, left-pad is an unnecessarily granular abstraction. However, in each domain we inevitably depend on other software, and the breadth of that dependency graph is part of the reason why you can accomplish so much with so few people writing software.

To be honest, whenever I put up a new table in our house to get some fresh cleared working surface, within a matter of months, said table has accumulated a lot of clutter, spurring me to soon put up another table to get some cleared working space. If only there were some way to return tables to that uncluttered state.

I am using Hugo for my personal site... and I suffer. Even if I disregard strange formats etc., there are just a few things that Hugo doesn't do, as a design decision, like starting a binary when you are rebuilding the site. This is, to me, an extremely strange "security" decision that just makes updating the site (like putting `software --help` onto a webpage) more annoying, worked around by a bunch of scripts instead of relying on Hugo only.

I think that over time all software starts creeping up with a bunch of features that most people don't want or need, and strange decisions from the past haunt it.

Few people accept it but most people love reinventing the wheel.

I really hate that expression. Nobody EVER reinvents the wheel. We all know about wheels, and how they work and why they are useful. But we invent new types of wheels all the time. Maybe we put teeth on it and invent the cog. Maybe we invent the tire to improve traction and reduce bumps. Maybe we put spokes between the hub and the rim and to reduce rotating weight. These are all new and useful inventions despite still being wheels.

Sometimes an existing wheel design is perfect for our use case, or at least good enough that creating a new one isn't worth the effort. Sometimes a tweaked wheel is what we need. Sometimes a radical departure from established wheel design is required to make a new machine possible.

Build vs buy is an eternal question, and it's reasonable to examine and reexamine it for any project. But "reinvent the wheel" is just hyperbole.

People absolutely build solutions to problems that are already solved, often worse than the existing solutions.

Your complaint about the phrase seems to be that it doesn't well describe the process of adjusting another design to work better for a specific use-case. But that's not what the phrase is trying to describe, it's literally describing the problem where people reinvent that which absolutely does not need to be reinvented.

> Nobody EVER reinvents the wheel. We all know about wheels, and how they work and why they are useful.

You greatly overestimate people's knowledge of wheels.

An average engineer in any field probably has developed enough systems thinking to reason a great deal about the variation of wheels across applications.

It's the axle that was truly revolutionary.

I've seen many applications re-created feature for feature just to do it in a different language. If it doesn't improve the wheel or solve a new problem, that's reinventing the wheel.

My point was not that nobody ever does unnecessary work. That does happen, obviously, and we can certainly object to it.

In the case of rewriting an application feature for feature in another language, that may or may not be a good idea. Maybe we're switching from an interpreted language to a compiled language because we need better performance. Maybe we're replacing a huge pile of impenetrable bash with a well-organized Python app. Maybe we're taking advantage of a library that's only available in another language. Maybe there's a good reason for the rewrite but it's so much work that it's still not worth it. Maybe it's just engineers wanting to use the latest toys because it'll look good on their resumes.

It's totally legit to question the need for a given project. But the phrase "reinvent the wheel" is just scornful insult. It adds nothing to the discussion, and in fact makes it harder to discuss trade-offs.

Is there a better word than "shitty" - slightly more positive, something not totally crap, implying more fun and a whole lot of learning?

The thing is, I really really like the word "shitty" here.

Because, let's face it, most software out there (especially most software built by individual hobbyists) is ... let's say not great.

I realize that this is a cultural thing. In the US for example they tend to favor soft language more. They would call it "beta", "early access" or "personal project".

As a European I tend to see it more as... well, "shitty".

And that is fine. It is in fact refreshing not to have to filter out the marketing buzzwords like "innovative" or "minimalistic". The title immediately put me in a good disposition towards the author. It feels honest.

As a European [1], I have no idea what you refer to when you say "especially most software built by individual hobbyists is [...] not great". Have you seen the websites built professionally by consulting agencies? By public administrations? Sure, a hobbyist just starting out doesn't write great code, but hobbyists care a lot more about quality and will improve a lot faster than those just slinging sausages or making the bucks.

When I peek into your average open source project (made, mostly, by hobbyists) I see good to great quality. Whenever any private code gets leaked, or on your average company, or just inspecting websites, you see the horrors. Just my experience here :)

[1] Spaniard, I do believe our websites might be on the bottom of quality of Europe though, so my experience might be different from other "European"s :)

When I said _individual_ I meant "projects developed by one person only".

In general I agree with you, open source tends to have better quality than proprietary software.

I will point out however that Open Source sees a lot of contribution from non-hobbyists too. That's my case, in fact: my salary for the last 10 years came from working on open source projects.

I have a "shitty" static site generator. It's "shitty" because almost no thought or design went into it. It started as 20-30 lines of code and grew into ~1000. It's got hacks and bad decisions, and it redoes work, like reading files more than once or re-reading the files it already wrote.

But it works, and it has rare features I need, features that AFAIK most of the famous ones do not have and could not easily add (well, at least when I checked).

That said, while it started 100% hand-written, I did add stuff over the years. For example, I had my own templating system but switched to 'handlebars' at some point; I don't actually remember if there was a good reason or not. A long time ago I wanted to add RSS/Atom support. Sure, that should be easy to do on my own (I've done it in the past), but the library I looked into made it easy without too many dependencies. Most recently, I added jsdom to process a few more automated transformations.

The word "shitty" really does describe the code and I'm embarrassed to show it really but it works and I have better things to do than refactor to make it clean.

"bespoke" is probably the one you'd want to use.

"Bespoke" implies made to order for someone else.

I would just use "personal".

"Shitty software" was an old saying even back in 1995:


"We Make Shitty Software... With Bugs!"

Ha Ha!



Purpose-driven? Custom? Personal(ized)? Custom-made?


positively shitty.


