Hacker News

How do you make sure you aren't hitting a local optimum of sorts, where you implement a solution that has terrible side effects you simply aren't aware of (yet)? This seems to happen when you 'learn along the way' rather than starting with deeper learning upfront.



By deliberately selecting your objectives and evaluating possible solutions against them. If the objectives are unclear when you start (e.g. your objective is to achieve a certain level of performance, but you don't yet know what you can adjust, how to measure it, or what delivers the most gain for the least effort), I just get started and revisit once I have something basic going.

Often, a basic, naive solution will give you the understanding and domain knowledge you need to be able to read what others have said about the same problem.

It also allows you to more easily dive into the solution as implemented by popular open source projects. Even if they don't write about it, you can often learn a lot from the code -- even more if the rationale is written in comments in the code.

For example, let's say you're contemplating how to implement an audit log for your product. You've never done this before, but you're experienced enough to know that you might hit unexpected hurdles (such as features that are difficult to implement) well into development, or maybe after deployment. One thing you might do is research commercial open-source products that have an audit log, and read their code to understand how they did it and perhaps why. Another way to figure out the rationale is to look at their full set of features and ask yourself how each feature might be implemented in an alternative design. If an alternative doesn't work well with some feature, that could explain why they chose one implementation over the other. Finally, you can ask yourself whether you need those features, which brings you back to the original question: what are your objectives?
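To make the audit log example concrete, here is a minimal sketch of what a naive first pass might look like. All names here (`AuditEntry`, `AuditLog`, `byActor`) are hypothetical, not taken from any real product; a real implementation would persist entries durably and protect them from tampering.

```typescript
// Minimal append-only audit log sketch (all names hypothetical).

interface AuditEntry {
  timestamp: string;                 // ISO-8601 time of the action
  actor: string;                     // who performed the action
  action: string;                    // what happened, e.g. "invoice.create"
  details: Record<string, unknown>;  // free-form context
}

class AuditLog {
  private entries: AuditEntry[] = [];

  // Append-only: there is deliberately no update or delete method,
  // since audit logs are expected to be immutable.
  record(actor: string, action: string, details: Record<string, unknown> = {}): void {
    this.entries.push({ timestamp: new Date().toISOString(), actor, action, details });
  }

  // Querying by actor is exactly the kind of feature to check against
  // alternative designs -- e.g. logging to flat files makes this harder.
  byActor(actor: string): AuditEntry[] {
    return this.entries.filter(e => e.actor === actor);
  }
}

const log = new AuditLog();
log.record("alice", "invoice.create", { invoiceId: 42 });
log.record("bob", "invoice.delete", { invoiceId: 42 });
console.log(log.byActor("alice").length); // 1
```

Even a toy version like this surfaces the design questions the research step would answer: where entries are stored, what can be queried, and how immutability is enforced.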

Or the short version: there are indirect ways to learn from the experience of others. Looking at a battle-tested implementation of a feature you want to add is one -- and it's perhaps even better (= more enlightening) than reading some blog posts you might find.


I do that partially by focusing explicitly on a single objective like performance or test coverage.

In reality you need to compromise, but by keeping the scope of learning projects artificially small, you learn a lot about that one aspect and the trade-offs involved.

For example, when I wanted to learn more about testing front-end applications in Angular, I wrote all types of different tests for a small sample project. By doing that and reflecting on how I could incorporate those tests at work, I learned a lot about structuring my code to be more testable, when and how to use mocks, Angular's built-in testing utilities, and much more.
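The "structure code to be more testable" lesson can be sketched in a framework-agnostic way: depend on an interface rather than a concrete service, so a test can substitute a hand-written mock. The names below (`UserApi`, `GreetingComponent`) are hypothetical; in Angular the real service would be supplied through its dependency injection system rather than a constructor call.

```typescript
// Hypothetical service contract; the real one might wrap an HTTP client.
interface UserApi {
  fetchName(id: number): string;
}

// The component depends on the interface, not a concrete implementation,
// so tests can substitute a mock without any network access.
class GreetingComponent {
  constructor(private api: UserApi) {}

  greeting(id: number): string {
    return `Hello, ${this.api.fetchName(id)}!`;
  }
}

// A hand-written mock standing in for the real service in tests.
const mockApi: UserApi = {
  fetchName: (id: number) => (id === 1 ? "Ada" : "unknown"),
};

const component = new GreetingComponent(mockApi);
console.log(component.greeting(1)); // "Hello, Ada!"
```

The same shape is what makes mocking straightforward in a real test runner: the test constructs the component with a fake, and no framework machinery is needed to exercise the logic.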

By going very narrow it's easier to go deep. You won't learn about all the side effects and interactions with decisions in other areas, but at least you know more options and can try to find a better optimum.


Anything you can come up with is a local optimum if you zoom out enough. If it gets the job done, no need to worry about that. If it doesn't, refactoring it will be a good learning experience.


Yes, when you're learning something you have to constantly refactor. It's part of the fun. You don't have hard deadlines, so you don't have to get it right the first time.



