"He who works with his hands is a laborer. He who works with his hands and his head is a craftsman. He who works with his hands and his head and his heart is an artist."
It's that last part that is hard for a lot of engineers to bridge. The thing that did it for me was sitting through a LOT of formal usability studies. Seeing people actually use an application you made, watching them make mistakes and get stuck, will change your relationship with your code. Even if you work purely on the back end building APIs, you have customers (other developers, in this case).
Focus on what the code does and why someone would use it; care about your customer. The how only matters to your fellow engineers, and if it's that bad they are going to give you grief for it and you can fix it.
Of course, this backfired spectacularly a few years later at a different company, when a customer stopped paying a lucrative support contract because the software always "just works". I have decidedly mixed feelings about that!
You can't always fix it. If it were that simple, everyone would be using Python 3 already.
Python 3 is the polar opposite of Linus's meltdown about never breaking user space; see: https://lkml.org/lkml/2012/12/23/75
Sadly I like Python 3 and wish it was more popular.
I share your sentiment about having one version of Python, but it's not the users' fault. It's the core dev team's and Guido's.
I used to be like this for the first few years of programming. And I knew a guy at my last job who was exactly like this. The thing that helped me, when I felt the indecision paralysis come on, is to just do something and accept that it may be wrong. You often don't know the best decision in the first place because you lack experience. Doing it the right way by accident, or making the mistake of picking the wrong way helps get you that experience. Be deliberate about always doing something, and over time the paralysis will get less and less as you gain more experience.
1. If I'm stuck and can't decide which path to take, it's usually because I don't have enough data to tell which path is better. The fastest way to get that data is to pick one path arbitrarily and start walking down it. Pretty soon, it will become obvious if it's the wrong path.
2. One of my favorite things about code is that it's infinitely malleable. Especially thanks to the magic of Git, I can undo any change. Nothing is permanent which means no decision is carved in stone. Deciding isn't deciding what to do forever, it's deciding what to do for now.
3. Sometimes I get stuck trying to do a depth-first traversal of the fractal tree of all possible implications of the program. Going depth-first down every single edge case and rathole is not an efficient way to get something up and running.
You really want to go breadth-first or best-first instead. To do that, you need to leave markers in your code of which branches remain to be explored. Those are TODO comments. So instead of falling into a rathole and not making progress on the thing that's at the forefront of my mind right now, I just leave a TODO, "Figure out what to do here if the ___ doesn't ___."
TODOs are great because they help my current velocity by not getting sidetracked and my future velocity by giving me something to pick up on when I finish something else. I pretty frequently search my code for TODO and grab one that sounds fun.
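The pattern costs almost nothing in code. A minimal sketch of what leaving such a marker looks like (the `parse_price` function and its regex are hypothetical, purely for illustration; the point is the TODO in the unexplored branch):

```python
import re

def parse_price(text):
    """Parse a price like '$1,234.56' into an integer number of cents."""
    match = re.search(r"\$?([\d,]+)\.(\d{2})", text)
    if match is None:
        # TODO: Figure out what to do here if the text doesn't contain
        # a recognizable price (raise? log and skip? return a sentinel?).
        # Returning None for now so the happy path keeps moving.
        return None
    dollars = int(match.group(1).replace(",", ""))
    cents = int(match.group(2))
    return dollars * 100 + cents
```

Later, something as simple as `grep -rn TODO .` surfaces every branch you deferred, which is exactly the breadth-first frontier described above.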
Based on a true story.
Not so much: a person not being a good fit for that company.
More: that company not being a good fit for long term viability.
It won't be well engineered; it will be buggy. But the project will be completed and released. If it makes some sales and looks like it has some traction, then I go back and visit the many (many) "//todo - clean this up" comments and refactor everything.
Usually, after a while of adding more features and hacking out nasty design decisions, the best way to do it comes to the fore naturally.
If the prototype is quickly made, it could even count as part of that upfront design process.
"It's clay, not stone."
Yes. Code is like dough.
It's exactly like randomly initializing a neural net before starting with your training data. The results from the random weights will be garbage but that doesn't mean they're not useful.
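The analogy can be made concrete with a toy experiment: start from a random weight, measure how bad it is, then let training pull it toward something useful. This is only an illustrative sketch (a one-parameter model fit with plain gradient descent on made-up data, not anyone's real training code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: the "true" relationship is y = 3x.
x = np.linspace(-1, 1, 50)
y = 3 * x

def loss(w):
    """Mean squared error of the one-parameter model w * x."""
    return np.mean((w * x - y) ** 2)

w = rng.normal()        # random initialization -- garbage at first
initial_loss = loss(w)

for _ in range(100):    # plain gradient descent
    grad = np.mean(2 * (w * x - y) * x)
    w -= 0.5 * grad

final_loss = loss(w)
```

The random starting weight predicts nonsense, but it gives the optimizer somewhere to stand; after a few iterations the loss collapses and `w` lands near 3. Same with a first rough draft of code.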
Well said. IMHO, this is the core of the issue and the thing that has helped me both move on and improve. You have to be OK writing crappy code because, in lieu of experience or someone to help you on the spot, it's the only way forward. As long as you're professional about learning and refactoring, the best you can do is keep moving, keep learning, keep refactoring. Then eventually, help move others through that gauntlet more quickly.
Make it work, and don't succumb to the temptation to worry about what other devs think of your code. The subconscious dialogue that results from that is paralyzing. You can rewrite code n number of times and always believe that n+1 will be better (and it may be).
But the funny thing is that the eventuality that would objectively make the code better is generally some potential future requirement, functional or performance-related, that is imagined. This dictates infinitely flexible design and the obsessive hunt for looser coupling. In practice, however, the eventuality seldom comes to pass. This is where the YAGNI principle has some value. Again, make it work.
Well-stated -- and (at the risk of over-complimenting), very close to the absolute zen of self-driven learning.
The ironic part is that I face this exact problem with the thing I am formally trained in -- writing. The blank page paralyzes me in a way blank vim never does.
It's absolutely intrinsic to the nature of the process. And the best way to virtually guarantee that you won't ever get to find out whether your idea was good or bad, let alone truly awesome, is... insisting on perfection before it ever gets out the gate.
This works because heating up the metal makes it easier for it to arrange itself into a more optimal configuration and then you hammer it into shape and cool it down again to preserve the awesomeness you already reached. Then you repeat that process a thousand times until you have a katana that can cut through steel.
So, don't worry about just implementing a feature very roughly and breaking apart old structures you already put into place, you're just "heating up the metal". To see how everything should work and what it should behave like (i.e. "hammering it into shape"). As long as you then do the clean up/refactoring and put everything in the right place and such, i.e. "cooling it down again", you'll end up with awesome code that's robust and expresses what should be done very well.
So, in a way, imagine you're a smith at the forge, forging your code like a Hattori Hanzo sword, which is also a nice metaphor, I think (with the difference that your code's purpose is hopefully not cutting people's heads off!). :)
Sometimes the best is overkill, and a good enough is significantly cheaper (shorter, easier, available already).
Sometimes the best is not good enough, and you have to rethink your strategy.
You Ain't Gonna Need It. Let that be your mantra. And always promise yourself that you'll refactor (and actually do it!)
I've never understood something until it stood between me and building what I needed to build.
This approach has been great professionally, but lately I'm becoming more interested in theory intensive fields and struggling to find resources that teach from the perspective of someone who wants to build something that requires the knowledge involved, rather than someone who just wants that knowledge.
For example, I'm working on plotting parking tickets in Chicago. Tons of FOIA, lots of data analysis, sanitizing, python, GIS, government induced hair pullings, gnu cli tools, civic meetups, web scraping, etc. Finished this last night using 730k parking tickets after a LOT of cleanup with python. There's no way in hell I would've been able to do this a year ago without stepping into something so deep.
https://plot.ly/~red-bin/6.embed
Yep, I've been there a few times. Meant to go there today, actually. Cool crowd, but I have some general criticisms of the way they do things. I'm almost to the point of thinking that their work is an overall detriment to the city, since there's a strong tendency towards groupthink rather than individual exploration. Their workshops also get nothing done, since the barrier to entry is so low that they're busy answering pretty basic questions.
They actually brought in a hired gun political data scientist who does research on his employer's opposition to help paint a bad picture of opposition. It was seriously presented in a positive light and everything! Hell, when I asked him his thoughts on FOIA, he basically completely shrugged me off and called it useless, despite things like that graph, which was created through FOIA data.
Chicago's head data guy is there almost every time, working on workshops, which is pretty neat. Incredibly interesting guy. But, I have a hard time finding legitimacy in his presence when Chicago's only analysis of parking ticket data (something ChiHackNight would love to work with) is this . Received through FOIA after specifically asking for any analysis they've done on parking tickets... You'd think the head of data would have addressed that.
It all just feels very Feel Good.
Once I get this project to a certain point, I'll probably head back there and show it off and ask if someone else wants to take it over. Eventually, starting one of my own group would be nice, but I'd like to have something under my belt beforehand to help with legitimacy.
I think I need to do a better job of reminding myself that it's not pointless as long as I learn something.
Aren't I, as an engineer, supposed to be constantly entranced by the bleeding edge of technology? Whether it's the latest Lisp dialect, the latest functional programming language, the latest NoSQL database, or whatever the latest is?
Granted the nature of my work is such that I wind up using tons of new technologies every new year, but I don't go out and experiment with them with no particular use in mind.
I'm also a very practical learner. It's hard for me to pick up anything (whether it's a language, technique or a theory) without applying it. So to pick up new technologies I constantly cycle them into new projects.
That said, I get what you're saying. Often it's clear the new fad doesn't provide much utility. In those cases it's like waiting for a cold to be over.
Lately I've been curious about how I could go about building a web framework in Elixir. I started off by spending this past Sunday digging into how to open sockets/ports, send requests from my browser, and reply to them from the Elixir command line REPL.
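For comparison, here is roughly what that exercise looks like in Python rather than Elixir: accept one raw TCP connection, read the request line the browser sends, and hand back a bare HTTP response. The `serve_one` helper and its reply format are my own sketch, not any framework's API:

```python
import socket
import threading

def serve_one(ready):
    """Accept one connection, echo the HTTP request line back in the body."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", 0))            # let the OS pick a free port
    srv.listen(1)
    ready["port"] = srv.getsockname()[1]  # publish the port for the client
    ready["event"].set()

    conn, _ = srv.accept()
    request_line = conn.recv(4096).split(b"\r\n", 1)[0]
    body = b"you sent: " + request_line
    conn.sendall(
        b"HTTP/1.1 200 OK\r\nContent-Length: %d\r\n\r\n%b" % (len(body), body)
    )
    conn.close()
    srv.close()
```

Run it, point a browser at `http://127.0.0.1:<port>/`, and you see your own request line echoed back. Everything a framework adds (routing, parsing, middleware) sits on top of this handful of lines.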
1. Applying your knowledge forces you to confront the gaps or uncertainties in your understanding that you would normally overlook.
2. It allows you to interleave the topic with related subjects, which gives you a broader or more fundamental understanding of it.
3. Attaches an immediate utility value to your study regime, which serves as a better motivator and means to identify when to study hard and go deep, and when to gloss over.
I got several C's in calculus because I was too busy messing with slightly different calculus instead of doing the problem sets for the tests.
I regret nothing, but I wish I could learn the same way most people seem to be able to sometimes.
FWIW all this math stuff a) makes way more sense with some FP knowledge, and b) is not all that hard to understand in the context of a project.
Finally, for extra immersion, start reading Data Tau (the data science version of HN). Just pick out stuff that's related to what you're working on, and then try to understand a whole article or something. That's what I did with HN and I grew exponentially as a programmer because of it.
It's like learning how to physically manipulate a guitar and play notes, but saying "I don't know what to play." That happens a lot too, but musicians have the repertoire of other composers to play if they do not want to write their own music (many don't). With programming, recoding something is called reinventing the wheel, and that's frowned upon.
Ok, sometimes the analogies are forced, but I hope you get it! It's kind of interesting though, because in order to progress we have to build something new, or build on top of what we have, so the technology is forced to evolve. Music can repeat over and over and people don't get tired of it and you could argue pop music is working against the evolution of music.
For anyone who hasn't seen the four chord song... https://www.youtube.com/watch?v=oOlDewpCfZQ Wait, what we were originally talking about now?
The usual progression is something like:
1. Build your first major project as a senior engineer. You make something that works, but is brittle and soon becomes difficult to work with.
2. Build your second major project. This time, you adhere to ALL established best practices. The end product is difficult to configure because you've gone too far in flexibility, and a major pain to maintain because the codebase is 3x larger than it needed to be, and filled to the brim with boilerplate code. You also resist changes that threaten the precious architecture. Everything's perfectly isolated and testable, though!
3. Build your third major project with the minimum needed to get it to work for your client, keeping a few of the best practices to keep it mostly flexible enough for changing requirements. It will eventually grow in unhealthy ways as requirements shift beyond fundamental architectural assumptions, but you already knew this going in.
4. Make the mistake of doing a "2.0" rewrite of your major project. Vow never to do it again.
Here's the code in case you're interested, it's for simulating superconducting circuits:
1. Make it work.
2. Make it not break.
3. Make it fast.
That's it really. Until the performance of something is blocking what you need to do somewhere else leave it the hell alone.
2. Make it right.
Note: The debates over LISP, OOP, Cleanroom, Smalltalk, and so on are all especially interesting.
Unfortunately, his current employer chose a different order: 3, 1, 2. Sometimes they drop 1, 2, or 3 separately or in groups at random. Gotten much better at 2 and 3, though.
2. Ship it before your competitor does.
3. Make it manageable.
4. Make it fast.
Otherwise I find that trying to make something fast before making it manageable usually ends up in crazy hacks that are hard to undo later.
1) Make it work
2) Make it work better
3) Make it work cheap
The world doesn't need more crummy unusable software.
1. Make it
2. Make it work
3. Make it fast
To be a musician you must practice, but you must also perform. These are two different modes. When practicing, the goal is to improve a specific technical skill. You don't concern yourself with expression at all. You will repeat the same passages over and over and many exercises sound really terrible. When you perform, you put your trust in your training and you access an intuitive part of yourself to add feeling and nuance to your playing.
In design thinking, it is common to split activities into generation and synthesis. When generating, you recklessly explore possibilities, tossing them out, following blind alleys, making weird connections. Synthesis is the process of evaluating the generated ideas against some framework, such as goals or known models, finding patterns, simplifying and making coherent.
We need all of it: practice, performance, generation and synthesis. Experience brings a sense of when each mode is appropriate or needed. Being explicit about which mode you are working in removes the stress from attempting to practice while you are generating or similar dissonant mixes.
More concretely, a user-centered process has brought a lot of this mental framework to development for me. Step one is never setting up a test harness, it's understanding the user and their goals, even if the user is yourself.
Everything else flows much more easily from this starting point. You can base your decisions on how they will affect the user instead of on "best" practices that may not have the best outcome in your situation. Things like precision, speed, fidelity, automation, complexity, stability, even whether or not to build something--all involve trade offs that affect the user in different ways. You can experiment more freely knowing that you will evaluate your experiments in terms of their utility for the user.
The art is in applying this left-brain-right-brain dance to trace a path through all the complexity on behalf of the user.
1) Being careful of every change and addition I make. Trying to figure out and predict the effects, scalability, and maintainability of the code I produce. I find that most engineering positions are looking for this, as most employers already have an existing production code base and would not like wild things like crashes and service disruptions to happen to their audience.
2) Occasionally, I get to start on something completely new. Not new as in just a blank page, but new as in blank project, or even blank idea. As exciting as this sounds, over 90% of the time, the idea will die as there is some undiscovered flaw or expectation almost always external to an exceptionally well thought out implementation. The only thing clear about this is the urgency of the deadline, and the failure associated with missing it (missed management expectations, no demo, no presentation... etc). Many know this as building a Minimum Viable Product. In this case, all rules go out the window. Choose expedient tools, create copious crap code and internal design, as long as it works OKish in the end. Having it violently done tomorrow as opposed to 3 days later, or a week later, or a month later, is key. If you're lucky and it limps along (not canned as uninteresting), just change hats and become the step (1) engineer.
Sounds like you desire some more experience with (2), but both types of people are needed. You, yourself, will become more valuable if you can put on both hats, but I wouldn't say that one hat is absolutely better than the other.
I'm good at building stuff, but I'm not a great engineer (I'm ok, but others on my team are way better).
Not to say I don't have my strengths. I'm good at figuring out what to build and the general architecture of how to get it done quickly and working well.
But when it really comes time to scale and scale big, you don't want me coding. You want me looking at the next features etc. etc.
If you love getting into the nitty-gritty of high quality beautifully considered easy to work with code, why do you feel you should focus on 'building stuff'?
Personally, I think focusing on your strength (not to completely ignore building stuff) can be a huge benefit.
Sure, build something, anything, to show it can be done. THEN MAKE SURE IT'S ACTUALLY STABLE.
And then realize that building stuff, and understanding how to build it well, both matter.
i just had a similar conversation, but it dealt with music instead of code. this person felt they couldn't improvise/ create.
the problem isn't with their abilities, it's that they were taught to follow instructions - like the notes on sheet music. jazz musicians play off what they hear instead. however, they first internalized the rules and then forgot them.
you've already got the rules down - so the next step is to "forget" them.
a time constraint like going to a hackathon is a good place to start. you're pushing yourself to create in a short amount of time.
here, you'll focus on shipping...drowning out the critical voice in your head.
it's much like a muscle. the more you build and struggle through, the stronger you'll get.
To quote Friedrich Bauer:
"Software engineering is the part of computer science which is too difficult for the computer scientist."
I've eventually come around to flipping this idea on its head, though. If I design not just some kind of product or solution, but a whole process, starting basically with how I want to run my life and then drilling into specific details from there, the technical knowledge stands on more even ground with other forms of knowledge, and with seeing life itself in a more precious sense.
With that mindset, good scheduling of every day as an end in itself grows vastly more important, and abstraction-for-its-own-sake falls away: All programming problems start by assuming they are solved first with "code that looks like breadboard wiring"  and then working up the abstraction ladder from there. Automating the technical parts of the solution won't guarantee that it's right in any other way, but it will ease the pain of changing the specification. Acknowledging that the problem is messy, that breadboard coding is messy, and that I won't know how to solve everything immediately and cannot depend on a silver bullet, all constitute crucial first steps.
Now I always look for really basic groundwork to be laid out early on - typically, transforming the breadboard code into something that uses a new data structure, or generating the code flow from a stack or a list or a tree - and that no shortcut is possible without compromising the ability of a potential future abstraction - that it just takes a lot of layers to get where I want to go. Breadboard code is assumed to be ideal until demonstrated otherwise, while "x in y lines" hype is to be avoided under the assumption that the solution is brittle and over-modeled towards the demo code. I cannot assume that my valuable production code will need x or work in y lines. I don't abstain from adding dependencies, but I will lean towards copy-paste-own when I find a reason to reuse code.
(German: "Löhne der Arbeit, Zinsen der Zeit, Miete des Landes, Profit des Verstandes"; roughly: wages of labor, interest of time, rent of land, profit of the mind)
I realize that this sort of anticapitalist nonsense is currently fashionable in university, but the irony of it being spouted on ycombinator (which might have something to do with investors, after all) is really quite amusing.
How is that qualitatively different from a medieval lord who chooses to permit peasants to work his fields for their survival, in exchange for growing wealthy off the fruits of their labor?
The rich graciously permitting the rest of us to share a small portion of their wealth, so that we can work to make them more wealth while incidentally keeping a bit for ourselves, is the basis for much of our society; but that doesn't mean there's no other way to do it, nor does it make them inherently virtuous.
(I'm not saying they're inherently monstrous, either. Many capitalists may be well-meaning and try to use their money for good, and that is laudable. But don't pretend that the mere fact of having and investing money makes one a hero.)
> I realize that this sort of anticapitalist nonsense is currently fashionable in university
It's been fashionable in every place and time where abusive capitalists have existed.
> but the irony of it being spouted on ycombinator (which might have something to do with investors, after all) is really quite amusing.
The header up there at the top of the page says "Hacker News". It's fashionable right now to imagine that every hacker yearns for a West Coast startup and loads of venture capital, but it's not actually true.
The societies that have gotten rid of the capitalists haven't made life better for the rest.
This is probably the best description of what we see of him that I have ever seen. Combine that with his verbal judo and one should not be shocked that he was successful.
But I don't think his legacy is one of resting.