Fast forward a few years - Experience, good programming habits, and the gratuitous use of assertions. Now I spend a negligible amount of time debugging, and it never ceases to amaze me how frequently things just work the first time.
Edit: I guess I want to say that there is one, and only one, bottom line: the relentless, ruthless pursuit of quality. It takes time to develop the good habits and watch for the pitfalls, but once you're there you develop your software products in a quarter of the time, with one tenth the stress, and everyone on your team feels proud of themselves and each other. Then with your free time you can focus on what's really important - your business and your life.
I spend more time creating than debugging when I write the code from scratch, but I still say I spend probably at most 70% coding if it is a new project. The rest is in failing tests and finding those weird, screwed-up errors that take multiple days of debugging to find. For existing code I take over, it is probably usually more like a max of 50-60% coding, depending on the original author's skill.
Furthermore, a lot of the extra time comes not from the initial coding exercise, but from the diligence and follow-up required. I.e.: Cool, I've implemented a feature, but did I go through carefully and ensure that I've removed all my console.logs and commented-out experiments? Did I leave dead code anywhere? Did I make any changes that require renaming or refactoring of other parts of the codebase? I never submit a PR these days without carefully going through my own git diff and double-checking myself. I almost always catch something when I do. These things take time.
Not that it happens much.
My reason for the truncation is that the thinking isn't just about what tests to write to validate the semantics you want, but what semantics do you even WANT? Happy path may take an hour to figure out, but getting to a point (for a reasonably complex system) where I feel confident that I've enumerated the "perimeter" of the mental model, such that there are fewer surprises, gotchas, and odd edge cases, usually takes significantly more contemplation of the problem space than modern big-co "DELIVER FEATURES NOW NOW NOW" would often like - certainly more time than is spent actually implementing, by and large.
(you may sense some bitterness; it is largely because a respected mentor of mine made significant effort to stress to me that if I'm leaning on a debugger, or having to printf a lot, I probably don't UNDERSTAND what's going on and can fall prey to far more severe logical issues; and despite my observation that I became a far more robust engineer utilizing this strategy, it's often hard to incentivise balancing this against simply shipping, especially given the difficulty of empirically justifying "I need a day to think really hard about this problem to make sure it's not subtly wrong" against the rebuttal of "what's the ROI")
1/3 planning
1/6 coding
1/4 component test and early system test
1/4 system test, all components in hand.
This differs from conventional scheduling
in several important ways:
1. The fraction devoted to planning is larger than normal.
2. The half of the schedule devoted to debugging of completed code is much larger than normal.
3. The part that is easy to estimate, i.e., coding, is given only one-sixth of the schedule.
"The data on the percentage of time spent in error removal has varied over the years, but the usual figures are 20-20-20-40. That is, 20 percent for requirements, 20 percent for design, 20 percent for coding (intuition suggests to most programmers that here is where the time is spent, but intuition is very wrong), and 40 percent for error removal."
For me debugging usually means "figuring out why we had an outage." This means looking at: 1) application logs, 2) server metrics, and 3) source code associated with the failed applications.
I recently had to ssh and run ngrep on 8(!) servers to see how groups of messages passed around, and then look at timestamps to correlate what happened. It was very tedious. This could have been saved by better debug logging; we could have switched that on for 2 minutes, run the tests, and then looked at everything in Logstash.
When this happens, I end up spending a ton of time tracking down errors. On a bad week, this can be half my time.
So to me, debugging is as much thinking about how you'll have to solve errors in the future and planning for it as it is writing unit tests and tweaking code.
I am working on an existing distributed system with many moving pieces, which is rather prone to outages. This is fintech, so outages mean a lot of money for a lot of people. So my job involves overhauling the existing system, as I upgrade bits of the system slowly: A full rewrite at once would be madness, but I suspect nothing in the current system will remain in two years.
The biggest time sinks are stress testing any of the newer pieces that I try to bring in, followed by incident remediation. There's an incident that requires a human hand to fix it every couple of weeks or so, and I end up spending about three days each time writing better error handling code, adding observability and alerts, and if something is really recurring, writing automation to make the problem fix itself.
This is a fact of life in any distributed system that was written fast: People start out happy because it works most of the time, but as you want the 4th and 5th nine, you need people hardening the system. This is something that is very hard to do as you build anyway: While unit tests are good, there are entire layers of behavior nobody will be able to spec properly by looking at one piece at a time, so stress testing, gamedays and such are the only ways to make sure not just that the system works to a spec, but that we can even come up with a spec that behaves the way we want in practice.
There's value in evaluating scenarios in your head, but I've also seen what happens when mathematicians use that as their only weapon in a distributed system: Months are spent making sure the system is correct, but then lots of effort is spent on scenarios that are more theoretical than practical, and other scenarios are ignored, even though they occur a lot in practice.
In this respect, it's not very different from entrepreneurship: Getting an MVP out the door and doing things by hand instead of using automation is going to beat spending a lot of time making a product without having any idea of what the market really wants.
I've sunk a lot of time into trying to change this. Among other changes, I've:
- Improved crash dump collection, to spend less time reproducing bugs and be more thorough in addressing them.
- Improved code debuggability - for example, writing scripts to inject call stack information into ActionScript and Java via disassembly, which I can then display on assert, especially on platforms where I have unreliable or incomplete debuggers.
- Learned and used defensive coding techniques to make bugs fewer in number, shallower, and caught more quickly and with more context.
- Wrote thorough tests to catch said bugs before I even run my main application, and added more edge cases to the inputs.
- Learned more tools to catch bugs I might not even know exist - valgrind, address sanitizer, static analyzers, fuzz testers, ...
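The defensive-coding point above might look something like this hypothetical sketch (the config-parsing example is invented, not from the original comment): validate at the boundary and put the offending value in the error, so bugs stay shallow and carry their own context.

```javascript
// Hypothetical example of defensive coding: fail fast at the boundary
// with an error message that names the actual bad value, instead of
// letting a malformed config cause a confusing failure much later.
function parseConfig(raw) {
  let cfg;
  try {
    cfg = JSON.parse(raw);
  } catch (e) {
    throw new Error(`config is not valid JSON: ${e.message}`);
  }
  if (typeof cfg.port !== 'number' || cfg.port < 1 || cfg.port > 65535) {
    throw new Error(`config.port must be 1-65535, got ${JSON.stringify(cfg.port)}`);
  }
  return cfg;
}
```

The error messages double as documentation of the assumptions, which is most of what makes the eventual bug report actionable.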
I spend much less time debugging my own code now. If I'm lucky, I'll work on projects where I don't have to debug my coworker's code all that much either. That still leaves debugging 3rd party libraries and tools - which I may lack the source to entirely - that I suddenly have more free time to really properly investigate and get to the root cause of.
The only things I enjoy now are exercise and music.
Also, see if you can find some satisfaction in expanding your programming skills through reading and learning. Not sure your experience level here, but I would recommend that to anyone -- it has made a huge difference to me personally.
I intend to spend no free time on my career outside of work. There are far too many other things in life I would prefer to work on and experience, which is why I wonder if this is the right field for me.
Sorry if I sound so negative, it's just how I've felt since first starting out in my field.
Nope, but my .vimrc is a work of art.
             Analysis Programming Debugging Overhead
------------ -------- ----------- --------- --------
My Own Stuff      30%         60%       10%       0%
Others' Code      50%         10%       30%      10%
Enterprise        10%         10%       10%      70%
The overhead consists of endless meetings that never reach consensus, but arguably this could be filed under analysis.
But I have the luxury to work in a result-oriented environment with people too experienced to fall for "agile". So I can spend half of my week in a cafe with pen & paper as long as the project is done by Friday night.
When I started, the bulk of my time was debugging my own code. I am gifted with the ability to write vast swathes of code in a short amount of time and when I was younger, it was vast swathes of shit code.
A little later in my career (years 3.5-5), I spent more time coding and less time debugging. I designed my code better, used better patterns, and generally was just an all-around better engineer.
I've come into the third stage of my career now, where I spend a good portion of my time debugging junior engineers' code in a complex system I work on. In particular, my focus is usually on reliability and performance. I don't tend to debug junior engineers' one-off issues, but rather the subtle regressions introduced by seemingly harmless changes.
In this third stage, I still write a lot of code, but much more of it is investigative and refinement over existing ideas with occasional injection of something wholly new.
- One project is in active development, and I probably spend about 70-80% of keyboard time coding with 20% debugging.
- A separate project is in maintenance mode, and obviously most of my time on it is debugging as bugs come in. So probably opposite, 70-80% debugging there.
- Sometimes feature extension requests come in, in which case it's probably closer to 50/50 on that project.
So, they're all bugs, and in a sense, all coding is debugging. New-feature, regression, existing (from previous release), and escalation bugs. They're all basically the same thing: Identify the deficit, write a fix (includes what you might have meant by "debugging"), write tests to cover the changed behavior, check it in, deploy/release.
15% stack overflow
25% email archives
15% commit logs
10% navigating code, spelunking
5% writing tests to confirm config/state/feature availability
Maintaining - if this means soft feature creep, then 10%
Maintaining - if this means bug fixes and other things, put in implementation, 5%
I genuinely like this
New functionality 20%, including sitting with users for new functionality requests, seeing their workflow
Other stuff. Like filling in timesheets, which assume hours can accurately be attributed to discrete tasks for discrete people any and all of the time.*
* Just set goals for staff. Do staff achieve their goals? If so, why timesheet? Or just timesheet roughly; my hour-by-hour, 7-day-per-week sheet is a pain.
Most things are several small (<10 line) functions.
The most complex thing I've built is a pre-qualification form. Thank god for moment.js, because I never thought it would be so difficult to calculate if someone is between the ages of 40 and 82 (or will be 40 by October 15th of next year).
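For what it's worth, a rough sketch of that age-window check using plain Date rather than moment.js (the October 15th cutoff comes from the comment above; the function name and the exact bounds handling are assumptions):

```javascript
// Sketch of the pre-qualification check described above: is the person
// between 40 and 82, or will they turn 40 by October 15th of next year?
// Uses plain Date instead of moment.js; boundary handling is assumed.
function qualifies(birthDate, now = new Date()) {
  // Age on a given date, accounting for whether the birthday has passed.
  const ageOn = (date) => {
    const age = date.getFullYear() - birthDate.getFullYear();
    const hadBirthday =
      date.getMonth() > birthDate.getMonth() ||
      (date.getMonth() === birthDate.getMonth() &&
       date.getDate() >= birthDate.getDate());
    return hadBirthday ? age : age - 1;
  };

  const age = ageOn(now);
  if (age > 82) return false;  // over the upper bound
  if (age >= 40) return true;  // already in range
  // October 15th of next year (months are 0-indexed, so 9 = October).
  const cutoff = new Date(now.getFullYear() + 1, 9, 15);
  return ageOn(cutoff) >= 40;
}
```

Even in this toy form, the off-by-one traps (has the birthday passed yet? 0-indexed months?) hint at why a date library felt necessary.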
That said, about the question asked: I can attest to the fact that, in general, thinking through and writing good tests greatly reduces time spent debugging. Also, adding judicious and useful logging for the critical / edge cases helps a whole lot.
Bonus points if the project started as one thing and pivoted to something completely different midway through development, and about 50% of the code is completely unused.
There are certainly practices that reduce the amount of debugging, but it's all relative. Personally, the question for me is nearer something like:
> When is the right time to let go of my current approach?
I believe that the more time you spend debugging, the worse the code you're debugging is. Now if you're spending most of your time debugging your own code, then likely you're a novice who hasn't learned the many ways to write quality code that "just works".
If, on the other hand, you inherited a codebase from someone who did not follow the tenet of "develop your code as if the next maintainer is an axe murderer who knows where you live", then spending a great deal of time debugging is understandable and likely unavoidable.
Personally, during the time that I get paid for programming, most of my time is spent writing tests and developing features.
On the side, however, I have a project that I inherited from someone who clearly never intended to have another person look at the code, and most of my time is spent spelunking and debugging (and slowly replacing every last line).
So I'm trying to modernize it by building a separate app that can interoperate with the 4 different schemas and do all the same things that the old app did. It's an interesting exercise in replacing legacy code piece by piece while still using it (all the leagues would not function if the site didn't work, and there's basically 3 weeks out of the year when the leagues aren't playing).
Professionally, on the other hand, I work at a startup where I've more or less had my hands in the code from day one.
But I am not sure how I can improve. I am not sure whether anyone else has faced this, but I feel like beginners really do feel the same way.
All the fun lies in debugging; I sometimes love it. I find funny bugs in my code. But it is so time-consuming, and I really want to reduce that. Not sure how. Any help would be appreciated.
Honestly, I'm probably better at it than building new features anyway.
Usually I can get through a chunk of code with no problems, but when that one inevitable bug arises, it will take up a lot of time through trial and error, Stack Overflow, and just general googling to find a solution.
That said, I'd say only about one-third of my time is spent on code. I spend significantly more of my time doing operations work and having meetings.
I mean period. Terminating period.