Googling is great, but it only helps when you know what you're looking for. Telling expert humans what you're trying to achieve is far more useful.
This is critical. All sorts of "dumb questions" turn out to have been created by faulty assumptions or bad solutions to a larger problem. Often it's like traveling to the top of a mountain to boil water instead of fixing the broken stove. When asking for help, users should always ask the big picture question ("I need help setting up a server to do X, I'm getting error message Y when doing Z" rather than just the specific problem "I'm getting error message Y when doing Z").
- For example, if you came in here asking "how do I use a jackhammer," we might ask "why do you need to use a jackhammer?"
- If the answer to the latter question is "to knock my grandmother's head off to let out the evil spirits that gave her cancer," then maybe the problem is actually unrelated to jackhammers.
"As a user, I should be able to mark my favorite posts, so that I can find them again." It may turn out that users don't need to find old posts, or would prefer to search instead of looking through an old list.
Tiny little problem. You may not be able to.
Having worked in commercial, academic and government contexts, I can tell you that real life projects will always, always have some kind of hurdle to prevent you from divulging "what you're trying to do."
You might be working from a spec that only tells you what your superiors want you to know, there might be some legal or "market positioning" reason that requires you to preserve secrecy, or there might be department politics that will cause fur to fly if somebody outside your team finds out what you are really doing...
I sure hope my current stab at self-employment works out; I'd really hate to go back...
As frustrating as these little problems can be, really digging in and taking the time to understand them is what experience is all about.
Can't get smaller than that!
My first guess was one character, since you can fix a bug by changing one character. But in the 'P' vs 'p' case, the minimum size could be larger - fixing the calling code is fine, but is there a change that can be made to stop this bug appearing in other calling code? E.g., simplify the interface so that one of the 'p's is protected, so that the wrong Percent can't be set by accident.
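To make that concrete - as a sketch only, with invented names rather than the original bug's code - restricting access to the raw field leaves calling code exactly one setter, so the wrong 'p' can no longer be assigned by accident:

```typescript
// Hypothetical sketch: a class that once exposed two confusable members
// ("Percent" and "percent"). Marking the raw field protected means outside
// code can no longer reach it, so only the validated setter remains.
class Gauge {
  protected percent = 0; // raw value; no longer settable from calling code

  setPercent(p: number): void {
    // Single public entry point, with a range check the raw field lacked.
    if (p < 0 || p > 100) throw new RangeError("percent out of range");
    this.percent = p;
  }

  getPercent(): number {
    return this.percent;
  }
}

const g = new Gauge();
g.setPercent(42);
console.log(g.getPercent()); // 42
// g.percent = 42;  // would now be a compile-time error
```

The one-character change fixes one call site; the interface change removes the whole class of mistake.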
Sounds like he / you acquired some essential domain knowledge in those eight hours. Your code is more efficient, and you understand the problem space. That is what programming is about, not how many lines of code you write.
(When working on a large existing code-base, my goal is to remove lines of code from the project. Writing code means you are writing bugs. Removing code means you are removing bugs :)
I don't think I've ever heard anyone speaking in support of TDD say that the code is fine if the tests pass. The canonical process of TDD explicitly calls for refactoring after the tests pass. That step is crucial and lasts as long as it takes for the programmer to be satisfied with the code (e.g. 8 hours).
I also found no evidence in the article that there was TDD involved. Perhaps I've misunderstood either the article or your comment. If that is the case I'd gladly retract my criticism.
When it removed elements, it was matching both the represented value (an integer) and the node (an object reference). Using && meant that it removed too greedily. Because updates always went "remove then add," this error went undetected until I hit the case where two or more objects shared an identical reprval. Then it removed all of the shared objects, leaving only one behind when I went to query the partition. After going through "Does my query function work right? Does my list implementation work right?" I finally narrowed it down to the add/delete functions.
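The original predicate isn't shown, so this is only a sketch of the failure class described (structure and names invented): a remove keyed on the shared value deletes every node carrying it, so a "remove then add" update leaves one object where there should be two, while a remove keyed on the node reference deletes exactly one.

```typescript
// Hypothetical sketch of the bug class, not the author's code.
interface PNode { reprval: number }

// Too greedy: matching on the represented value alone removes every node
// that happens to share it once duplicates exist.
function removeByValue(list: PNode[], v: number): PNode[] {
  return list.filter(n => n.reprval !== v);
}

// Intended behavior: match the specific node reference, so duplicates survive.
function removeNode(list: PNode[], target: PNode): PNode[] {
  return list.filter(n => n !== target);
}

const a = { reprval: 7 };
const b = { reprval: 7 }; // shares an identical reprval with a
const list = [a, b];

console.log(removeByValue(list, 7).length); // 0 -- both gone
console.log(removeNode(list, b).length);    // 1 -- only b removed

// "Remove then add" update using the greedy remove: the remove wipes both
// duplicates, the add puts one back, and a query finds one object, not two.
const afterUpdate = [...removeByValue(list, 7), b];
console.log(afterUpdate.length); // 1
```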
Now, it would have gone from a "mysterious" error to an obvious one had I done an academic-style step-by-step visualization applet of the entire structure's processes before integration. But I had confidence, when the code was first written, that because this structure was being used with extremely high frequency, the only thing I had to write an explicit test for off the bat was off-by-one ranges causing edge-case "near misses"; anything else would make itself known at runtime after integration.
Time it would have taken for a visualization applet: ~3-6 hours?
Time it took to solve the bug: ~3 hours
It's placing a bet - and the bet is that the bugs are limited to a specific segment of code that can be narrowed down easily; if the problem were architectural in nature I'd be in far deeper shit because that would make the same class of bug appear all over, with any number of different symptoms.
Comments like this:
> wow, you are an idiot. i feel bad for you. you work on image processing for a living?
(Since each channel of RGB is a weighted combination of all three YUV channels, each of RGB must be somewhat lower resolution, and that can't be recovered; so when converted to gray, that lower resolution must remain... so it seems it would be a deficiency...)
This has nothing to do with luminance range though. Even if the output samples only had 4-bit precision, it wouldn't make black into gray.
Are you saying definitively that chrominance is not encoded at lower spatial resolution?
I also said it might be another cause of the difference, not the cause.
Anyone inclined to assume I am talking nonsense might want to look at http://en.wikipedia.org/wiki/4:2:2 first.
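The subsampling that article describes can be sketched in a few lines (pixel values here are invented for illustration): in 4:2:2, luma (Y) is stored at full horizontal resolution, while each chroma channel (Cb, Cr) keeps only one sample per horizontal pair of pixels.

```typescript
// Illustrative sketch of 4:2:2 chroma subsampling: keep one chroma sample
// per pair of pixels. Real encoders typically filter/average rather than
// simply decimating as done here.
function subsample422(chroma: number[]): number[] {
  const out: number[] = [];
  for (let i = 0; i < chroma.length; i += 2) out.push(chroma[i]);
  return out;
}

const y  = [16, 80, 144, 235];     // 4 luma samples: all kept
const cb = [128, 130, 90, 92];     // 4 chroma samples in the source
const cbStored = subsample422(cb); // only 2 survive: [128, 90]

console.log(y.length, cbStored.length); // 4 2
```

So the spatial-resolution loss is confined to the chroma planes, which is why the debate above turns on whether it can show up after conversion back to RGB or to gray.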
Even On2, a competing encoder company, includes instructions with its encoder on how to set up mplayer (which uses ffmpeg) for decoding input videos.