A theory of how developers seek information (utk.edu)
121 points by jermaustin1 7 months ago | 42 comments



When I'm helping someone debug or just compile something, it seems like they very often jump past potentially important information before fully comprehending it. I find I'm constantly saying "wait, go back, what did the output of that command say?" or "wait, that error message doesn't make any sense based on what we changed last time, why is it that error?" when they try to just go back to editing the code again. I've used this "comprehension-first" debugging strategy (just made that term up) to successfully debug systems that I am less familiar with than the person I'm helping. I'm not sure what to make of this.


I don't know how many colleagues I helped with 'git' just by telling them to read what 'git status' is actually telling them.

Some people's minds just go blank when they see an error, and it prevents them from solving it themselves. Often they're so used to copy-pasting, and not being empowered by management and colleagues in some of their approaches drains a lot of their confidence.


I always have to force myself to remember that users will actively avoid reading any kind of error message you present to them, and just clear it away or close a modal automatically. There's an entire dark art to presenting this information such that it breaks through that conditioned response.

The other thing I'm always kicking myself about is forgetting that users will just start poking at things and reclicking if they don't immediately get some feedback that things are happening.


It's called impatience. When you're spending what seems like all day finding the damn bug, you don't want to waste any more time on trivia. We're all human; we all get impatient.


I think it's more that there's this curve in your experience level where, as you start out, you have no idea how to contextualize error messages outside of your code, so you just pick the thing that seems most related to your particular problem and try to Google it. And newbies are very bad at picking the relevant bits, so searching for the errors they are seeing often doesn't help them address the problem they are actually trying to solve. As you gain experience, you are able to pick out the signal from the noise, and so going through the errors you see becomes a valuable exercise.


Not only developers, but I think most people using computers.

It feels like people have a tunnel vision focusing only on the parts of the screen they're accustomed to. The other parts are ignored.

They try something, fail, try the same thing again, fail again, and repeat until they either give up or decide to slow down and explore the corners of the screen (which often leads to them finding the error).

I wonder if this is somehow related to the "magical halo" of technology. First because they try the same thing expecting different results, second because they don't explore the unknown parts of the system (maybe for fear they'll make things worse?).


I remember helping a friend on a uni project we were writing in C. Of course, a C compiler very rarely outputs an actual error, just a bunch of warnings. My friend saw the warnings, figured "they're not errors, so they're OK to ignore for now", and scrolled past them, only to find their code flat-out didn't work. When I pointed out that the warnings were actually errors and we went through and fixed them all, it magically started working.

I think debugging is a hard-won skill that comes from fixing your own mistakes and having the patience to trawl through logs for hints.


Yeah, during high school and uni years, whenever I helped someone with C/C++ assignments or projects, the very first thing I taught them was:

1) How to add `-ansi -pedantic -Wall` to their compilation flags.

2) That they absolutely have to do it whenever they're coding C/C++.

3) That they need to read and then clear out all the warnings before continuing to code.

Suddenly, their programs started to work most of the time, instead of crashing or hanging.

I still give this advice whenever I teach people, but with a modernized set of flags. That'll be `-Wall -Wextra` and `-std=c++17`, or whichever standard is most current for their situation. Recently I've been teaching people who use MSVC Community Edition, so I tell them where to bump the warning level in the settings - and the IDE is good at catching and annotating typical mistakes too.

Myself, I do all that + run clang-tidy for good measure.

(Another C/C++ thing that beginners need to be told is that, for any given translation unit with compile errors in it, they should always focus on the first couple errors, particularly on the very first one. A compilation error usually confuses the compiler, so anything past the first one or two is usually garbage, and disappears if you fix the real ones.)


There was a slightly frustrating time in my life when I had to compile a single source base with several compilers (MSVC, Clang, GCC, CodeWarrior and ARMCC), whose tolerance for various quirks was rather incompatible.

I do not remember the details anymore, but some code provoked a compiler warning from one of the compilers, while its easiest fix provoked a compiler warning from another of the compilers.

I wanted to keep my -Wall, but it required strange twists of code sometimes.


Knowing why a warning happens and being able to say that a particular warning is safe to suppress in a given instance is a reasonable way to deal with that case. There isn't a lot else you can really do... but you need to know the why part and have a documented explanation for the suppression.


Love it. The same strategy was used by a colleague with a struggling team member. For that case, it was: show me a diagram. For nearly every question. The struggling dev just wanted to solve the issue so much, they were not stopping to understand the issue. Slow down to go faster.


> Slow down to go faster

Yes! I had a similar problem with bigger projects. I used to start coding too early, before understanding the problem completely.

Since then, I have decided to write an essay about everything I am about to do. This slows me down and makes me look at every detail that might be crucial to the project's execution.

These docs then become a sort of data lake about the process of building the project, from which I can later derive documentation or presentation slides, or to which I can send people when they ask why I did something the way I did.


Same deal. I like to imagine I'm just plain smarter than others because of it, but smart is a pretty reductive term. I think it's more that debugging is an acquired skill, and one that we've practiced more than them.


> I find I'm constantly saying "wait go back what did the output of that command say" or "wait that error message doesn't make any sense based on what we changed last time, why is it that error?"

Along those same lines, I like to use the phrase "sanity check" a lot. It's meant to imply nothing should be wrong yet but we can't rule anything out, so let's check every single step and make sure they match our expectations.

For me it seems to work to get the other person to explain what they're expecting as they go through the steps (forcing them into rubber-duck debugging with me as the duck), and many times has caused them to find something they were just glossing over before without me even having to point it out.


"Comprehension-first" debugging, love it.

I use a phrase when this whole "jumping past messages" thing is happening, so often in fact that it's become a meme at our company: "You are clicking like a mad woman/man!"


Yeah, "lean back and ask yourself what really happened" is a great debugging method.

But, counterintuitively, it actually might work better with the projects you are less familiar with. Once you are too deep in some rabbit hole, all the details you know may have a negative impact on your ability to orient yourself.


We are all victims of confirmation bias. Following hunches and ignoring evidence, it's not just for lazy tv cops, developers can do it too!


I run out of cognitive energy for coding faster than I run out of time. Sometimes the slow and stupid way is optimal because it saves cognitive energy.

I suspect coders "forage" more than this time-based model predicts because it's cognitively easier than deciding where to "navigate" or how to "enrich".


What Stack Overflow was invented for.


> Novice developers have considerable difficulty in foraging among different versions of the same code.

In my experience developers generally have difficulty foraging in any single version of code, even code they fully control. Leaving my last job I did a considerable amount of knowledge transfer (20 hours of dedicated sessions, and nearly all of the rest of the month in informal KT), and something that kept surprising me was how much time I spent sharing debugging techniques that were either exploratory (walking up the call stack, stepping through execution) or were platform-specific pain points that are very well understood (async boundaries).

Overall my takeaway was that most people just read docs and existing bug reports, and if they don’t find the answer there they tend to get lost and give up.


Can’t edit so reply-edit to add: I didn’t mean to suggest I’m good at it either! I’m just persistent. To a fault, which is sometimes to my benefit.


Predator and prey, you say?

So, developers are some kind of grazers for the most part, since they do not harm their prey? They could even be seen as pollinators and plant seed spreaders?

So information in this model is the analogy of plant life, that can be harvested? And obviously there is little evolutionary pressure to evade being harvested. On the contrary, as I implied, it could be called advantageous to be harvested.

Now, that is a fun metaphor. Just for a moment consider some kind of information that tries to evade being consumed. And as a next step in this evolutionary process one that feeds on other information and that is not human.

An artificially evolved hunter-gatherer that feeds on information.

Now, that would be some interesting being, wouldn't it?


“A person, known as the predator, seeks information, known as the prey.”

Academics are such oddballs they make software developers look normcore.


Wouldn’t this mean that editors which have a more “mechanical” means of getting information (like a keyboard shortcut) fare better than ones with a complicated graphical display, as the cognitive load is reduced? In this case, Emacs or Vim might be better than VS Code, despite the learning curve.

I fear a bit of confirmation bias on this one, so correct me if I’m wrong.


We need to tease out several different things here. In my opinion:

- Information density is good. If you can stuff more data on screen, by using annotations, underlines, fringe markers, whatnot - it's usually a win. As long as the indicators don't interfere with each other, the human brain can quickly learn to filter them with near-zero effort.

- Clicking is bad ergonomics. It has high enough overhead compared to keyboard that it's enough to interfere with focus and the state of flow.

- Most animations are bad ergonomics. This, ironically, applies to the author's CodeRibbon[0]. It looks like a great idea. Then you think about how to implement it in Vim/Emacs, with fewer animations. Then you realize that you can get it with some tweaks to how you switch buffer/window configurations, and it'll give you the same benefits with better ergonomics.

- The article underexplores the third choice of the predator - enriching the environment. Or, perhaps, we should introduce a fourth choice[1]: meta-enrichment, or developing technology. That is, reconfiguring and extending your tooling to offer you more/different contextual cues (foraging), navigation tools (navigation) and operations (enrichment). This is where Emacs shines above all - for a proficient Emacs user, meta-enrichment is something one just does. Of course, as a developer living in Emacs, I suffer from confirmation bias[2] :).

--

[0] - https://web.eecs.utk.edu/~azh/blog/coderibbon.html

[1] - Incidentally, one that distinguishes humans from the rest of life on Earth.

[2] - I mean, over the past year I spent about a week's worth of time on developing what is now 1400 lines of Emacs Lisp[3] implementing some creature comforts, including a "control panel" for a particular flavor of development I'm doing. I could've avoided spending that week if I used VS Code instead, but then I'd also lose all the benefits I get from a tool that fits like a glove.

[3] - That's 1400 lines of my own Elisp, on top of the ton of third-party elisp packages and customizations specific to them.


The cognitive load would come from unfamiliarity more than anything. It dissipates once you learn and use the system enough, regardless of how you interact with it (assuming the UI is consistent and reliable, which far too many aren't).


I've been using the same IDE for the last 10 years because that is what my colleagues use. Over the last decade the UI has changed significantly. Not only that, the keyboard shortcuts also change, and often don't work the same across different IDEs in the suite. For example, the shortcut to vertically select a column of text has changed at least twice. I like JetBrains, but I'm considering switching to something simpler that changes less often. I might miss out on some advanced features, but I don't use those often, and maybe I don't need them anyway.


vi/vim/neovim has a very consistent interface. It does have a learning curve but it is very pleasant to use once you've learned it. A bonus is that you can now edit text on pretty much any machine without installing anything, desktop or server.


I moved from the Jetbrains stack to vim ~3 years ago and haven't looked back. Mastering vim of course takes some time, but getting up and running and being productive does not take too long at all.

It did help that I was mainly writing Go at the time, which has the great vim-go plugin. Not sure how I would have done if I was mainly writing another language (C#, Java etc. come to mind as perhaps benefiting more from a full-fledged IDE).


I just use the vim plugin in Jetbrains products. I get vim navigation, editing modes, basic macros. Vim emulation is not thorough, so I can’t get more complex features, but they seem unnecessary when I get Jetbrains features.


Every time I Ctrl-V and then gq in IntelliJ I die a little inside. Why won't it just work!


That's outside of my bag o' tricks. There was definitely some pain getting things to work to satisfaction in the Jetbrains tools. In a few cases I simply had to adopt a different workflow. But I'm feeling pretty happy about it now.


I can't say I understand the formula in the article, but I have interacted with emacs/vim and VS Code to some degree. I've also used several other IDEs (Eclipse, NetBeans, IntelliJ, etc.), and I have to say that except for special circumstances (Java- or C#-like languages), the fact that I can easily navigate code via the keyboard and have multiple onscreen views into multiple files pushed the older editors far to the head of the pack for me. Unless there's some other compelling reason to use an IDE, I very quickly dismiss an editing tool in favor of vim these days because of that multi-window/view aspect. I don't need most of the other things wasting real estate on screen, and I hate having to bounce back and forth between files if I can see the parts I need at the same time. It just flows better. If a tool can't give me at least a 2x2 grid of views into source files, I'll probably drop it without looking much further ...


IME more "mechanical" means definitely promote code familiarity. For some languages I use vim as my primary editor -- generally anything frontend. OTOH, at least for me, I'm definitely a lot slower with pure vim when working with Java. I use IntelliJ for that. However, I use IdeaVim to get the vim key bindings, so 80% of the time I'm actually still using my IDE very similarly to how I use vim, so I think your point regarding "mechanical" means still holds. My biggest complaint is mainly that I wish I could access more of my IDE's features with just vim key bindings.

Sometimes when I'm really and truly stuck, I will open up a java file in vim. In those situations, the forced slow-down is actually an advantage, and I get a much deeper understanding of what I'm looking at. I can't code well this way (perhaps that's on me), but it can be a powerful aid in debugging at times.


Perhaps I'm being overly pedantic, but "theories" are proved. When I read "theory" I assume that it has been proved. Otherwise, people should really start using "hypothesis" or "conjecture" more often.


You're thinking of "theorem".


That's a wrong assumption. Think of theories as frameworks of explanations that can be tested on evidence to confirm or falsify them, if they are scientific.



Theories can never be proven, we can only approach certainty as more evidence arrives. That's the nature of empirical pursuits. Proof only exists in the world of deductive pursuits.


Is that true? I thought theories were falsifiable explanations of a series of laws, observations, and hypotheses that could be predictive of as-yet undiscovered observations.

"Proven" supposes a conclusion, of which we never really get with a theory.


Google.

40 years ago, when I was coding, everything there was to know fit in one book: the complete C64 ROM listing, explained.


It's not what you need to learn - you must unlearn all those fish that well-meaning ignorants handed you in the past!



