Current real-world systems are too complex to simulate accurately in your head. I've learned to assume that I don't really understand a system until I've collected sufficient trace output from the real thing.
Are they? I "run" systems in my head all the time. If I'm familiar with the code base, I'll know roughly where the bug is from that alone. I can tell you which function a bug will probably be in and how it's probably happening, just from having the symptoms described to me.
Hell, even when I'm not familiar, I can guess what's happening based on past experience.
It's some sort of spatial ability, like navigating a map (though I recently found out I'm terrible at visualizing and almost completely lack a 'mind's eye'). I also find I rarely get lost in computer games and very rapidly learn FPS maps.
Maybe it's one of those weird skills you don't even realise some of the population lack.
Personally, I've always thought traces are a massive crutch with too poor a cost-to-benefit ratio; I don't get the point. Far too many extra lines of code for too little gain.
We are literally weeks into Meltdown/Spectre. The code you write runs in a complex programming language you didn't write, which may be running on a VM, which is running on an OS, which is running on a multicore CISC processor with hundreds of publicly known instructions, and which can only be built thanks to modern developments in quantum mechanics.
You run idealized versions of systems in your head.
"You run idealized versions of systems in your head."
Definitely true, but running idealized approximations of systems [1] is also sufficient, at least in my experience, to get a rough idea of where a bug might be the majority of the time.
Certainly. I guess my main objection was to the parent questioning whether modern systems are "too complex to simulate accurately in your head". Effective programming, as you note, requires acknowledging that you are approximating, and understanding the scopes and ramifications of those approximations.
> "You run idealized versions of systems in your head."
Yes, they're idealised versions, but they still have value. The point at which the idealised model doesn't match up with reality is the point at which further investigation is required.
It's an effective way of filtering out the parts of the system that you already have a reasonable grasp on so that you can focus on parts that you don't.
On a more general point, almost all models are idealised approximations. The whole idea with modelling a system is to abstract away the rough edges by encapsulating complex behaviour in discrete groups. By working with these simplified models, you gain a better intuition of the activity found within the system.
In short, the whole purpose of models is to guide intuition.
Maybe I'm really lucky, but in my entire programming career (12 years) I can only remember 2 or 3 bugs that weren't caused by errors in the top-level, most abstract version of the code.
i.e. bugs that were due to compiler/bare-metal/etc. problems. I remember a bug in old-skool ASP that would only surface under some weird code condition, a bug with an IMAP server we made that Outlook used, caused by an int16 being used to store ids (sketched below), and a race condition bug.
I've worked almost entirely in web dev with a mix of enterprise, b2b and b2c systems, some of them having millions of users and some of them being SQL-intensive.
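A minimal sketch of how that kind of int16 id bug can bite, assuming ids are handed out sequentially; the counter and names here are hypothetical, not taken from the actual server:

```cpp
#include <cstdint>
#include <iostream>

int main() {
    // Hypothetical id counter stored in a signed 16-bit integer.
    std::int16_t next_id = 32766;  // assume ~32k ids have already been issued

    for (int i = 0; i < 4; ++i) {
        std::cout << "assigned id: " << next_id << '\n';
        ++next_id;  // past 32767 this wraps to -32768 on typical platforms
                    // (modular conversion, well-defined since C++20)
    }
    // prints 32766, 32767, -32768, -32767: ids silently go negative
}
```

Once the counter wraps, any code that assumes ids only grow (or stay positive) starts misbehaving, which is roughly the failure mode described above.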
Yeah, this is exactly how my brain works, and it has led to my specialty being finding the difficult bugs that have eluded others. I've got a loose-leaf notebook that I use to help, but all that's really in it is rough drawings of circles with labels scrawled next to them, arranged in a shape with maybe some lines between them. All it's for is helping me imagine the system better in my head so I can connect the dots.
It's affected my design philosophy too: I always try to design software such that I could draw it as a series of discrete components with input/output arrows between them (and, of course, with the assumption that you could theoretically zoom in to a component to see what's inside of it).
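A minimal sketch of that philosophy, under the assumption that a "component" is just a box with a typed input and a typed output that you can wire to other boxes (all names here are illustrative, not from the commenter's code):

```cpp
#include <functional>
#include <iostream>
#include <string>

// A "box" with one typed input arrow and one typed output arrow.
template <typename In, typename Out>
using Component = std::function<Out(const In&)>;

// Drawing an arrow from one box to the next is just composition.
template <typename A, typename B, typename C>
Component<A, C> connect(Component<A, B> first, Component<B, C> second) {
    return [first, second](const A& input) { return second(first(input)); };
}

int main() {
    Component<std::string, int> parse  = [](const std::string& s) { return std::stoi(s); };
    Component<int, std::string> render = [](const int& n) { return "value: " + std::to_string(n); };

    auto pipeline = connect<std::string, int, std::string>(parse, render);
    std::cout << pipeline("42") << '\n';  // prints "value: 42"
}
```

Zooming in on a component then just means opening up the function and drawing the same kind of diagram one level down.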
I work this way to a large extent. I have pretty good recall of the whole codebase, sufficient to step through in my head and spot actual bugs while AFK. I really dislike it when people make assertions about what others supposedly cannot do.
I believe this is called "being mechanically minded". If you understand the physical rules and the components, you understand the inner workings. Good mechanics have this. Bad mechanics don't. Sadly being an auto mechanic pays little, but the skills are obvious in good programmers too.
That is still thinking within the boundaries of programming logic. The point that Ted Nelson makes is that you should never stop thinking beyond the immediacy of that, and always remember what the code is ultimately hoping to achieve in the end:
> "Thinking if I hit that key what should happen if I hit that key..."
The answer to what should happen is not rooted in thinking about the code. Programming is taking meaningful thoughts and turning them into thoughtless instructions that can be executed by a machine. And that is hard!
We take it for granted now, but something as basic as the idea to use on- and off-switches (which is what bits are) as representations of numbers, let alone strings of letters, is a ridiculous leap of thought. Then we built more complex software on top of that through more, similarly ridiculous leaps of thought. None of those leaps came directly from the code itself.
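A tiny worked example of that leap, just to make it concrete: the same row of on/off switches is "the number 65" or "the letter A" only because we agreed on a convention (ASCII here):

```cpp
#include <bitset>
#include <iostream>

int main() {
    char letter = 'A';                // the letter, to us
    int  number = letter;             // the same bits read as a number: 65 in ASCII
    std::bitset<8> switches(letter);  // the same bits shown as on/off switches

    std::cout << switches << " = " << number << " = '" << letter << "'\n";
    // prints: 01000001 = 65 = 'A'
}
```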
This implies you are trying to simulate fully. Just as most programmers don't have to simulate the rather complex circuitry in an adder, you probably don't have to simulate the complexities of most of your system. Or, when you do, you can take the simple parts for granted.
Similar to plotting a trip to the grocery store. You don't think of the effort your body is making to stay balanced. Nor of reading labels.
Yet you probably have a decent idea how long the trip will take.
Maybe "simulate" is the wrong word, then. Especially accurate simulation. Plotting to shop groceries is not a simulation. You create a model of the problem and roughly base your solution on that. The model is based on a very large number of assumptions and a great deal of ignorance of the entirety of the process it represents.
You can relatively accurately predict that when you press "OK", the dialog will close. You can maybe assume and envision a future where calling "closeDialogBox(box)" will close the dialog box. Maybe the lines in it that simply call "box->close()" and return "STATUS_OK" can be assumed to do what you think they will do. The underlying machine code, too. The CPU, sure. (A sketch of those layers follows below.)
In the end the whole system is beyond your control and is subject to random input like cosmic rays flipping bits, power outages, and leaking caps. Before that you have the operating system, the compiler, and the overall codebase, which are perhaps more likely sources of mental-model mismatches. I certainly wouldn't draw the line of "accurate simulation" at the source code base.
IMO one of our strengths as humans is that we can easily create these simplified models based on rough probabilities founded on previous experience, "gut feelings", and weighed risks, and go to great lengths without accurate simulation.
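To flesh out that example (the names closeDialogBox, box->close(), and STATUS_OK come from the comment above; the rest is a hypothetical sketch), each layer is something you merely assume does what its name promises:

```cpp
#include <iostream>

enum Status { STATUS_OK, STATUS_ERROR };

struct DialogBox {
    bool open = true;
    // You assume close() works without simulating the toolkit, OS, driver,
    // or hardware it ultimately relies on.
    void close() { open = false; }
};

Status closeDialogBox(DialogBox* box) {
    box->close();      // assumed to do what you think it does
    return STATUS_OK;  // assumed to reach the caller intact
}

int main() {
    DialogBox box;
    if (closeDialogBox(&box) == STATUS_OK && !box.open)
        std::cout << "dialog closed, as the mental model predicted\n";
}
```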
I'm sympathetic to your point, but I suggest you have the wrong take here. It might have been better if people referred to modeling the system, but that ship sailed, as it were.
That is, we are primarily discussing how people use that phrase in informal communication. And you'd be surprised just how far many can take the simulations in their head.
It's not like "simulation" is commonly used to describe one's thoughts. So your idea is that we're just arguing about some sort of hypothetical colloquial sense of the word, even though it's obviously used here in the context of comparing humans to computers?