I love the "understand how it works before you use it" angle; this is how I was taught in school and it's how I practice. I had to build (simulated) CPUs and RAM out of transistors, an assembler, and eventually a compiler to run programs on my simulated computer. I have great respect for the machine (and also a deep hatred), but mentioning von Neumann to a junior engineer these days will get you written up by HR (hostile work environment).

Unfortunately, today's new developers are die-hard black boxers. They were taught that software is built by integrating Twilio with Mailchimp and deploying to Heroku. Furthermore, today's new managers are black boxers as well, both toward the software their teams produce and toward the teams themselves.




> today's new developers are die-hard black boxers

How is anything else even possible? I spent eight years doing CPU development and even at the time I wouldn't say that I understood how CPUs worked. No one understands how an entire desktop or server CPU works today. They're too complicated. Back in grad school, I had a fairly good solid state & device physics background, stronger than most EEs. Ten years later, if I now want to understand how a modern transistor works, I expect it would take me 1-2 months to get a good high-level picture. And that's just the digital stuff -- I never had a good picture of how analog circuits work, and understanding that would take years.

In terms of complexity, the CPU is much less complex than the compiler or OS I use. For that matter, it's less complex than the browser I'm using to type this comment. If you don't believe that, look at the number of person years that goes into developing a CPU vs. the amount that's gone into any of those pieces of software. Relatively speaking, CPUs are simple. And they're still so complex that no human can understand them.

There's no way to understand the entire stack below you unless, maybe, you're doing materials science work and don't have anything else below you in the stack. And even then, "understand" is sort of an iffy term to use. When I was up to date in the area, there were a lot of open questions about phenomena that worked but had no proven "why". While I haven't looked into it, it's a pretty safe bet that the total number of such mysteries has increased.

This isn't to say that people shouldn't develop some understanding of the building blocks they're using. Rather, understanding is a difference in degree, not a difference in kind. There's no such thing as understanding everything you use. There's just, at the margin, spending a bit more time on understanding components vs. spending a bit more time understanding how to use components.


Well, it's really a balance. Looking at the CPU, these days the x86 instruction set is a total illusion--the CPU decodes those instructions so that a totally different underlying architecture can act on them. The block diagram for a modern processor is astounding.

Back when I was working on Motorola 68K processors and creating hardware drivers, learning to work with assembly helped me tremendously with basic principles like pointers/registers, stack/heap, memory-mapped IO, and so forth. How the processor worked inside wasn't really the point; working with assembly made me a better C programmer.
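
The memory-mapped IO part, for anyone who hasn't seen it, looks roughly like this in C -- the device, addresses, and status bit here are all made up for illustration:

    /* Sketch of memory-mapped IO on a hypothetical 68K-style board.
       The "registers" are just addresses; volatile keeps the compiler
       from optimizing away or reordering the accesses. */
    #include <stdint.h>

    #define UART_BASE   0x00FF0000u                        /* made-up base address */
    #define UART_STATUS (*(volatile uint8_t *)(UART_BASE + 0x0))
    #define UART_TXDATA (*(volatile uint8_t *)(UART_BASE + 0x2))
    #define TX_READY    0x01u                              /* made-up status bit */

    static void uart_putc(char c)
    {
        while (!(UART_STATUS & TX_READY))
            ;                          /* spin until the device says it's ready */
        UART_TXDATA = (uint8_t)c;      /* the plain store is the IO operation */
    }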

The point isn't so much that you have to keep peeling back the layers of the onion so far that you can understand how the silicon is laid out, you just need to peel back a couple more layers than you're used to. If you're a Ruby programmer, for example, write a C extension. It'll be enlightening.
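
If you've never written one, the skeleton really is just a few lines of C against ruby.h -- the module and function names below (Peek, len) are placeholders I picked for the sketch:

    /* peek.c -- minimal Ruby C extension sketch; names are arbitrary */
    #include "ruby.h"

    /* Arguments arrive as VALUEs; you unwrap them, do plain C work,
       and hand a VALUE back to the VM. */
    static VALUE peek_len(VALUE self, VALUE str)
    {
        Check_Type(str, T_STRING);
        return LONG2NUM(RSTRING_LEN(str));
    }

    /* Ruby calls Init_<name> when it loads the compiled extension. */
    void Init_peek(void)
    {
        VALUE mod = rb_define_module("Peek");
        rb_define_module_function(mod, "len", peek_len, 1);
    }

Build it with a two-line extconf.rb (require 'mkmf'; create_makefile('peek')) and the boundary between a Ruby object and the bytes underneath it becomes something you've touched rather than something you've read about.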


> The point isn't so much that you have to keep peeling back the layers of the onion so far that you can understand how the silicon is laid out, you just need to peel back a couple more layers than you're used to. If you're a Ruby programmer, for example, write a C extension. It'll be enlightening.

Thanks, this is just right. You don't have to meet the turtle on the bottom, but you should be acquainted with a few turtles down.


I'm not sure running on AWS instead of on your own server is actually going down a turtle. I run pretty much everything myself, but that mostly means dealing with complexity that isn't very enlightening. It might very well be that someone who uses a provider gets a better understanding of infrastructure design.


Running on AWS is going up at least one turtle, possibly several if you're throwing configuration management and deployment tools into the mix.

AWS abstracts out bare-metal through Xen, deployment images, the AWS control panel, its monitoring, and the fact that you can't physically grab and yank the server should you want to.

On bare metal, you've removed most of those layers, though you have the option of putting most of them back in. I find the head for AWS vs. bare-metal admin to be quite different. Being old school, I prefer metal.


I'm just not sure that those layers are actually functional. I see little reason why things couldn't be more like AWS if there were standardized ways to, e.g., configure things. A lot of what one deals with is glue between the turtles and not necessarily another turtle.


Everything is glue between turtles.

It's glue between turtles all the way down.

(A problem now is that the glue is nonstandardised, and you've got vendor-specific glue, much of it with craptacular Web interfaces, at various levels. We used to have this in other parts of the stack, but they eventually got commoditised, and those particular layers went away. If you haven't done it before, crack open a Sendmail.cf file. If you have done it before, I'm truly sorry for the flashbacks.)


Yup, even a basic understanding of CPU caches will go a long way toward helping you work out performance. It's very realistic to see 10-50x speedups with just simple data layout changes.
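
The canonical example is the array-of-structs vs. struct-of-arrays switch: if a hot loop only reads one field, the second layout stops dragging the unused fields through the cache. Field names and types here are just for illustration:

    /* Array of structs: summing x alone still pulls y, z and mass
       through the cache with it. */
    struct particle_aos { float x, y, z, mass; };

    float sum_x_aos(const struct particle_aos *p, long n)
    {
        float s = 0.0f;
        for (long i = 0; i < n; i++)
            s += p[i].x;
        return s;
    }

    /* Struct of arrays: the same loop now walks one dense,
       prefetch-friendly array of floats. */
    struct particles_soa { float *x, *y, *z, *mass; };

    float sum_x_soa(const struct particles_soa *p, long n)
    {
        float s = 0.0f;
        for (long i = 0; i < n; i++)
            s += p->x[i];
        return s;
    }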


There's a danger, too, in having too simple a model of how a CPU works in your head: it doesn't scale to how a modern CPU actually works. I see this a lot with developers who have a very oversimplified mental model of how memory management works, and even more so with networking. Even if you do have an accurate appreciation of how a particular system functions, it can be dangerous to rely too heavily on that behavior - it means your software becomes tied to a particular CPU architecture or network topology - and you don't get to take advantage of advances in the field.

Sometimes it is better to build your software on the assumption that underlying technology is a provider of black boxed services, and that over time the way in which those services are provided might change.


It's even worse: I regularly meet developers in my field (Android) who not only use nothing but black boxes, they refuse to build their own libraries if they're not provided by Google or another first-party provider. Example: people didn't want to implement a FAB (which is essentially a shadowed, colored circle with an icon in it) and demanded that Google provide the code to them O.o

Maybe I've spent too much time hammering things together with C/C++, but this kind of mindset is really foreign to me O.o


I liken this to a student pilot learning to fly in a brand new Cirrus. Very quickly he'll be flying advanced missions, but at the expense of fundamental stick-and-rudder skills.


So... Cirrus is the Ruby on Rails of aircraft?

Quick edit: Most of the dollars I've ever made are thanks to Rails. But I know how a computer works.


Yes. The analogy works all around; even the debates pilots have about the Cirrus versus a simple aircraft resemble the ones developers have.



