Should programmers learn machine code?
45 points by RiderOfGiraffes on Sept 5, 2009 | 62 comments
I saw my first computer in 1978. It was a Tandy TRS-80, and it was running what today we'd call a slide show. It was, however, entirely in text and very, very simple graphics.

It had 16KB of RAM, and permanent storage was sound on a fairly standard external cassette tape drive. I can still draw the waveform of a bit, and the start/stop patterns. You could program it in a simple BASIC that had limited variables, and no subroutine parameters.

No parameters in subroutines!

But I was hooked. I wrote two BASIC programs, then ran out of patience. It had to run faster! A 1.77 MHz Z80 running interpreted BASIC wasn't fast enough.

So I smashed the stack. I won't go into the details, but I bootstrapped into assembler (the BASIC did have PEEK and POKE) and wrote a compiler. The BASIC version, two copies of the machine code version, and the variables, all fitted into the 16KB memory. Two copies because it wasn't relocatable, so I compiled to copy 1, used that to compile to copy 2, then machine-code saved copy 2 to cassette.

It was cool writing different variants of sort, then sorting the data that was in the memory mapped screen. You could see the heavier items fall to the bottom in quicksort, or migrate one at a time to the top in a bubble sort. It was pretty raw, and enormous fun.

Why do I tell you this?

Today at work I took a subroutine that was taking 171ms per call and making the GUI run as slow as a snail on valium, and rewrote it to take less than 10ms per call. The methods I used were straight out of the techniques I learned in those first 3 months of machine code programming (not assembler - machine code), and my colleagues couldn't really understand it.

There followed an impromptu "training session" in which I explained how CPU cores work, how machine code can be mapped to them, how assembly code matches machine code, and the way the instructions actually match the hardware, in some sense.

A lot of what I talked about is now outdated; it is, after all, 30 years old. But the techniques are still applicable on occasion. They were amazed that they didn't know this stuff, and intrigued as to how I do.

It's just what I grew up with.

Should programmers in C++, Haskell, Lisp, Python, etc, know these things? Or are they really mostly irrelevant, becoming more so, and soon to be known only by true specialists and dinosaurs like me?




The simple answer: Yes. The longer answer: Yes, absolutely.

As for the why of it:

No other programming language will give you the feeling that it is you who controls the computer the way assembly does, unless you're one of the lucky few who get to program microcode.

Assembler is what it all eventually boils down to; it gives you a first-hand view of the von Neumann bottleneck and of how small the letterbox is through which the CPU views the memory.

After you've had a good long hard look at that, your programs will never be the same.


Also, an understanding of machine language is key to understanding the underpinnings of operating systems and the implementation of programming languages.

If you're missing any of those three (machine language, operating systems, implementation of programming languages), then you've got an important hole in your knowledge as a programmer.

Much of why computers and their software are put together the way they are clicks into place if you have an integrated view of those three.


I think the difficulty lies in the multiple meanings of 'programmer.'

My interest in programming stems from wanting to use computers to solve problems, not from a theoretical interest in how they work. Most of my colleagues in the UW engineering department I was in learned just enough Matlab to get the homework done. I learned enough to program a solution to the homework. I originally learned QBasic to program a Mandelbrot-set generator on a 286, because I wanted to generate the Mandelbrot set.

Should a web developer learn machine code? I dunno. You can get pretty far in web development with a good head, some Railscasts and tutorials, and a bit of practice. Maybe the question is "When should a web developer learn machine code?" I think the answer to that question, if he/she is a working web developer, is, "When he/she has to, when it comes up."

For myself, I look at these--let's be frank--esoteric things like machine code, higher-order whatever, compiler-design, and so on and think, "That may be interesting, but I've got work to do and fun to have. I'll wait until I have to know it before I learn it." I've already had to learn about lambdas. Very fun stuff, but also very useful stuff.

Now, should the macho programmer learn machine code? Of course. But the macho programmer should also be manufacturing his own computer chips as well, to better fit with his pure and obviously correct programming methods.


I'd agree that for web development, assembly is almost certainly something a typical programmer won't have to know. I'd argue that C is a good idea for various reasons, but that argument's been had already.

That said, "learn it when you need it" is problematic because of the case where you don't know what you don't know. In my own experience, I've learned lots of things and then thought back to previous problems and had little epiphanies on better solutions. People that don't experience that probably aren't improving very much.

Perhaps it would be useful if various communities (like webdev, etc) put together curriculum guides for "things you don't know you don't know, but should know." Of course, what should be on the list would be a source of bitter contention I'm sure.


While I wouldn't agree that programmers need to learn machine code, I would definitely recommend that they at least learn some ASM and poke around in a few programs with a decent disassembler. Seeing and understanding how a program actually interacts with the hardware at runtime was one of the biggest moments in my life as a programmer. The first time I successfully "cracked" a shareware program, my understanding of how computers worked increased a hundredfold.

Before multi-core processors became common, I had a major argument with another senior developer at work who completely believed that simply adding threads would speed up execution of his functions (that's what threads do right?!).

Edit: The point is, programmers don't know nearly enough about how their programs interact with the hardware they run on.


Really, not many people even think about what's in any level lower than the one they're working on.

Case in point: I once had a C++ program in which replacing malloc() with a wrapper that rounded up all allocations over 1kb to multiples of 1kb actually increased execution speed by 20%. Those may not have been the exact numbers, but that's the general idea. Guess why.

. . .

The program was allocating many large objects with small variations in size. The system default malloc/free implementation would work efficiently for small objects, but would store large allocations in a linked list, bucketed by size. In the worst-case (all objects are large and of different sizes), freeing n objects would take O(n^2) time. This program came very close to that ideal. The rounding prevented that worst case by using fewer buckets.

Of course, that version never saw production use. It was just used to demonstrate the nature of this particular performance problem.
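
In case it helps make the trick concrete, here's a rough sketch of what such a wrapper could look like (my own guess, not the code from the story; a real version would also have to cover calloc and realloc):

  #include <stdlib.h>

  /* Hypothetical wrapper: requests over 1 KB get rounded up to the next
   * multiple of 1 KB, so large allocations fall into far fewer distinct
   * size classes and the allocator's per-size bookkeeping shrinks. */
  void *rounded_malloc(size_t size)
  {
      const size_t granule = 1024;
      if (size > granule)
          size = (size + granule - 1) & ~(granule - 1);
      return malloc(size);
  }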


Feel like I should mention: For people running a UNIX OS with objdump, take a program written in C or C++, compile it with the compiler's -g flag, then do:

  objdump -S your_program
This will disassemble it and interleave your disassembly with original source code. It's a great way to see what your high-level C gets compiled to, a great way to see how compilers work and a great way to scratch a low-level itch.
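
For a concrete starting point (a toy file of my own, not from the comment above), something as small as this is enough to see the interleaving:

  /* square.c - try:
   *
   *   gcc -g -O1 square.c -o square
   *   objdump -S square
   *
   * The -S output shows each C line followed by the instructions
   * the compiler generated for it. */
  int square(int x)
  {
      return x * x;
  }

  int main(void)
  {
      return square(7);
  }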


This was how I built my early reverse-engineering skills. I started off compiling my own C to x86 assembly and seeing what it mapped to, then I had some friends write little crypto routines in C and compile them, throwing the binaries over to me to hand-decompile. Can't recommend this enough.

I also have to recommend the book Reversing by Eldad Eilam if you're interested in low-level hacking from a reversing perspective.


There is another old series by Leventhal about learning assembler; he basically wrote the same book for many different processors.

And the x86 series reference manuals from Intel, if that's the CPU you're working with (a fair assumption).


I second the Intel reference manuals - they've been extremely useful on multiple occasions and have been great at helping me understand the x86 processors better.


Assembly language yes, machine coding as in flipping switches on the front panel is probably going too far. I suppose you don't really need to know how a CPU works to be a journeyman programmer, but I think to be truly innovative you need to know what lies at the heart of it all. If you know instructions, registers, L1 and L2 cache then you can muse about interesting and innovative problems. How can you think about really fast algorithms if you don't know about cache latency? How can you think about distributed functionality in a robot, if you don't know that everything goes through a single pipe in a processor? (OK, a small number of pipes.) I know some kids who parlayed their knowledge of the difference between the latency of instructions in cache and those in memory into a significant company.

Some friends and I are going to start an old programmers' home, like the old sailors' homes. Obviously we should include you as a charter member.


I know some kids who parlayed their knowledge of the difference between the latency of instructions in cache and those in memory into a significant company.

That's interesting. Who? What? How?


I learned 6502 in the 80s and did a lot of x86 in the mid 90s and.. I'd say "not anymore." Sure, it's useful to get a feel for it and have some experience with it, but I'm assuming "learn" in this case means a higher level of competency than just a broad understanding of it.

C is about as low as you need to know nowadays. x86 was critical to know in the 90s due to performance issues, but since the majority of software nowadays is running on some sort of VM or pulling heavily on frameworks (CLR, JVM, OS X's frameworks), it's mostly pointless to use x86 in all but the most unusual of projects.

The only reason I'd still advise a basic familiarity with assembler is to dispel the notion that C is as low as it gets - you can get totally different results from different C compilers (and even incompatible results in some nasty cases) and it's worth at least knowing there's something underneath there that you can take a basic look at.


Yesterday I was debugging a device driver for an LCD controller using an oscilloscope. Low-level programming might not be the hot tech these days, but there are tens of thousands of people who are good at it, and many were born well after you started with computers. I would maintain that learning it is not that hard for an average, good, working programmer.


The Story of Mel... An inspirational story of what true hacking is about.

http://www.pbm.com/~lindahl/mel.html


Also good is Ulrich Drepper's "What every programmer should know about memory". It talks about what happens at a very low level from the CPU registers and L1-3 caches to system memory and disk, and how the OS handles it all.

8-page series on lwn, with comments: http://lwn.net/Articles/250967/ [pdf] http://people.redhat.com/drepper/cpumemory.pdf


That one should be required reading for any programmer.


Agreed. There are a few groups around who still keep this kind of spirit alive. Seeing what the guys in the demoscene can do with 64k is truly amazing, but what they do with 4k can blow your mind.


Or 64 bytes: http://canonical.org/~kragen/demo/klappquadrat.html (I wrote the explanation but not the code!)


Opera gives a Fraud Warning on that URL.


You shouldn't believe everything your browser tells you ;)


> Safari can’t connect to the server.

I don't believe him.


that's down for me!


Yeah, the server is down at the moment. Check out a cached or archived version, or wait until tomorrow.


I don't have this low-level background, but am increasingly coming to the conclusion that it matters a lot, and that the vision of high-level abstractions entirely divorced from hardware is a pipe dream. You can get away with it on easy problems but not hard ones.

Alan Kay says that people who are serious about software have to be serious about hardware.


I think it's good to know. However I use it very rarely.


I think if you really know it then you'll be using it without even consciously thinking about it. You'll make sure that you don't allocate storage units across page boundaries, you'll make sure you access your arrays in the right order, and so on.

The cumulative effects of that are huge, and it's more of a method of working than something where you take the assembler manual out. Just like an engineer is not going to go and re-learn the ins and outs of materials science when building a bridge, it is the solid background in materials science that guides him in his choices.
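
A tiny illustration of the "right order" point (my own example, not from the comment): iterating a C array in row-major order walks memory sequentially and stays cache-friendly, while swapping the loops strides across it.

  #include <stddef.h>

  #define N 1024

  /* Row-major traversal: consecutive accesses touch adjacent memory,
   * so each cache line fetched is fully used before moving on. */
  long sum_rows_first(const int a[N][N])
  {
      long sum = 0;
      for (size_t i = 0; i < N; i++)
          for (size_t j = 0; j < N; j++)
              sum += a[i][j];
      return sum;
  }

  /* Column-major traversal of the same data: each access jumps
   * N * sizeof(int) bytes ahead, so it keeps missing the cache
   * and runs far slower. */
  long sum_cols_first(const int a[N][N])
  {
      long sum = 0;
      for (size_t j = 0; j < N; j++)
          for (size_t i = 0; i < N; i++)
              sum += a[i][j];
      return sum;
  }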


Well put. It's this intuition that I am feeling the lack of. It's wrong to imagine that one can create arbitrary abstractions for free in high-level languages and let compilers and fast hardware take care of the rest. That approach takes you so far, but when you run up against its limits you are really in trouble. That is most true when what you're working on is computationally hard somehow, but it also manifests itself in the reality that most software is simply slow (a handy example being the browser I'm writing this in, FF 3.5).

This is changing my view of programming languages. I'd like an environment where abstractions are built up in transparent layers that don't hide the underlying architecture, but still give you powerful operators to compose things. The standard options mostly require you to pick one or the other.

Put differently: rather than thinking of expressive power as being built up in layers that move the highest level ever further away from the lowest, is there a way to keep high and low relatively close together while still providing lots of expressive power? The closest thing to this that I know of are Arthur Whitney's languages. The latter are notoriously cryptic, but unless I'm mistaken they do provide both of these things: an execution model that is close to the hardware and a very high-level, logically scalable language.


Here is an idea:

A graphical environment where you can see the program as it is running, with the memory map displayed in an arcade game style window.

I can imagine that such a tool would be a great eye opener.

Show hotspots in real time, move your cursor into the memory map to get a 'hint' displaying the label of the memory location at that spot.

An advanced version of it could show you the two layers of caching next to the main memory and how data is flowing in and out of the caches.

I'm pretty sure that something like that is doable.


It's a great idea. I love it. However, there are only two ways I can see something like that getting built: the large way or the small way. The large way would be as a tool designed to work with one of the major languages/environments. It would have to deal with all the bloatedness that implies, plus the fact that those environments were never designed to provide this sort of transparency or hackability. It would probably be a large corporate project that took years and cost millions; this probably won't happen. The small way would be because one or more hackers decided to build something new that stayed close to the hardware and was designed for this sort of thing. That's much more doable, but puts you well out of the mainstream for a long time.

I wouldn't be surprised if people eventually come around to this point of view. It will take a while, though. The current thought is that parallelism will save everything. It could take a decade before people figure out that that isn't going to happen. Maybe then there will be a renaissance of going back and rebuilding things from the ground up. But I'm rambling.

"People who are serious about software have to be serious about hardware"... that Alan Kay has a gift for putting his finger on things.


I can see a way to build it fairly simply.

The object information is readily accessible when you compile your program with -pg, the memory map is accessible through /proc, it shouldn't be too hard to couple the one to the other and make a nice display. If you are clever about it you might even be able to fish out what memory is filled with what type of data, and colour the map accordingly.

No millions required, unix is pretty clean when it comes to stuff like this.
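
As a trivial proof of concept along those lines (my own sketch, Linux-specific), a process can already read its own memory map out of /proc; a visualiser would parse and draw these ranges rather than just printing them:

  #include <stdio.h>

  int main(void)
  {
      /* /proc/self/maps lists each mapped region: address range,
       * permissions, offset and the backing file (if any). */
      FILE *maps = fopen("/proc/self/maps", "r");
      if (!maps) {
          perror("fopen");
          return 1;
      }
      char line[512];
      while (fgets(line, sizeof line, maps))
          fputs(line, stdout);
      fclose(maps);
      return 0;
  }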

Some joker had written a shell replacement based on 'doom' where you could literally shoot the processes. You had to hunt them down first of course...


I see now that you're talking about the OS level where I was thinking more at the language level. Still cool. Let me know when it's ready :)


An emphatic YES. High level runtimes break, behave oddly, or do things that you wouldn't expect. Understanding how the machine works (and by extension, how the virtual machine works) is key to understanding:

* Why your program behaves the way it does

* Why your performance profile looks the way it does

* How to debug the inevitable problems in your high-level tools

I highly recommend Write Great Code: Understanding the Machine:

http://www.amazon.com/Write-Great-Code-Understanding-Machine...


It's worth knowing the fundamentals, even if you don't use them much. The law of leaky abstractions (http://www.joelonsoftware.com/articles/LeakyAbstractions.htm...) means that you will occasionally bump into a situation where knowing how the underlying layer works will mean you can solve a problem that someone else can't.


RiderOfGiraffes: What were the techniques you applied, and why were your colleagues mystified? I assume you didn't rewrite the routine in assembly. It seems that you have some interesting evidence about the question you're asking, but you left the meaty part out of your story!


Oh, gosh, where to begin ...

Actually, most of what I did are standard techniques that assembly programmers know, C programmers have seen, and web programmers don't need to bother with.

I can't actually post code here because it's commercial in confidence, yada yada yada, but I will mention the techniques.

Firstly, they were processing an entire image, then transforming it, then getting the part they wanted. We transformed the program flow to be demand-driven, which meant that large amounts of the image weren't processed at all because they weren't needed.

Then we analysed the data and deduced that by taking logs we only needed a dynamic range of 6 bits. We therefore took the entire image into a byte map with the top two bits of each byte set to 0.

Processing 4 bytes at a time in words saved enormous amounts of memory access, and avoided register spills. Unrolling loops by the right amount meant that we stayed in cache, but had less overhead.
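
To make the packing idea concrete, here's a rough sketch of the general technique (my own illustration, not the actual code): with only 6 significant bits per byte, four samples can be summed per 32-bit word without any per-byte result carrying into its neighbour.

  #include <stdint.h>

  /* Each byte of 'a' and 'b' holds a 6-bit value (<= 63), so every per-byte
   * sum is <= 126 and never carries across a byte boundary: one 32-bit add
   * processes four samples at once. */
  static uint32_t add_packed4(uint32_t a, uint32_t b)
  {
      return a + b;
  }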

Once the processing had been re-worked, we looked at the assembly produced, and found that most of the time the processing could be separated into a double indirection.

Finally, by further reducing the resolution of the system, computing an approximation, then reworking those parts of the data where the errors were too large we managed to halve the work being done.

It was a good two days of analysis, and the code shrank by a factor of 2, but it's now tough to understand. We have documented the results. It's embedded work, and yes, we finally went to assembly to get the last 30% of performance.

Every part of it is standard in its field. I think some of my programmers hadn't created routines to work byte-wise eight bytes at a time in a 64 bit word, and others hadn't done the transform via logs to throw away unnecessary resolution. Another had never analysed unrolling a loop as compared with staying in cache. We used tricks such as (x & -x) to find the lowest set bit of a number, although in packed 32-bit space that becomes something like (x & (~x + 0x01010101)).
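
For anyone who hasn't met the first of those tricks, here's the basic form (my illustration): in two's complement, -x equals ~x + 1, so x & -x keeps only the lowest set bit; the 0x01010101 variant is the same idea applied per byte of a packed word, though carries between bytes need care.

  #include <stdint.h>
  #include <stdio.h>

  /* Isolate the lowest set bit: two's complement negation is ~x + 1,
   * and ANDing with the original clears everything above that bit. */
  static uint32_t lowest_set_bit(uint32_t x)
  {
      return x & (~x + 1u);
  }

  int main(void)
  {
      printf("0x%08x\n", lowest_set_bit(0x00a4f0c0u));  /* prints 0x00000040 */
      return 0;
  }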

And so on. A large collection of esoteric tricks, all at once.

Let me finish by saying that programs can't be made to run faster, they can only be made to do less work. An interesting aphorism to bear in mind when writing "optimised" code.


As someone who is trying to learn those sorts of things, thanks for the explanation.

In my experience in trying to learn this stuff, I have to say, I can't say the young people around me are trying to do the same. I think they're of the mindset where they just want to solve their problem, and whatever gets it done with the least hassle is 'good enough.' The truth of the matter is that they're generally right. Computing resources are so expansive that a lot of problems are of small enough n that people ignorant of data structures and algorithms get by just fine. Just using decent DS & algorithms will get people the speed boost they need most of the time in the higher level languages. If you're really in a pickle, you break into C.

When I was working as a web developer, I never saw someone other than myself actually get down to the C level. They just bought more machines and scaled that way. This stuff is fast becoming something of a lost art, but there will always be domains where it is necessary. I happen to think that's an interesting place, but the trend in development is well away from it in most other domains as it climbs up the abstraction ladder.

EDIT: I'll just add that I've seen ignorance of these things (even just at the algorithm level) leave money on the table. I think people worship at the altar of 'cycles are cheap' a bit too much sometimes and discount hardware and maintenance costs a little too readily. It's a balancing act.


> programs can't be made to run faster, they can only be made to do less work.

That's one for the fortune file.


Awesome! The logs trick is one I hadn't seen before. Why were you using 32-bit registers instead of SSE registers? The CPU you're targeting doesn't have SSE? And how did you structure the demand-driven part — by 8×8 tiles, or a bounding box, or what? (I assume you didn't leave in multiple levels of function calls per pixel.)


I can see there's a mistake in my reply - we were working on a 32-bit architecture for this problem. We didn't pack 8 bytes in a 64-bit word, we packed 4 bytes in a 32-bit word. Sorry.

And in short, our target CPUs don't have SSE. I've only been vaguely aware of SSE, although I used to program a Cray on occasion, so am familiar with vector processing. I must see if the SSE instructions can accelerate some of our other work which isn't yet critical, but may become so.

The demand-driven stuff was done by taking 32x32 tiles in the target and reverse transforming each to the source, hence finding out which 32x32 tiles in the source needed to be computed. Flags prevented redundant multiple conversions. The transformation doesn't preserve straight lines, so some care was required on boundaries.
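
To give a rough idea of what that kind of tile flagging can look like in general (hypothetical names and a placeholder inverse_transform, nothing from the actual product):

  #include <stdbool.h>
  #include <stddef.h>

  #define TILE 32

  /* Placeholder for the real (non-linear) reverse transform: maps a pixel
   * position in the target image back to source-image coordinates. */
  void inverse_transform(double tx, double ty, double *sx, double *sy);

  /* For one TILE x TILE target tile, reverse-transform a grid of sample
   * points (not just the corners, since straight lines aren't preserved)
   * and flag every source tile that a sample lands in; only flagged
   * source tiles get computed later. */
  void flag_needed_source_tiles(size_t tile_x, size_t tile_y, bool *needed,
                                size_t src_tiles_x, size_t src_tiles_y)
  {
      for (size_t dy = 0; dy < TILE; dy += 4) {
          for (size_t dx = 0; dx < TILE; dx += 4) {
              double sx, sy;
              inverse_transform((double)(tile_x * TILE + dx),
                                (double)(tile_y * TILE + dy), &sx, &sy);
              if (sx < 0 || sy < 0)
                  continue;
              size_t stx = (size_t)sx / TILE;
              size_t sty = (size_t)sy / TILE;
              if (stx < src_tiles_x && sty < src_tiles_y)
                  needed[sty * src_tiles_x + stx] = true;  /* row-major flags */
          }
      }
  }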


Very interesting, thanks!

From reading a little bit of T90 documentation, SSE looks pretty different from the Cray vector stuff. It's little more than the (x & (~x + 0x01010101)) kind of stuff you're talking about above, except with somewhat larger registers and a wider variety of instructions.


OK, not quite as hard-core as writing my own assembler, but I grew up with ZX-81s and C-64s, and likewise: if you wanted speed, BASIC couldn't do it, and a C compiler couldn't fit in memory (IIRC, games writers cross-compiled any C on PCs).

And it's true: the best optimising compiler in the world cannot work out the intentions you had for your code -- sometimes choosing one type of algorithm will optimise quite differently from your intention -- and if you have even a basic understanding of what's happening on the hardware, you're already ahead.

Funnily enough, in my last job I met someone who had the same sort of experiences and understanding of the actual hardware, and we lamented that no-one was growing up these days understanding the low-level stuff, incl. bus dynamics, interaction between I/O, memory and CPU, etc, etc.

Doubly interesting was in my current job, met someone who understood JVM byte-code as well as assembler, and he claimed it was key to understanding how to get truly great performance from Java.


While I'm not too sure that understanding the CAFEBABE stuff is truly helpful for improving Java performance, I definitely agree it's helpful to know what's going on in there. Bytecode usually decompiles into very neat code, since the mapping between the code and bytecode is very tight, with most optimisations taking place in the JVM.


It's certainly worth understanding what's happening one level down from the level you work at, at least. Sometimes even lower-level effects seep through -- e.g., even if you're writing in something like Haskell it's probably worth having an understanding of memory hierarchy issues. I'm not convinced that "machine code (not assembler -- machine code)" buys you much over assembler, though.

I'd guess that much more software slowness comes from other failings, though. Poorly chosen algorithms that would make for slow performance even if written in machine code and microoptimized by experts. Forgetting that a random disc read takes millions of CPU cycles. That sort of thing.

(Most of the programmers I've worked with have been writing embedded stuff, or close enough to others who are doing so, that they have some exposure to assembler-level thinking. Though I expect those of us who grew up in the age of 8-bit micros could still teach most of them a trick or two.)


Machine code, the binary code that drives the decoding and execution logic in the CPU, is certainly useful to know.

Once you have a thorough understanding of how to program a CPU and familiarity with the instruction set, you'll probably take a look at the output of your assembler to see what it is that it produces.

That and a reference manual will give you a taste for machine code.

As long as you do not use macros it is pretty straightforward to convert machine code back into assembler; macros muddy the waters here because a single assembler macro can generate lots of machine code.

There is value in machine code too, because the instruction set is created in such a way as to facilitate decoding by the simplest set of microcode that achieves a given amount of functionality. Silicon is expensive, so it pays off to design your microcode, and by extension the instruction set, in this way. Machine code opcodes can usually be split up into 'fields'.
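
As one concrete example of those fields (my illustration, using x86): the ModRM byte that follows many x86 opcodes splits into a 2-bit mod field, a 3-bit reg field and a 3-bit r/m field.

  #include <stdint.h>
  #include <stdio.h>

  int main(void)
  {
      /* The two bytes 01 C3 encode "add ebx, eax": opcode 0x01
       * (ADD r/m32, r32) followed by the ModRM byte 0xC3. */
      uint8_t modrm = 0xC3;
      unsigned mod = (modrm >> 6) & 0x3;   /* 3 = register-direct addressing */
      unsigned reg = (modrm >> 3) & 0x7;   /* 0 = EAX, the source */
      unsigned rm  = modrm & 0x7;          /* 3 = EBX, the destination */
      printf("mod=%u reg=%u r/m=%u\n", mod, reg, rm);
      return 0;
  }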

The elegance of microcode is something that has its own beauty, but it may be that that - just like machine language - is an acquired taste.

I realize RiderOfGiraffes wanted to make a distinction between learning assembly (which lots of people do) and learning machine language (which probably fewer people do), but since the one can be trivially translated into the other, machine language is more an exercise in hand assembly and studying the layout of opcodes than anything else; it comes with the territory for anybody who takes assembly seriously.

Unlike for instance "C", which you could be programming for a lifetime without ever looking at the assembly output.

If you used an assembler to translate your assembly program - without macros - into machine language and you then ran the resulting dump using nothing but a reference manual and a pile of paper, you'd effectively be executing machine language. They go hand in hand.

But learning assembly (usually) comes first.

If you want to go one step further you can create your machine language programs without the aid of an assembler. I once knew enough 6502 and 6809 opcodes and operands by heart to do this for programs that were longer than was probably normal. Even today I still remember a couple of them, even though I haven't had any use for any of that lately; I should probably 'garbage collect' ;)

I have found that particular skill to be of less value, because as soon as I could I wrote an assembler to do those pesky branch computations for me.

The main driving factor here was the cost of an assembler: on pocket money it was a lot cheaper to go to the library and look up the opcodes, then go home and program the thing, instead of spending a year's worth of 'income'.

Hand assembling small bits of code (say a subroutine less than 50 bytes long to speed up a piece of code when no assembler was available) got me some pretty scared looks from people I met, though ;)

The benefits of machine language over assembler are there, but the 'return on investment' is probably diminishing at that point. Many people would argue that that already goes for assembly, obviously I'm not one of those.


An understanding of how of your processor works is quite useful for any programmer, but I don't think learning machine code is worth the effort for most people.


Actually, Stanford recently redesigned their computer science curriculum. Intro programming (CS106) does Java and C++, and then the notoriously difficult CS107 does low-level C, assembly, and some machine code.


No, not all of them. I think there can be a tendency to think that the way one learnt to program gives the best grounding: this is a bias. For my money there are different types of programmer, and depending on what sort of project you are doing you might want a different mix. That can mean knowing both your strengths and weaknesses and those of your colleagues.

By far the best attributes of programmers are the desire to constantly improve their code and their experience. However, sometimes a wealth of experience can lock a programmer into a fixed way of thinking. I know quite a number of C++ programmers who just don't seem to get OO, for instance. Others may not approach a problem in the best way.

Your journey seems to imply that new programmers should follow your path to get an understanding of machines. A lot of this technology is not relevant. Multicore systems are fairly modern, and hacking Java bytecode is probably just as good a preparation as hacking a Tandy. In fact many of those old microcomputers were much simpler than even some VM architectures.

It sounds like you may have been hired into a team precisely because of this sort of experience you have. However not every optimization these days is a low level optimization. I note the YouTube architecture and how much of the service was/is written in a scripting language.

It wouldn't take long to find an open-source project where you could easily make use of these skills. Far from you being a dinosaur, there are whole industries that still use this sort of knowledge: just perhaps not on the desktop.


At some point in your career I think it's nice to sit down with Assembly just so you understand what's going on down there. But I don't think a programmer should do that until they have a rock solid understanding of modern day programming.

The reason I say that is because modern languages have abstracted most of the direct interaction away. So a preoccupation with what's happening at the assembly level can be more of a distraction if your language of choice isn't already second nature to you.


I'll show my age here and say that in my CS program, we had (like many) a 'final project' where we assembled a wire-wrap computer from a Motorola 68HC11, some RAM, ROM and a handful of logic gates, then wrote a rudimentary OS in assembly. Even then the 6811 was woefully obsolete. The lessons were timeless.


Your final project is today's automotive hackery. :)

It might be "woefully obsolete", but a derivative of the HC11 is in an engine control unit (ECU) from a 90's-era Mitsubishi model that quite a few folks, myself included, take a hobby interest in as means to making our cars go a little faster than is reasonable or was intended, on the cheap.

Another platform (that I have a much stronger personal interest in) is SH4-based. 256k of ROM (1M on newer versions of the car), 256k of RAM (512k on the newer version), and that's all you get to control a modern fuel-injected, variable-timing four-cylinder engine in real time.

This "old stuff" is still very applicable, depending on the field; the automotive industry is wrapped tightly in a time distortion field. ;)


I believe you got the wrong conclusion from your story. They don't need to know low-level details because you do. Must they also understand payroll, marketing, sales, and floor-mopping just to work as a php programmer?


No, because payroll, marketing, sales and floor-mopping do not help you to become a better programmer in PHP. But knowing some assembly definitely will. Especially if you go so far as to have a look at the paging mechanism and cache structure of the CPUs that you are using.


Understanding how all the components in a computer system work is important to being a good programmer, especially if you're doing work that is performance sensitive. Memorizing a particular instruction set probably isn't important, but a good programmer should understand the basics of processor architecture and performance. If you know that, you can always learn whatever ISA details you need on the fly, in the relatively rare case that you need them.


No, you don't have to know machine code to appreciate parsimony or performance.


Anyone know what intro class in uni would be a cool intro to low level programming?

I'm experienced with LAMP and bits of C++/Java.


Maybe a course like "Computer Architecture" would be what you're looking for. At U of Illinois, they start with logic gates and go from there.


How would one go about learning machine language for, say, x86? I already know a fair bit of assembly.


"... Should programmers in C++, Haskell, Lisp, Python, etc, know these things? ..."

Learning machine code for programmers is what Latin is to reading and writing.


Learning machine code for programmers is what Latin is to people. FTFY.

Knowing machine code isn't an academic endeavor for many programmers, much like knowing Latin isn't an academic endeavor for doctors. Understanding the machine at the lowest level is essential for kernel, compiler, emulator, and embedded development, as well as software reverse-engineering. It may be a good idea for many programmers, but for a subset of us, it's mandatory.


On the other hand, a doctor need not know all of Latin (especially the fiddly bits about case and conjugation), but enough to pick out the roots of words they come across. While I'm sure there is a small subset of doctors who actively study medieval literature for, say, bits of age-old wisdom that do hold a grain of truth, for most practicing physicians, they merely ought to have a grasp that, say, renal refers to kidneys, etc.

Similarly, I don't think programmers ought necessarily be able to compose particularly good machine/assembly code. I'd suspect, however, that having been exposed to it at some point will make one be able to handle a broader range of situations better.


"... Knowing machine code isn't an academic endeavor for many programmers, much like knowing Latin isn't an academic endeavor for doctors* FTFY..."*

Poor choice of phrase: Latin, the language, is a prereq for natural scientists describing phyla. For doctors, Latin is used every day because the physical structure of humans is described in and derived from Latin. This isn't just an academic pursuit.

I understand what you're getting at btw. But programmers who don't understand MIX probably don't know about Knuth. When I'm writing to HN and use the phrase "programmers" and "writers" I'm neither referring to "Accountants programming Excel" nor "Shakespeare as translated by Lolcats".



