Getting started with bare-metal assembly (johv.dk)
422 points by a7b3fa on March 25, 2020 | 181 comments



I think one great way to do this is to get a Commodore 64 emulator (or Atari 2600, etc.) and start writing and learning 6502 assembly. Arguably it's one of the last instruction sets designed with humans writing assembly by hand in mind (rather than compilers for higher-level languages), making it an excellent learning language. You can readily run your code in emulators, and for not too much $$$ you can pick up real hardware from eBay to run it on.

And once you think you’ve run the hardware to its limits, there are plentiful demos and examples around to blow your mind watching what people have been able to do with these machines (often in obscure ways).


Commodore 64, sure, but I wouldn't recommend the Atari 2600 for beginners! I wrote a small game for the 2600 years ago, and it's a tricky little machine. It doesn't even have a video frame buffer, so you have to do things on the fly, depending on which line of the screen is being beamed (think of old-school CRTs).

Indeed, a whole book was written about it: https://en.wikipedia.org/wiki/Racing_the_Beam

The 2600 is fascinating and fun to code for, but for asm newbies I'd recommend the C64 or NES...


Or perhaps the GBA, which is modern enough to have a framebuffer, but old enough that the framebuffer is just sitting there at a known address.


I can additionally recommend the GBA as an interesting fixed target. Lots of folks recommend an Arduino for this, and those are great little machines, but they have two problems: 1. they can be a bit too limited for a lot of potential projects, and 2. because they are hardware project boards, they don't do very much on their own. Figuring out what to hook them up to is half of the fun, but it can be a daunting choice for a beginner, especially someone learning assembly language for the first time.

The Game Boy Advance is a marvelous little platform. It runs an ARM7TDMI, so if you learn its machine language a lot of that knowledge will transfer into the industry. It runs at a brisk 16 MHz, which is fast enough that it can run compiled C code quite comfortably, but slow enough that performance still matters very much once you try to push the system's limits. Even if you program it in C rather than assembly (perhaps ideal for a true beginner), the whole machine is chock full of bare-metal standbys. Most of the graphics and sound hardware is memory mapped, requiring very specific bit patterns written to special addresses to tell the rest of the hardware what to do. Working with graphics requires an understanding of interrupts and synchronization. Finally, being a games system with a screen, buttons, and a speaker built in, there are a lot of projects you could build... but the easiest is to make a game! And what a blast that system is to make games for. Powerful enough to do all sorts of compelling things, but simple enough to program (even as a beginner) that you can easily have backgrounds and sprites onscreen in about a day of effort.
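To make "very specific bit patterns written to special addresses" concrete, here's a rough mode-3 sketch in C. The two addresses are the documented display-control register and VRAM locations; the macro names and the rgb15 helper are just things I've made up for illustration, not any particular library's API:

    /* GBA mode 3: a 240x160, 16-bit-colour framebuffer you draw into directly. */
    #include <stdint.h>

    #define REG_DISPCNT (*(volatile uint16_t *)0x04000000) /* display control register */
    #define VRAM        ((volatile uint16_t *)0x06000000)  /* start of video RAM */

    #define MODE3  0x0003  /* bitmap mode 3 */
    #define BG2_ON 0x0400  /* enable background 2, the bitmap layer */

    static inline uint16_t rgb15(int r, int g, int b) {
        return (uint16_t)(r | (g << 5) | (b << 10));       /* 5 bits per channel */
    }

    int main(void) {
        REG_DISPCNT = MODE3 | BG2_ON;           /* one write tells the video hardware what to do */
        VRAM[80 * 240 + 120] = rgb15(31, 0, 0); /* a red pixel near the middle of the screen */
        for (;;) { }                            /* nothing to return to on bare metal */
    }

Build it with a GBA toolchain (e.g. devkitARM) plus the usual crt0 and linker script, drop the ROM into an emulator, and that's the whole program.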


GBA = Game Boy Advance


I learned assembly on a TRS-80 Color Computer (6809 CPU.) Interactions with the machine were fairly straightforward.


The 6809 was a joy to program in assembly language. I enjoyed some aspects of Z80, and loved the puzzle of getting things done with 1802 and 6502, but 6809 was, for me, the sweet spot.


Two stack pointers: one for call/return, the other for push/pop data.


I've always been skeptical of using retro machines to learn low-level programming.

While the processors are simple, making non-trivial programs is hard, because the machines as a whole have a lot of limitations, making programming very intricate, compared to more modern systems (say, 16-bit 80x86, but I guess even Amiga and so on).

If the target is challenge for the sake of challenge, then nothing makes those machines special; I mean, one can code directly in machine code if that's the intention :-)


I had to teach a class recently that required introducing students to assembler. I designed an extremely RISC architecture (we’re talking mov into pc for control flow, and r31 is “suggested” for use as a stack pointer) but gave it just enough that they could write more advanced instructions as assembler macros, which they designed themselves. I think it worked out well!


One homework problem I had in a computer architecture class circa the late 90s was, here's a hypothetical computer, now implement this instruction using the given microcode definition. I really enjoyed it because I was able to do it in much fewer micro-ops than the instructor expected.


In the early 80s, my Computer Organization class dedicated the entire semester to building a simulated 4-bit computer, with I/O channels (disk, screen, keyboard, printer), a stripped-down BASIC interpreter, and a stripped-down compiler for a made-up language.

We had to simulate everything, and the final project was to take the professor's BASIC program and execute it correctly, then compile his other program and execute that correctly as well.

That class was the absolute best thing I ever did; I learned so much in it. It was the only class I went to every day and took seriously, and I got the only A in the class.

I had started programming back in the mid-70s, so by the time I got to college in '82 I had a bunch of side jobs writing code. I didn't focus too much on my college career; I was working my side programming gigs and tutoring other students. That class got my attention and I loved it.


Finally an application for MIX!


Well, kinda. The architecture is minimal but I had a strong focus on orthogonality, a linear address space, and clean instruction decoding; so much so that I named it REGULAR ;) The ISA (https://github.com/regular-vm/specification) was specifically selected so that traditional control flow could be created with the use of only one temporary register (some take a bit of work; conditional branches decompose to branch islands for example).


Yeah I was being tongue in cheek. Your approach sounds great for encouraging people to think about and find solutions to specific issues.

MIX was, in theory, really supposed to be the opposite: so universal/generic that you'd ignore the language and focus on the lesson. Similar to the motivation for using Scheme in SICP. Obviously times have changed :-).

Sort of the difference between putting on eyeglasses to see the world better (MIX) or putting on eyeglasses to learn about how lenses work.


Have you seen MMIX? It's my favorite RISC architecture (unlike MIX, which isn't RISC)!


If you end up doing anything embedded you’ll likely be working with machines that also have very limited capabilities (you may be debugging by toggling a few pins and looking at the results on an oscilloscope), so I’d argue that these old machines can provide a lot of good practice for getting comfortable with thinking this way.


I think the opposite is true because those machines were built around assembly programming. The hardware was essentially the software development framework.

Check out this assembly programming series; it takes less assembly code on the Amiga to get something on screen or to read mouse/keyboard input than it does with high-level languages and APIs/frameworks today:

https://www.youtube.com/watch?v=p83QUZ1-P10&list=PLc3ltHgmii...


A couple of decades ago there was a demo tutorial called "Asphyxia demo trainer"; it explained many effects and even basic 3D, both in assembler and, in a revised version, in C.

> I think the opposite is true because those machines were built around assembly programming.

This isn't true. It was extremely simple to get something working based on that tutorial, and to move to something more complex.

Doing that on an 80s 65xx-based machine would have been significantly more difficult, because that generation had significant limits in the coprocessors (AFAIK, the C64 could play music in games, but it was a workaround; it wasn't designed to do so).

Surely the Amiga is easier, as the coprocessors were significantly more capable, but the parent based the discussion on C64 development.


> I've always been skeptical of using retro machines to learn low-level programming.

> While the processors are simple, making non-trivial programs is hard, because the machines as a whole have a lot of limitations, making programming very intricate, compared to more modern systems (say, 16-bit 80x86, but I guess even Amiga and so on).

Programming in those times was an art. Now some people have 32 GB of RAM and they are not able to use it efficiently.

> If the target is challenge for the sake of challenge, then nothing makes those machines special; I mean, one can code directly in machine code if that's the intention :-)

I've seen OSes that, at 20 MHz and with 4 MB of RAM, did things that Windows, Linux, or macOS cannot do today with 1000 times the resources. It is really a shame.


In the late 90s, I worked with a small team, about four of us, and ported a DSL soft modem from Windows NT running on a 450 MHz Pentium II to VxWorks running on a 120 MHz ARM SA-110. The original software and hardware team that developed the Windows drivers claimed that it would be impossible to meet the processing requirements on that platform. In the end, under worst-case line conditions (R-S correction thrashing like mad) we could still get it to run with the clock turned down to 60 MHz.

Because that was all the processor was doing. No virtual memory, no disk, no IO, no graphics. Just feeding the DSP data on one side and NAT routing on the other.

Under the same line conditions, the PII running Windows was useless at 450 MHz. The mouse would barely respond, the keyboard lagged, the screen wouldn't update.

Like you said, depending on what you are trying to achieve, you can perform near miracles on confined hardware if you have confined demands.


Which is why game developers love consoles so much.


Smaller world, smaller expectations. I know people who started with Amigas and they went straight to C, no assembly. I started on the 6502, skipped 16-bit, and went to ARM2. Doubt I would have learned assembly if not for the 6502, and the need to use it. The simpler the machine, the better. Sure it might be faster to get some quick wins if the computer has a blitter, but if the goal is to learn low-level programming, then writing the blitter yourself is probably the way to start. The more extra bits there are on the system, the harder and less interesting it is to program for at a low level. Modern OSs are abstracted for a reason.


Smaller expectations, smaller programs, so you don't learn how to structure larger programs.

It's the same as learning on GW-BASIC or equivalent: You learn some bad habits due to the environment which you must unlearn the moment you move on to a real system.


Yes. That is how people learn. Or is teaching children arithmetic pointless because they won't learn how to simplify algebraic expressions? Learn the basics, then add more complexity.

The 6502 was a real system. Conversely, I'd say Visual Basic is a terrible environment and I have a few friends that made millions using it, so, not sure who gets to be the judge.


> And once you think you’ve run the hardware to its limits, there are plentiful demos and examples around to blow your mind watching what people have been able to do with these machines (often in obscure ways).

What blew my mind was using a modern C++ compiler to build optimized software for that old hardware. Here's an example with the Commodore 64 [0].

[0]: https://www.youtube.com/watch?v=zBkNBP00wJE


Anyone using Turbo C++ on MS-DOS already had that experience. :)

Hence why microcontrollers like the ESP32 are much more powerful than many think.


Shameless plug: https://github.com/akkartik/mu#readme uses an instruction set designed in the 80s: x86.


I learned MIPS in a college class and quite enjoyed it. How does MIPS compare to C64 asm? I don't know any other asm besides MIPS (unless Zachtronics games count), but I have been playing around with the idea of writing a simple asm OS. Would C64 asm be a good choice? I've also heard that m68k asm is nice. How do the three compare?


The 6502 has three 8-bit registers: A, X, and Y. X and Y are used for indirect addressing and the like. MIPS has 32 32-bit registers (or 64-bit, and maybe floats and doubles), and you can use any of them for anything. The 68k is more like a bigger 6502 than a MIPS. Personally I prefer programming a RISC chip (like MIPS or ARM), but the choice to make is more around the complexity of the other hardware than the CPU itself. For assembly, the Nintendo 64 was the sweet spot, IMHO, as it was the last one where we ran as the kernel and could do anything.


I haven't done any MIPS assembly. But 6502 assembly is very simple. The only tricky bit is that the instruction set is very non-orthogonal. m68k, though, is very cleanly orthogonal.


MIPS is highly orthogonal.


I think it is more up to date to grab an Arduino, ESP32, Raspberry Pi, or Hydra-based console and program it like in the old days.


Does anyone know how to circumvent UEFI?

When the CPU starts, it will start reading instructions from a hard-coded address on the memory bus / EPROM somewhere, right? How can I directly control these bytes?

I don't want some proprietary firmware sitting between me and the CPU.

If it's not possible on hardware because "secure boot", or whatever, this should at least be possible in emulators like QEMU.

Does anyone know how to do that? ... or clear up my misconceptions? :)


In QEMU it's dead simple, you have control of everything; I believe that it boots into https://github.com/qemu/qemu/tree/master/pc-bios

A physical machine will still, despite everything, start executing at FFFF:0000 in "real mode", and the code run there will be found in a physical EEPROM. Some of these are socketed (although this is less common these days). So you can get in there and fiddle with the pre-boot code.

See https://www.drdobbs.com/parallel/booting-an-intel-architectu...

There is no way round the Management Engine, a source of distress to some. Oh, and you won't have any DRAM until you've run the DRAM training and turned that on, the early BIOS gets to use the cache as a scratchpad instead. See https://blog.asset-intertech.com/test_data_out/2014/11/memor...

If you like bare metal work with decent processing power ARM is probably the place to start.


> In QEMU it's dead simple, you have control of everything; I believe that it boots into https://github.com/qemu/qemu/tree/master/pc-bios

Manpage currently claims, "QEMU uses the PC BIOS from the Seabios project and the Plex86/Bochs LGPL VGA BIOS." But it also looks like that's as easy to replace as passing `-bios` to qemu-system-


That looks like exactly what I was looking for. Thanks!


Doesn't every ARM machine have its own initialization sequence?


Yes, but if you can find a chip with public documentation, it's a well-defined initialization sequence.

I learned assembly on TI's AM335x "Sitara" series and it was great, mostly because of the BeagleBone- it has onboard JTAG-over-USB, meaning you can single-step your way through your bare-metal code, set breakpoints, etc.


Unsure about the high-end ARM machines. But an ARM Cortex boot consists of loading the stack pointer and program counter from ROM and going.


Yes, that gets the CPU running - but for practical work you usually need to do some board-specific setup like configuring clocks and turning on the DRAM.


For a Cortex-M0 the clocks default to something sane, and the RAM is static. In one of my projects the reset vector is just the address of an init function written in C, which does nothing more than copy the data section from flash to RAM and call _start().

There is a bunch of peripheral setup, but it can be done from C.
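Not the parent's actual code, but a minimal sketch of the idea, assuming GNU tooling; the linker-script symbol names (_estack, _sidata, and so on) are conventions you'd have to match in your own linker script:

    /* Hypothetical Cortex-M0 startup in C: the "vector table" is just data.
       Word 0 is the initial stack pointer, word 1 is where execution starts. */
    #include <stdint.h>

    extern uint32_t _sidata, _sdata, _edata, _sbss, _ebss, _estack; /* provided by the linker script */
    extern void _start(void);                                       /* your real entry point */

    void Reset_Handler(void) {
        uint32_t *src = &_sidata, *dst = &_sdata;
        while (dst < &_edata) *dst++ = *src++;   /* copy .data from flash to RAM */
        for (dst = &_sbss; dst < &_ebss; )
            *dst++ = 0;                          /* zero .bss */
        _start();                                /* hand over; this should never return */
        for (;;) { }
    }

    __attribute__((section(".isr_vector"), used))
    const void *vector_table[] = {
        &_estack,               /* word 0: initial stack pointer */
        (void *)Reset_Handler,  /* word 1: reset vector */
    };

The only remaining magic is the linker script placing .isr_vector wherever the part expects its vector table.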


They're all Cortex cores these days; you mean the Cortex-M series.


Below UEFI is the BIOS, or the firmware formerly known as the BIOS. There is a project to make an open source firmware for PCs: https://www.coreboot.org. It works on a selection of newish motherboards.

You can't really start completely from scratch in an understandable way on Intel platforms, and it's iffy on ARM. Because setting up the DRAM requires some insane magic, where it doesn't really help if you can see the source.


> Below UEFI is the BIOS, or the firmware formerly known as the BIOS. There is a project to make an open source firmware for PCs: https://www.coreboot.org. It works on a selection of newish motherboards.

This simply isn't true - while UEFI firmwares do offer BIOS emulation, there's no "BIOS" underneath them on most modern boards.


They might mean the "meta-bootloader" which brings up all those UEFI capsules; AFAIK Intel's boot ROM doesn't, for example, parse PE headers.


> Below UEFI is the BIOS, or the firmware formerly known as the BIOS.

Source?

As far as I am aware, UEFI is a replacement for BIOS.


I believe they're using BIOS in a more general sense. You're right that UEFI replaces the old BIOS APIs that bootloaders used, but there's still firmware (e.g. coreboot) below or part of its newer APIs.


> I don't want some proprietary firmware sitting between me and the CPU.

No, you do.

A significant part of what that firmware does is initializing low-level hardware, like the system's memory controller. Replicating that is probably well beyond your abilities.


Seriously, it's not just a matter of poking a few registers - it involves doing a bunch of statistical measurements of memory errors to pick the correct thresholds on the data bus


And it's rather hardware-specific. Even if you have a solution which works for your CPU / motherboard / RAM, changing any of those components is quite likely to make it stop working. (And lord help you if you want it to work across CPU manufacturers...)


I mean, ideally it would be open-source and verifiable. But damn there's no way you're going to hackernews it in a weekend


This RISC-V board has about zero closed source code in its boot sequence:

https://www.sifive.com/blog/an-open-source-release-of-the-fr...


That is not really a thing anymore; most processors now have a burned-in ROM from the vendor that they boot into first.

If you really want to understand and control a processor when it boots from nothing, you should look into a FPGA RISC-V development board.


Go check out the Coreboot project. They're about as low-level and bare metal as you can get, because Coreboot is not running on the board firmware, it is the board firmware. And as an open source project, they document all the various things they have to do in order to initialize all the hardware on a board and have it ready to be used.

You are generally correct in your assumption: once the CPU comes out of reset, it will reach for a particular memory address to begin execution. Some will directly begin execution from a fixed address. A sibling comment by pjc50 mentions that, on x86, the CPU will be in 16-bit real mode and begin fetching instructions from FFFF:0000. Other architectures work slightly differently. The Motorola 68k loads its initial stack pointer from the 4 bytes at 0x000000 and its initial program counter from the 4 bytes at 0x000004, then begins execution there.

As you saw, the child of pjc50's comment explains how to pass your code directly to the beginning of the CPU's execution in QEMU. If you want to do this with actual metal, various classic chips of yore (Z80, 6502, 68k, etc.) and their documentation are relatively easy to get. A nice thing about those older CPUs is that their memory interfaces are rather simple compared to today's. You can wire up a very basic system with the CPU, an EPROM, an SRAM chip, and maybe a couple of other chips as glue logic, all on a basic breadboard. And then you really can control those first bytes of executed code, on actual metal.
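To illustrate the 68k case, here's a hypothetical C sketch of the first eight bytes of a ROM image; the section name, the stack address, and an m68k-elf-gcc toolchain with a linker script placing .vectors at address 0 are all assumptions:

    /* First 8 bytes of a 68000 ROM: initial supervisor stack pointer, then initial PC. */
    #include <stdint.h>

    void reset_handler(void) {   /* execution begins here after reset */
        for (;;) { }             /* a real program would start setting up hardware */
    }

    #define INITIAL_SSP ((void *)0x00100000) /* assumed top of RAM on an imaginary board */

    __attribute__((section(".vectors"), used))
    const void *vector_table[2] = {
        INITIAL_SSP,             /* fetched from 0x000000 into the stack pointer */
        (void *)reset_handler,   /* fetched from 0x000004 into the program counter */
    };

Burn that (plus the rest of your code) into the EPROM mapped at address 0 and the CPU will happily start running it.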


Hardware initialization after power-up is HARD; here's a high-level overview: https://depletionmode.com/uefi-boot.html


I really hate to be the downer here, but have you heard of Intel's Management Engine? :(


Well that one runs on a separate (albeit embedded) CPU.


Great question. Things have gotten ridiculously complex. You might look into https://wiki.osdev.org/UEFI


Not exactly bare metal when you've got UEFI in there.

In the embedded world, "bare metal" means you control the first byte executed by the CPU when it comes out of reset, and aren't using any kind of operating system or proxy loader in between. But it gets kinda fuzzy, because an RTOS is still "bare metal" and you have full access to the source code.


That's not really bare metal either. You should be directly manipulating the electrons flowing through the material :)


Nah, electrons aren't quite full bare metal either. Mechanical computing gives you real bare metal!


Can't have metal w/o electrons. :)


If you want to go down a different rabbit hole than being "bare-metal" while still being in UEFI, take a look at EFI Byte Code: it's a strange and arcane little virtual machine that you can use to write cross-platform UEFI drivers. Here's a simple emulator for it: https://github.com/yabits/ebcvm


I recommend anyone just starting with bare-metal assembly to get an STM32F0 board and write assembly programs for it. I'm just gonna plug my super small toy RTOS I wrote for Cortex-M0 a while ago.

https://github.com/kahlonel/rt0s


CubeMX is the best config manager I've ever used, the clocking interface alone should be mandatory for all embedded companies.

I have still yet to really need an RTOS, even for BLE.


Just an aside, but it always felt weird to me that we call it bare-metal while the actual code runs mostly on semiconductors.

I am very glad that in my bachelor program our microcontrollers class actually made us hand-enter hex codes on a kit. It got tedious after a while (maybe it should have been for only a few weeks, not the whole semester), but it gave me a weird sense of being one with the machine. And it's an awesome ice breaker when talking with older programmers. For some of them I am the only one of my age that they have met who has ever done this. Another thing is it helped me sort of see the flow of it and encouraged optimization.

(I don't want to give too much credit to my college; they did it not as some great pedagogical trick but out of frugality and laziness)


Bare metal comes from mainframe days. The largest systems like IBM 360-67 and others were called "Big Iron" - see https://www.ibm.com/ibm/history/ibm100/us/en/icons/system360... for NASA's moonshot-era Big Iron. You could run programs on OS/360 or you could, if daring, run code directly on the machine, on the "Bare Metal". It's quite common to run 1-card programs on the Computer History Museum's IBM 1401 --- which was not really considered 'Big Iron' at the time, although today we call it a 'Mainframe' https://computerhistory.org/exhibits/ibm1401/


Interesting. But why were they called big iron? It doesn't seem to be in IBM's marketing material (at least I couldn't find it) and the Wikipedia reference points to a 2005 vulture article where they call it the big iron [1]

[1] https://www.theregister.co.uk/2005/07/20/ibm_mainframe_refre...


I think it might come from auto body shop terminology. We would remove layers of paint or rust and get down to the "bare metal" for prep work. That "down to the bare metal" phrase was very commonly used.

That is to say "no layers" or "right on the substrate".

Just a guess though.


We call the code that users can actually interact with (even code that's deployed to a web server on the other side of the world) "production" because that's what factories called it when they started producing something. It doesn't make much sense when you think about it either, but I also can't think of anything better to call it.


You're totally right. I think the intermediate step was that software houses producing software prior to online distribution becoming mainstream would refer to software as being 'in production' when it was sent to the factory for CD replication and distribution. Literally being 'in production' at the factory. I believe the term has persisted from this.


> That is to say "no layers" or "right on the substrate".

Ok, I feel very pedantic saying this, but the substrate is the thing that is mostly semiconductors :P.


This "silicon is not a metal" pedantry is a whole new layer of tedium. But if you want to take it that way: the data is mostly carried in the metal layers of aluminium, and the substrate is .. a substrate that's not doing very much.


If you really want to be pedantic then the data is processed and stored in flip-flops and stored as charge in semiconductor dielectrics.

But don't take this so seriously, I work in semiconductor manufacturing and even I don't take it that seriously.


I've always thought:

All possible abstraction removed = bare metal


On x86/64 only the CPU vendors go down to that level. Well them and security researchers.


Would you exclude compiler/assembler developers? Are they not using the bare chip capabilities? Is assembly an abstraction?


On micro-coded CPUs Assembly is indeed an abstraction, because that is not what the CPU is running, hence why many mainframe manuals refer to Assembly as bytecode.

On x86/x64, the chips have long migrated to an internal RISC like architecture, with an Assembly => micro-op translation step on the decoding unit.


> Is assembly an abstraction ?

Yes, assembly is a (mostly fairly thin, on non-microcoded hardware) abstraction over machine code.


Assembly is an abstraction, as the others have mentioned, but many compiler people are aware of this as it affects performance.


My first actual programming class was on an Intel 8085 kit, and I have to say that hand-assembling on pre-printed carbon sheets was pretty cool. It has served me well in my career to have started at that level.


It would serve everyone well to start at that level. I write enterprise CRUD webapps, and learning these things helped me tremendously.

Unfortunately we now have a culture that views this type of knowledge acquisition as "gatekeeping"


Most universities and technical schools of sufficient quality do have Assembly programming classes, even if it is a kind of light introduction to the subject.


How so? Why would that be gatekeeping?


Not OP but I can venture a suggestion. There are a lot of people who believe that programming computers is fundamentally simple, and that programmers really only need to know about 10% or so of the things that programmers are traditionally taught. These are the people who will insist that things like algorithms and data structures are meaningless for most programming tasks: as long as you remember all of the Javascript keywords, you've got as much education as you need. So if you or I come along and say "you'll be a better programmer in any language if you understand assembler", somebody else will invariably accuse you of perpetuating an elitist system that prioritizes meaningless theory over actual practice (i.e. "gatekeeping").


My former business partner is in the boat you describe. He learns the bare minimum needed to do the task at hand and seems pleased with that. He still has the joy of discovery, but it's more about what he can do with the tech than any appreciation of the formalism. He's happily working as an indie game coder, and while he doesn't exactly have FU money, he's for the most part cleared the first milestone of making rent for the past few years.

I'm in the opposite boat. I've learned so much about the theory of computing that I almost can't program anymore because most of what we do today feels like a waste of time to me. It's all convention and application now, with so many barriers to entry that I feel like 95% of what I do on any given day is setup. The dreams I had for how computing might evolve and lead us to the endgame of AGI feel more distant to me now than ever before. It will likely happen through the biggest players, using whatever proprietary tech they come up with, and leave garage hackers in the dust. I don't have a good feeling about whatever artificial agents arise from that.

So there is a lot of survivor bias in programming today. I feel like Obi-Wan Kenobi, beat down by the industry, marooned on some distant planet. Meanwhile the youth subscribe to empire quickly, because all they see is glorious rewards. Seeing haggard old graybeards like me fall from such early potential makes them rightfully skeptical of the gatekeeping you describe, the adherence to the old religion of computer science.

Or I'm just full of it. I don't even know anymore. I wish I was part of something bigger again.


It's a strange field that doesn't have clear boundaries and constantly changes.

My thoughts keep coming back to an old carpenter who was making a truly wonderful kitchen in a strange corner of an old building. None of the walls, ceiling, or floor around it were straight, and it had tons of weird niches. I asked him how he could attack such a problem with such confidence; I would have to spend days pulling my hair out just making a drawing. He said carpentry is roughly 300 methods, of which you only need 120 to 140 to do any job. The rest are just tricks that you don't really need, but they are impressive to those who know the problem.

I keep thinking of that in a programming context for some reason. Nowadays you just order a plug-and-play kitchen that fits exactly, a novice can ikea it into place, everything works and it looks fantastic. Programming will get there one day. Until it does it will just look really weird to the old carpenter. So you grind the wood down to particle board, you glue plastic on it that looks like wood, then it gets moist and you replace the entire kitchen? .....!


Yup. I feel you.

A few years into my dev career, I adapted to make maintainable stuff. Because I learned that in 6 months I'd have to fix my own bugs.

Now it seems most code is throwaway, one-off, write only.

I haven't been able to "let it go". I still obsess over making my code correct. No one else seems to care. They get rewarded for fixing their own bugs ("velocity!") which I mostly avoid. So my KPIs look terrible by comparison.


Things get better. Or, they did for me when I retired earlier this year :-)


I think it's largely people talking past each other. One group claims that deeper understanding is useful and the other group says "no way, you don't need deeper understanding to get into (e.g., web) dev!". Being useful doesn't mean "necessary for an entry level position in the highest-level subdomains of computing".


> necessary for an entry level position

I still can't figure out how programming computers (the most complex task a human can undertake) managed to become singled out as the only profession in the history of humanity that is simultaneously assumed by so many to be something you need only a cursory understanding of to be proficient at.


Yeah the claim that it is "the most complex task a human can undertake" is a little.... iffy. I mean, most of the time it's really not. Most of programming is exactly as complex as most of civil engineering or most of plumbing. Most of a sufficiently mature field is usually not that complex at all because the most complex stuff is abstracted out. My civil engineering friends don't design bridges from scratch and my plumber doesn't thread the pipes themselves. They rely on industry standards which give them enough abstraction to be productive.

Now, this is not to say that at its frontiers it's not very complicated. But so is every other field. Ever thought about plumbing a space station? Or designing a rapid-deploy bridge? You can't compare the frontiers of one field with the middle of another.


> most of civil engineering or most of plumbing

both fields that have strict educational and credential requirements to practice professionally. Both strictly - dare I say it - gate-kept.


Informatics as well: Informatics Engineering is a protected title in many countries.

While the exam can be avoided if one isn't into signing contracts and taking legal responsibility for project execution, the professional order still has the last word on which universities are allowed to claim that they grant engineering titles in Informatics.


[flagged]


Well, I don't know how close I am to getting yelled at for feeding a flame war here, but...

You described yourself as "a guy who taught himself programming starting with calculators in middle school by the way (TI-Basic => C via SDCC => Assembler on calculators and MCUs". So you've had, I'm guessing, a few decades to learn and absorb all this stuff, and you started from the bottom/most concrete layer of abstraction and worked your way up - just as GP is suggesting everybody ought to do (and which I'm agreeing with). By mastering each layer before popping up to the next, each one was simple and made intuitive sense, so it seemed easy the whole time.

Now compare your experience to that of the hypothetical stockbroker who's diving into React native app programming without even really understanding what a loop is. He has decades to go before he understands what he's doing! What's NPM? What's a text editor? Is that different than a word processor? I screwed up my initial install... how can I start over? Insisting that programming is really easy and it's gatekeepers adding requirements that make it hard isn't doing him any favors; he's going to feel like a complete failure.

Rather, being honest with him and telling him that software abstraction is layered, and each layer depends on the ones below it, and while most people work at the top layer, each layer has idiosyncrasies that impact the layers above it that you need to understand when something goes wrong so you can easily troubleshoot problems, will probably ease his anxiety about diving into this (yes, very complicated) field.

And for the record, yes, I absolutely believe that programming as a profession would benefit from professional licensing like medicine, law and accounting do.


> each layer has idiosyncrasies

You don't need to understand those. The practical side is this: You can ask someone who does understand. The less practical reality is that the people who created the layer didn't have what it takes. (skill, time and/or money)

> programming as a profession would benefit from professional licensing

The existing solution is really quite simple. You have every layer up to the wall socket done by licensed professionals. Then you let those without formal certification go wild plugging things in, wiring extension cords, lamps etc. When comfortable with that they can even lay some pipe, pull some wires in it, add switches and wall sockets (As long as the components are certified!) All the way up to the fuse box (and no further!) They can even wire their whole house provided someone with certification signs off on it.


Same reply now that I'm not rate limited...

> Now compare your experience to that of the hypothetical stockbroker who's diving into React native app programming without even really understanding what a loop is. He has decades to go before he understands what he's doing! What's NPM? What's a text editor? Is that different than a word processor? I screwed up my initial install... how can I start over? Insisting that programming is really easy and it's gatekeepers adding requirements that make it hard isn't doing him any favors; he's going to feel like a complete failure.

> he's going to feel like a complete failure.

Why? You're casting this accusation on this hypothetical stock broker. Why are they going to feel like a complete failure? How is this hypothetical stock broker any different from the kid I was, literally hitting random buttons in the programming menu of my father's calculator until some text appeared?

> you started from the bottom/most concrete layer of abstraction

No, I didn't. I started from BASIC. I only learned C because assembler looked like Greek, and I heard you needed assembler if you wanted those fancy graphics all the best calculator games had, and I learned that C let you write assembler somehow without writing assembler.

If I had quit because I didn't understand half of what I was doing (I didn't), I wouldn't be a programmer today. I didn't feel like shit; I felt like a kid who had just found a candy shop: what great things were there for me to learn next? Sure I had doubts, but they were easily drowned out by any modicum of progress I made, and soon enough I learned that it's ok because _no one_ knows all the fundamentals.

Out there there's someone who's scoffing at this talk of "knowing assembler lets you know how computers work"; they're ready to bust out explanations of instruction pipelining, speculative execution (for now), how memory controllers work, and what the microcontroller in your HDD or SSD is doing to let your assembler do any kind of useful IO without you understanding how the underlying media works.

-

And also, if the stock broker quits because they don't understand what they're doing, in what universe is telling them:

"Actually go and learn this other stuff, that's the fundamentals. Just realize has a much much MUCH longer feedback loop, there's orders of magnitude more surface area before you get back to the level you were working on and can produce something that you recognize as being on the path of making the app you wanted"

going to do anything but make them _quit even faster_?

In my experience the best way to get people feeling empowered is to let them make something that they can envision being on the path to what they want to make.

Writing a hello world in assembler and having it show up in a command prompt is not going to get my stockbroker friend nearly as happy as writing a hello world in React and running it on his phone, because he sees a glimmer of how the latter gets him to his goal of an app.

You're ascribing individual behavior to a broad set of people because they don't agree with your approach to learning being in their best interests. That's what I have such a problem with.

Who cares if they don't know what NPM is, who cares if every time he screws up he opens App Copy (23) and deletes the current one.

If they're going to quit because they don't know what NPM is or what a text editor is vs. a word processor, how is sitting them down and trying to get them to understand assembler going to ease their anxiety?

And if you feel that's hyperbole, even getting them to understand NPM past copy and pasting commands. You'd be amazed at the things people have built that help real human beings do useful things every day while never truly understanding how NPM works.

There's nothing wrong with embracing people having no idea what they're doing just running at the wall until it starts to stick. It's what I did, and I couldn't have done it any other way.


You don't need to be proficient for an entry level position by definition. Further, proficiency at programming is only tenuously related to an immersive knowledge of computer science topics. Further still, proficiency at software engineering is more about soft skills like writing readable code, managing projects, and collaborating with teams to ship large units of software. Having lots of knowledge of low level components or math is icing on the cake for the overwhelming majority of applications (much to my chagrin--I really like the lower levels and the technical nitty-gritty).


> You don't need to be proficient for an entry level position

You do need a fair amount of education, though - for every meaningful profession _except_ programming.


Seems like “profession” just isn’t a very useful or meaningful term. If you insist on using it, I might suggest that “programming” and “development” aren’t professions, while software engineering is; however, there are lots of SEs whose jobs more closely resemble dev jobs.


> the only profession in the history of humanity that is simultaneously assumed by so many to be something you need only a cursory understanding of to be proficient at.

This can be traced to the 1970s.

Computer Science is only half of a field. Semi-arbitrarily splitting it off from EE harmed both fields. Instead of one complex field, there are two very shallow fields.

> I still can't figure out how programming computers (the most complex task a human can undertake)

Neuroscience, making pizza (or most cooking, really), marketing, high-speed motorsport, psychology, most weapons development (outside of guns, which are fundamentally simple), writing mass-market books and drug development all seem to have programming beat in terms of complexity for what ninety-nine percent of professional programmers do.

Jobs worded it well in an interview at one point. Something along the lines of, "I knew there was a market for people who would never be able to design hardware or put a kit together but who still would love to write their own software," in the context of why the Apple II was successful. It works just as well to show why the field is the way it is.

Most computer programmers don't have a clue how the hardware works. That used to be an essential part of it. It's not any longer. The bar has gotten lower and lower, and that's not necessarily a bad thing. Python can be learned in an hour, so why not? They can still make useful things, so there's no problem with it.

Just like in any other field, the bar for "proficient" is low compared to average, but the bar for "exceptional" is high.


I wouldn't even say the bar has gotten lower.

The bar has multiplied into multiple bars, all at their own levels but in different dimensions.

You can have an innate understanding from a single transistor to how individual frames are handled by an ethernet PHY to how the packet scheduler will interact with your userspace app and be an "exceptional" developer.

But the moment someone asks you to write an Android app that hits an endpoint and displays the data with some formatting, none of that matters if you've never written an Android app.

Nothing is exceptional in a vacuum, and "programming" is so vast with so much space between disciplines it might as well be a vacuum.


I think there are extremes here, and you and 'commandlinefan are on opposite sides of the extreme. I'm somewhere in the middle, in that I agree with neither of you.

Writing an Android application is not something that requires anyone to learn an entirely new skillset. At most, it's a programming language and a toolkit of difference for any experienced programmer.


...

How is learning a new programming language and new toolkit not learning a new skillset???

Even in a literal sense of the two words:

Skill: "a particular ability."

Tool: "a piece of software that carries out a particular function, typically creating or modifying another program."

How is learning "a piece of software that carries out a particular function" not "a particular ability."

-

In a non-literal sense, learning to write an Android App is a new skill if you didn't know how to do it before. Like you could happen to have Java experience so it's less new to you, but either you knew how to do it before, or you didn't and now you do... so you learned how to do it.

Are we literally at the point of gatekeeping what it means to learn how to do something??


> How is learning a new programming language and new toolkit not learning a new skillset???

Riding a Mongoose bike is not substantially different from riding a Schwinn bike. Riding a pink bike is not substantially different from riding a green bike. Writing Java is not substantially different from writing any ALGOL-derivative.

It's like saying writing a program for FreeBSD requires a different skillset than doing so for NetBSD. It doesn't.

Not everything is gatekeeping, and it's disingenuous to claim so.


> Riding a Mongoose bike is not substantially different from riding a Schwinn bike. Riding a pink bike is not substantially different from riding a green bike. Writing Java is not substantially different from writing any ALGOL-derivative.

But riding a unicycle is somewhat different than riding a bicycle, even though they look a bit similar and even have some of the same features. You’re drastically underestimating the amount of time it takes to write a new language proficiently: perhaps you’re confusing the ability to read a language with the ability to write it?


Haha oh god, in one fell swoop you’re acting like writing Android apps is just knowing how to write Java and writing Java is just like writing any Algol-like.

You literally reduced language design to changing colors on a bicycle.

Are you joking?

You have no idea what Android is if you think its development process vs Java (which Java? Embedded Java, desktop with JavaFX? Server side?) is like FreeBSD vs NetBSD.

Maybe more like FreeBSD vs Windows and you’re trying to write a UI application, but technically you can use C on both platforms so it’s the same right?

Thanks for the chuckle in these dreary times...

By the way do you actually think syntactic differences are all that separate languages so once you know the general syntax you pretty much know the language, or are you pretending to not know how programming in multiple languages actually works to prove a point?


> Haha oh god, in one fell swoop you’re acting like writing Android apps is just knowing how to write Java and writing Java is just like writing any Algol-like.

A: I've written a couple.

B: Anyone who knows any ALGOL (exception: 68), Pascal, Oberon, so forth, will get Java in minutes.

> You have no idea what Android is if you think its development process vs Java (which Java? Embedded Java, desktop with JavaFX? Server side?) is like FreeBSD vs NetBSD.

> Maybe more like FreeBSD vs Windows and you’re trying to write a UI application, but technically you can use C on both platforms so it’s the same right?

You're demonstrating reasoning flaws, alongside misinterpreting my words. Writing an application for, say, NeXT, back in the day, isn't different at all from writing a modern Mac application. What you do carries over. It's the same basic steps every time. Write a desktop Linux application, write a desktop Windows application, write a Mac application, write an iOS application, write an Android application. You'll have to use a few different wrappers or libraries, but it's not a new skillset.

You claimed that learning a new language is giving yourself a new skillset. It's not with the majority of languages, especially languages like Java, which introduce little compared to their immediate predecessors outside of syntax changes.

> By the way do you actually think syntactic differences are all that separate languages so once you know the general syntax you pretty much know the language, or are you pretending to not know how programming in multiple languages actually works to prove a point?

This is a frankly ridiculous comment. ALGOL-derivatives grab more from ALGOL than syntax, and some don't borrow syntax at all. Any Pascal programmer can go from writing Pascal to Oberon to Java to ALGOL to Go with ten minutes per language, in any order, despite the differences in syntax. The languages are not differentiated strongly enough to matter; that C programmers could go to writing Java in the span of a day was a major selling point that Sun used, and C and Java are more different (though again, not that different) than any of the previously-listed languages.

Any J programmer can go from J to APL to K to Nial trivially as well, despite vast differences in syntax. Knowing one or the other doesn't mean you have a differing skillset.

The same is true for most Lisps (Connection Machine Lisp being an exception, as a counter-example; despite that, knowing any of these isn't a new skillset).

Just because something requires you doing something slightly and superficially different than what you were already doing doesn't mean it's magically a new skillset. Defining finding new libraries as "new skillsets" is just silly, and erodes the meaning of the term.


> Any Pascal programmer can go from writing Pascal to Oberon to Java to ALGOL to Go with ten minutes per language, in any order, despite the differences in syntax.

I don’t know if you actually believe this, or if you’re defining “write” as literally typing letters that compile, rather than being able to write useful, productive code in each.

The rest of your comment is more wtfs kind of like that one.

This is not a productive use of my time because either you have no idea what you’re talking about, or you do but you’re intentionally throwing basic reasoning skills straight out the window and leaning heavily into playing games with semantics to support your point at all costs.

Now I’ll be charitable and assume the latter, but if that’s your goal, then what more is there to say?

Yes, Android vs Java is a pink bicycle vs a green one. Pat yourself on the back for that revelation.


The Java runtime ships with some 17,000 classes. Ten minutes leaves you prepared to poorly reinvent thousands of wheels.


[flagged]


> they find programming computers simple because they're not really programming

Actually, the "learn javascript in 24 hours" crowd finds out the hard way that programming computers is orders of magnitude harder than they initially thought... but they still insist that it's because gatekeeping programmers are artificially making it harder than it has to be.


When I was a kid, I started with BASIC, but I thought of "real programming" as being compiled languages, particularly C and assembler. Then in college I had a course where we wrote microcode for an imaginary CPU. FPGAs weren't really a thing then, but I would have liked to take a course with those. Now after years of Perl and SQL, I've come full circle back to (Visual) Basic.

But I would say assembly language is generally easier than javascript. In most cases (maybe not x86) there's a limited number of operations you can perform, and straightforward conventions on how to do them. Everything is documented (yes I know you can quibble with that, but relatively speaking) and there aren't ten different ways to do something of which 9 are wrong. You aren't dealing with all the layers of abstraction that tend to leak through either.


Assembly may be easier, but programming in assembly is a lot more tedious. A simple expression like "y=7*x+5" is at least four instructions, assuming you have a MUL instruction. And string manipulation is just as bad, if not worse, than in C.


Two on Intel, assuming the values should go in registers:

  imul rY, rX, 7
  add rY, 5


I assume he's including the:

    mov rX, [...]
    ...
    mov [...], rY


It's entirely in the phrasing. "It would be beneficial to learn how the machine actually works" versus "you're not a real programmer unless ..."

Bootcamps demonstrate that not knowing assembly is not a barrier to earning a decent software engineer salary.


Maybe that is so, because teaching people assembly allows them to pull magic tricks on companies. An example I can think of is Apple banning all dynamic code generation and execution in iOS apps.


Search on twitter for "computer science" and "gatekeeping" and ask all those people, I dunno. I gave up trying to reason with unreasonable people a long time ago.


We also programmed the 8085 machine very recently (2015). It was a very weird feeling realising that all the myriad abstractions come down to basic load/store/arithmetic/logical instructions. Had fun implementing squaring a number whose result wouldn't fit in a register.

EDIT: It was an 8085 kit, not 8086.


Astrophysicists consider any element heavier than Hydrogen or Helium a metal. :)

https://en.wikipedia.org/wiki/Metallicity


Hasn't always been semiconductors... or electronics at all, for that matter. That just happens to be the current majority physical implementation, where metal is a little scarcer than it used to be.


A couple of years ago, I was learning PDP-11 assembly on 2.11 BSD and enjoying it, but then the old textbook got to the point of system calls. I couldn't get anything working properly, so eventually I found something else to do. I did very much like it though.

Also, TIS-100 from Zachtronics (the assembly language game you never asked for!) I think made assembly type programming less intimidating.


There's a bunch of people doing bare-metal work on Ben Eater's two projects right now:

https://eater.net/


It used to be way easier. Something like this:

    debug hn.com

    a 100
    mov dx, 200
    mov ah, 9
    int 21
    mov ax, 4c00
    int 21

    a 200
    db "Hello, World$"
    
    w 300
    q
Replace 100 with 0 and write it to the first sector of a disk and you had a bootable program (BIOS interrupts only, of course).

Edit: Geezus. It's just an example of how accessible getting something running in assembly language was compared to all the qemu, UEFI stuff in the article.


int 21h uses ...

A BIOS CALL. :)

Still not bare metal.

:)

(I'm totally gatekeeping for laughs: write your own BIOS you noob!)


INT 21 is DOS, not BIOS.

You can still use INT 10 for BIOS video services.

Or just write into the text framebuffer directly; it's at B800:0000 in colour modes and B000:0000 in monochrome modes (it's surprising how much I still remember from when I exclusively did x86 PC Asm, ~3 decades ago.)
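To show how little that takes, here's a rough C sketch of poking the colour text buffer; it assumes you have flat access to physical address 0xB8000 (your own boot code, a DOS extender, etc.), and the function/macro names are mine:

    /* VGA colour text mode: 80x25 cells, each cell = character byte + attribute byte. */
    #include <stdint.h>

    #define TEXT_BUF ((volatile uint16_t *)0xB8000) /* B800:0000 as a flat physical address */
    #define COLS     80

    static void puts_at(int row, int col, const char *s, uint8_t attr) {
        volatile uint16_t *p = TEXT_BUF + row * COLS + col;
        while (*s)
            *p++ = (uint16_t)((attr << 8) | (uint8_t)*s++); /* high byte = colours, low byte = char */
    }

    void demo(void) {
        puts_at(0, 0, "Hello from bare metal", 0x1F);       /* bright white on blue */
    }

No BIOS, no DOS: the characters appear because the video hardware is constantly scanning that memory.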


Err I don't know... Didn't the bios set the video controller up for you? Not bare metal enough, I'm afraid. /s


On a random but interesting note, manuals for some of the early IBM PCs have the BIOS source listing in them


I had a giant pink book of PC BIOS that was my bible from 1988 to 1994 before the internet took off. I can't for the life of me remember who published it, but it had everything you needed to know about PC BIOS and an old IBM bios source listing. It was over 600 pages.

It looked like this 1200-page book, but this isn't it (also a book I used):

https://www.amazon.com/PC-Interrupts-Programmers-Reference-T...

I also had the Apple ][+ ROM listing in one of those spiral-bound books from apple circa 1980 but that too has been lost, sadly.


Sounds like The Programmer's PC Sourcebook (which is on my shelf, and yes, one could say it has a pink cover).


YES! That is exactly it. I wish I saved mine for nostalgia.


We had something like this but it was a four-inch thick printout on green-bar paper and nobody knew where it came from.



Wasn't int21h a call to MS-DOS functions? (Also understood by Windows)

I'm not sure it would work if you stuck this in a boot sector.


Don't remember (and can't be arsed to check) if it was int21, but IBM PC BIOS does have a call to write text to the (B8000, usually) screen text buffer, complete with scrolling.


I do remember (and don't have to check [1]) but MS-DOS used interrupts 20h through 2Fh (mostly 21h). BIOS used interrupts 10h through 1Fh. 10h was for text and graphics. The "text" screen could be at either B000:0000 (physical address B0000) or B800:0000 (physical address B8000) depending on if it was monochrome or color, so to be pedantic, one had to make a few checks to see what card was installed (if you wanted to bypass the BIOS).

[1] Why do I even remember this stuff? I haven't used any of this since the early 90s! Sheesh!


DOS used a bunch in from 20 to 2F, but mostly 21.

IIRC, BIOS was 10 and 13.


I'm just tickled that someone can post a snippet of x86 assembly on HN and several people start pointing out perceived issues with it.


I would like to point out that your bio is a crime against humanity.


If this interests you I suggest Michael Abrash's Graphics Programming Black Book.

I have a copy I'd sell, although you can find it for free, legally, online. It's about a thousand pages....



You can explore the same world easily in AVR, ARM M0/4, or PIC. And really be running on bare metal.


Does anyone know if qemu and OVMF can be made to work in plain text mode? I just tried, but qemu wants GTK by default, and when I pass -curses I get a message that the screen is in 640 x 480 graphics mode. Is UEFI really so complex that it doesn't allow for plain console access?


Try '-nographic'. It can be a bit wonky IME, but good enough for the basics.


If you do this (or -serial mon:stdio and leave the VGA output), you can do console I/O via com1, and it works pretty well. As a bonus, this is viable on real hardware too, although most consumer level motherboards don't do serial consoles :(


Thanks, -nographic did the trick.


At first I thought this would be a story about DIY metalwork projects during quarantine.


I always wanted to do this stuff, but classes just never taught it. I think some of the ECE students got some classes which taught some assembly, but not us lowly CS people.


No kidding? When did you go to school? When I went back for my master's degree in 2005, they made me take a "leveling" undergraduate x86 assembler course because my 90's-era S/360 (yes, in the 90's) was too out-of-date, but they definitely required at least one full assembler course back then.


Well, assembly doesn't have much to do with computer science. The fact that aspiring programmers are taking CS degrees is the root of this problem =/


It used to be standard to teach a Computer Organization course, which would involve some assembly programming, even in a liberal arts CS program.


Sure it does, the problem here is the existence of CS degrees that don't teach programming properly.


FYI if you're trying to follow along - it appears that QEMU doesn't (for some reason) run on a headless workstation. You'll have to have X installed.


You can run qemu without the graphical UI with `-nographic` (serial port output to the console) or `-curses` which can show the vga text output on your console.


Ah - thanks. I wish I'd known that before I installed the full X distribution.


Any instructions for running this under macOS? QEMU runs fine, of course, but there don't seem to be clear instructions for getting OVMF.fd (or building it from edk2) on macOS.


Is there another level of assembly than bare-metal?


Love playing with Assembly and bare-metal :)


Why are assembly related topics so popular on Hacker News? I don't see pursuit of this topic happening in my anecdata of programmers I interact with in day to day life.


Partly curiosity; also, I think there's a lot of nostalgia.

Back when I got started with computers it was 1982, and the available programming languages were BASIC and Z80 assembly language. A lot of people growing up around that time did assembly language coding, because they needed the speed, or because (like me) they wanted to hack games for either infinite lives, or to ease copying.

Of course, assembly language is still used in some areas; it used to be necessary for optimization a lot more than it is today, due to evolving general-purpose hardware. But it is still useful for embedded and microcontroller programming.

These days I rarely touch assembly because I need to; I do it for fun, because it reminds me of simpler times and fewer abstractions, e.g. a recent project:

https://github.com/skx/math-compiler

(Of course these days with microcode, and complex processors, it's not really bare-metal anymore. Not like it used to be. But try telling kids today that, and they won't believe you !)


> Partly curiosity; also, I think there's a lot of nostalgia.

I think it's deeper than that. I started coding professionally just before the internet took off. Before that I didn't have much interest in computer networks and seemingly overnight I was dealing with a huge new vocabulary of unfamiliar terminology like "port", "peer", "client/server", "protocol", "router", "bridge", "firewall". This wasn't just academic arcana either: whenever I had a problem, the network types would start asking me to check to see if the "firewall" was blocking my "client" from accessing the "port". I found myself able to muddle through by following their instructions, but I felt like a foreigner in a foreign land whenever anything didn't work the way it was "supposed to".

Web programming documentation wasn't helpful in navigating this territory; it assumed you either knew all this stuff or somebody else was taking care of it for you. One day I decided to start from "the beginning" and read TCP/IP Illustrated, and within a few chapters it all came together and I found myself able to troubleshoot problems that sometimes even the networking "experts" couldn't resolve.

Learning and understanding assembler is like that: once you understand EXACTLY how a computer works, there's no mystery in whether you coded it wrong or you didn't understand what to expect.


> Why are assembly related topics so popular on Hacker News?

Because there are engineers, perhaps at ARM, Apple, Microsoft, Intel, or Google, who work on the compilers, operating systems, and UEFI-booting devices you're probably running right now to build software or even to reply to this thread, and who still take an interest in blog posts like this.

> I don't see pursuit of this topic happening in my anecdata of programmers I interact with in day to day life.

Maybe not for you, but it's the reason you are able to post your message here, browse the web, and build software faster: engineers did this sort of programming. Sure, general end users shouldn't have to care, but it gives me confidence that there are engineers out there who understand enough OS internals to make things happen, even at FAANMG companies, rather than doing generic web apps all day long.


Came here to say just that, but also to mention that I know some web developers who love these topics and are ahead of the game compared to their colleagues.

P.S. And if a low-level programmer goes into web, stuff like this [0] happens :-)

[0] https://nira.app/


I'm just your generic line full-stack web engineer. I'm still interested in this kind of stuff.


Why are webdev related topics so popular on Hacker News? I don't see pursuit of this topic happening in my anecdata of programmers I interact with in day to day life.

Oh right, not everyone is in my field.


Answered in the very first sentence of the article

> Seeing a program you wrote running directly on the bare metal is deeply satisfying to anyone who enjoys writing software.

This is it. I have been on this site for over a decade now, and most of the obscure stuff is popular here because it satisfies this criterion for some people. It's actually different for different people: for some it's bare-metal programming, for some it's obscure languages like K, and for others it's certain libraries.


Anyone doing compilers, graphics, video or audio codecs, GPGPU, kernel development or OS drivers needs to know Assembly.

Maybe you don't do any of this stuff, but there are plenty of engineers that do this daily.


For me, the desire to better understand the innards.


Knowledge of assembly is also extremely important for computer security related topics such as reverse engineering, malware analysis, and exploit development.


Intellectual curiosity?


Some of us still write assembly now and then, even if it is for embedded systems and not x86. I found this interesting!


> programmers I interact with in day to day life

What kind of dev are you, and what kind of dev do you usually interact with?


Agreed, I think it's somewhat industry-dependent. I work with FPGAs and while we don't deal with assembly on a regular basis I'm confident everyone on my team would be able to develop or debug it.


Did you coin the term 'anecdata'? I really like it.


I doubt that they did - it's been a term since at least the 1980s.

https://www.lexico.com/definition/anecdata


Most people don't do this in real life.


sadly.


I am not sure if using heavily microcoded CPU instructions qualifies as "bare metal programming."


risque-metal programming maybe?


Touché



