All of my previous streams are archived here: https://www.youtube.com/channel/UCaV77OIv89qfsnncY5J2zvg
I have around 23 videos now on YouTube covering both ARM 64-bit and x86 assembly language.
I have two projects on the schedule that are 100% assembly language:
- Let's Make an Arcade Game in MS-DOS: 100% x86 assembly language. I use DOSBOX and period tools for this project.
- Arcade Kernel Kit: 100% ARM AArch64 assembly language running on Raspberry Pi 3.
All of the code for these is available on Github: https://github.com/nybblesio
I'm also working on a game engine called Ryu: The Arcade Construction Kit where I'm writing my own assembler for classic arcade CPUs.
I'll be working on the x86 project again starting 2 April through 7 April.
However, with all that said, I am producing non-stream content that will start airing on my YouTube channel soon (within the next 30 days). This content will be more structured and will focus on specific topics with set lesson plans. These videos won't feature any background music. I hope you'll enjoy them when they're available.
Knowing that the music is there for your focus actually makes it a lot easier for me to focus as well, somehow. Glad to know that.
Your voice is very relaxing, I'm actually working while I watch! Great explanations also.
I appreciate the hell out of people like you who take the time to spread their hard-earned practical wisdom. <3
a86 works great under DOSBOX but, sadly, d86 does not. I'm stuck using Turbo Debugger, which, as tools of the time go, wasn't at all shabby, and DOSBOX emulates it very well.
According to the DOSBox debugger, it gets stuck in an infinite loop poking/polling interrupts:
278407: CPU:Illegal/Unhandled opcode 63
278408: CPU:Illegal Unhandled Interrupt Called 6
278410: CPU:Illegal/Unhandled opcode 63
278411: CPU:Illegal Unhandled Interrupt Called 6
278413: CPU:Illegal/Unhandled opcode 63
278414: CPU:Illegal Unhandled Interrupt Called 6
278416: CPU:Illegal/Unhandled opcode 63
(Based on the fact that D86 was last updated in 2000 I doubt the author is going to be too interested in tinkering with it.)
D86 works just fine in QEMU - and, even better, if you use a CPU idle program for DOS to make QEMU not chew 100% of one core, the idle program will continue to have an effect even while D86 is running.
This being said, I have no idea how to use D86 :) and so cannot say whether an idler program will impact anything.
I tested with the IDLE.COM in "VMAdditions.iso" (date 3 Aug 2004, cksum 281796710). The program is so small (128 bytes FTW) that I see no issue with just
so you don't have to go index-of-/-hunting (you can just `base64 -d > idle.com` instead). I don't think whoever wrote this will mind :P
(Deliberately not using monospace for the block above to make things less visually jarring)
My guess is that the author of D86, Eric Isaacson (http://www.eji.com/), made use of some inside knowledge from his Intel days. Specifically, I know he liked to use encodings of the AAM instruction with bases other than 10. Intel didn't document the opcode properly, so history records it as only supporting base 10, but it can in fact support quite a few number bases. Anyhow, I would speculate that DOSBOX doesn't support such a flexible interpretation of the encodings, and that's why it's throwing illegal instruction exceptions. Maybe I should patch DOSBOX on stream sometime and submit the patch to the maintainers.
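For anyone curious what a generalized AAM actually computes, here's a sketch in C (the function name and struct are mine, just for illustration). The encoding is D4 followed by an immediate byte giving the base; D4 0A is the documented base-10 form:

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of x86 AAM semantics with an arbitrary base byte:
 * AH = AL / base, AL = AL % base. The documented form (D4 0A)
 * hardwires base = 10, but the hardware honors other divisors. */
typedef struct { uint8_t ah, al; } aam_result;

aam_result aam(uint8_t al, uint8_t base) {
    aam_result r = { (uint8_t)(al / base), (uint8_t)(al % base) };
    return r;
}
```

So `aam(42, 10)` splits 42 into its decimal digits (AH=4, AL=2), and a base of 16 splits a byte into its two nibbles.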
I continue to be amazed at the amount of backward-compatibility inside the average Intel x86 CPU. If something stays out of long mode, it can still do all of that. Impressive, really.
And... I hadn't thought through to the point of considering what you were using D86 for :)
I agree, things like sound emulation leave a lot to be desired. I only use QEMU for its networking and HW-accel virtualization - DOSBox runs rings around it with eg Win3.1...
Have you tried PCem and 86Box?
> Maybe I should patch DOSBOX on stream sometime and submit the patch to the maintainers.
Now that would be really awesome. :D
Lately I have been writing a lot of code in x64 asm for fun. I've found that it's impressive how much functionality you can fit in such a small amount of space. Randall Hyde's book got me started but I think that a much better introduction would be Paul Carter's PC Assembly Language https://pacman128.github.io/pcasm/
Chips are more complex, have more features, etc. Things pile up for compatibility. That is both a good and a bad thing.
As for the ISA, it's not clean, but there's only so much you can do to an assembly language, so it's not that bad. And the docs are pretty good.
(If you're following Paul Carter's book above, use something that supports protected mode, like a 386.)
The TMS34010 was something like a CPU and 2D graphics processor combined. It can change its word size anywhere from 1 to 32 bits, with data addressable on bit boundaries. It's wild.
The Genesis port ran on a 68k of course.
; eax = rbx[rcx] where rbx points to the base of an array of dwords
mov eax, [rcx*4 + rbx]
; => 8B 04 8B
; where the first 8B is the opcode for this version of mov, 04 is the modrm byte saying
; to mov the value into eax as well as that there will be a sib (scale/index/base)
; byte following specifying the data source, and then the second 8B is that sib byte
; (scale = *4, index = rcx, base = rbx -- the same value as the opcode is a coincidence).
; If we were moving into ecx, the modrm byte would be 0C because the 3-bit
; register specification portion of modrm would change to specify ecx.
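Both modrm and sib are just three bit fields packed into a byte (2 bits, 3 bits, 3 bits), so the encodings above are easy to check mechanically. A small sketch (names are mine; register numbers are the standard Intel ones, eax=0, ecx=1, ebx=3):

```c
#include <assert.h>
#include <stdint.h>

/* ModRM and SIB share the same 2-3-3 bit layout:
 * ModRM: mod(2) | reg(3) | rm(3)
 * SIB:   scale(2) | index(3) | base(3) */
uint8_t pack233(uint8_t top2, uint8_t mid3, uint8_t low3) {
    return (uint8_t)((top2 << 6) | (mid3 << 3) | low3);
}

enum { EAX = 0, ECX = 1, EDX = 2, EBX = 3 };

/* mov eax, [rcx*4 + rbx]:
 *   modrm = mod 00 (no displacement), reg eax, rm 100 (sib follows) -> 04
 *   sib   = scale 10 (*4), index rcx, base rbx                      -> 8B */
```

Running `pack233(0, EAX, 4)` gives 0x04, `pack233(2, ECX, EBX)` gives 0x8B, and changing the destination to ecx with `pack233(0, ECX, 4)` gives 0x0C, matching the bytes above.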
Another example might be:
lodsd ; eax = *rsi++ : "AD" (one byte opcode using implicit src/dst operands)
shl eax, 2 ; eax *= 4 : "C1 E0 02" where C1 is the shift-by-immediate opcode,
           ;                        E0 is the modrm byte (its /4 reg field selects shl, rm = eax),
           ;                        02 is the immediate count to shift eax by (a multiply by 4)
stosd ; *rdi++ = eax : "AB" store into destination (one byte opcode)
 http://www.felixcloutier.com/x86/LODS:LODSB:LODSW:LODSD:LODS... (there are others)
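Wrapped in a loop, that three-instruction sequence is just the following C (a sketch; the function name is mine):

```c
#include <stdint.h>
#include <stddef.h>

/* C equivalent of a lodsd / shl eax,2 / stosd loop:
 * load a dword from *src++, multiply by 4, store to *dst++. */
void scale_by_four(const uint32_t *src, uint32_t *dst, size_t n) {
    while (n--)
        *dst++ = *src++ << 2;   /* shl by 2 is a multiply by 4 */
}
```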
They have this representation whether you're using assembly or not... it's just that if you're writing assembly, then you have to pay attention to their blobby nature. If you're writing in a higher level language, then that language will do the work for you of translating, say, my_object["my_key"] = my_value into the assembly instructions that deal with indexing into your giant array.
At least on PC/Amiga/Atari world, macro assemblers were quite powerful, so you could kind of invent your own "C" using their macros.
You iterate by address, so on a 32 bit system 4 bytes at a time. Then just reference whatever that address is pointing to.
Yes, that is roughly it, although there is more to the topic, related to arrays and pointers, etc. In fact C's way of doing it is pretty low-level too, and similar to assembly: it just adds the address of the start of the array (the array name, which decays to a pointer to the first element) to the index times the size of an array element, to get the memory location of the indexed item.
A surprising point I read a while ago is that, because of this, the C expression a[i] is equivalent to i[a], where a is the array name and i is the index (an int variable). This is because C resolves a[i] to:
*(a + i)
and, since addition is commutative, that is the same as:
*(i + a)
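You can see this compile and run (the function is just a demo of mine):

```c
/* a[i] and i[a] both resolve to *(a + i), so the "backwards"
 * subscript is legal, if never advisable in real code. */
int commuted_index_demo(void) {
    int a[4] = {10, 20, 30, 40};
    return 2[a];   /* exactly the same as a[2] */
}
```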
The Kernighan and Ritchie book "The C Programming Language" is a good introduction to how arrays and pointers work (and their inter-relationship). (Seeing your profile, I guess you may have read it, but just saying.) I read the original version and the 2nd (ANSI C) version. Not sure if there is an updated version after that.
In the versions I read, they explain arrays and pointers in C pretty well (although you have to do some work yourself, it is not dumbed-down teaching like some you see nowadays).
>What about hash tables, that seems even more confusing...
The K&R book had a nice simple implementation of a hash table in C too. The hash function (from memory) was something like adding up all the characters' int values (in the string used as the hash key) and taking the result modulo some prime number (like 31). There was other stuff for handling collisions and buckets and so on. And it was of course only a demo version (though it did work); there are more advanced versions.
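If my own memory of the 2nd edition serves, 31 was actually the multiplier rather than the modulus: each character is folded into a running value scaled by 31, and the final reduction is modulo the table size (101 in the book). A sketch in that style:

```c
#define HASHSIZE 101  /* table size used in the K&R example */

/* Hash function in the style of K&R: fold each character into a
 * running value scaled by 31, then reduce modulo the table size. */
unsigned kr_hash(const char *s) {
    unsigned hashval = 0;
    for (; *s != '\0'; s++)
        hashval = (unsigned)*s + 31 * hashval;
    return hashval % HASHSIZE;
}
```

The book then uses the result to index an array of bucket heads, chaining colliding entries into linked lists.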
(Sorry for a few multiple edits, I was not familiar with how asterisks are used in HN as markup, so had to edit it a few times.)
Here are a few links for others who might not know about how they are used:
Google search for:
how to disable meaning of asterisk in hacker news
and from it:
I know this is what string literals in C translate to, but is it actually how anyone does it these days? I thought there was general agreement that a size field is mostly superior to an elephant in Cairo.
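("Elephant in Cairo" being old jargon for a sentinel value that terminates a search.) The "size field" alternative is the Pascal-style length-prefixed string; a minimal sketch (struct and names are mine):

```c
#include <stddef.h>
#include <string.h>

/* A minimal length-prefixed string, the "size field" alternative to
 * NUL termination: length is O(1) and embedded zero bytes are legal. */
typedef struct {
    size_t len;
    const char *data;   /* need not be NUL-terminated */
} lpstr;

lpstr lpstr_from_cstr(const char *s) {
    lpstr r = { strlen(s), s };
    return r;
}
```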
I feel like the hardest part of Assembly is all of the hex math required...
I understand when you use something along the lines of a disassembler that requires you to actually see the addresses of the values. But even these are sophisticated enough to give you symbolic values instead.
There are times where it feels convenient to express things in decimal and times where it seems better to use hex.
There are multiple reasons why it is useful or convenient to deal with data (whether memory addresses, data values, colors, etc.) in hex (including doing math on such hex values).
As I said in another comment in this thread, I have not done a lot of work in assembly language; but have done enough and read enough about it to know that it is useful, and not just a cool or retro thing that developers do.
Two hex digits compactly represent a byte, and one hex digit represents a nibble/nybble. Machine registers on devices attached to computers (printers, scanners, any other electro-mechanical equipment with a computer interface) are read and written programmatically to get their status and to manipulate and control them. These are some of the reasons why hex, hex math, and bit-twiddling in hex and binary are useful and still done.
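A quick illustration of why hex maps so cleanly onto bytes and device registers (helper names are mine):

```c
#include <stdint.h>

/* One hex digit is one nibble, so a byte splits into exactly two
 * hex digits with a shift and a mask. */
uint8_t hi_nibble(uint8_t b) { return (uint8_t)(b >> 4); }
uint8_t lo_nibble(uint8_t b) { return (uint8_t)(b & 0x0F); }

/* Device status registers are poked the same way, e.g. testing a
 * single status bit: */
int bit_set(uint8_t reg, int bit) { return (reg >> bit) & 1; }
```

So 0xAB splits into nibbles 0xA and 0xB, and a register value of 0x20 has exactly bit 5 set.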
Depends on what kind of applications you write in assembly (or even C), I guess.
Also see comment by khedoros1.
I knew about Randall Hyde's book from a while ago, and some time later had seen Paul Carter's book site. I reviewed it briefly and it did look good. Planning to work on x86 assembly language as a hobby, at some point when I have some more free time, using those two books. I had only done some 6502 assembly programming earlier on home computers (Commodore 64 and BBC Micro, mainly), but liked it. Apart from the basic programming in assembly, trying to optimize the code using various tricks, whether for speed or size, using bits, flags, alternative instructions or addressing modes or other techniques, is fun too. Of course it can get more frustrating than high-level languages when you need to debug the errors ... but still worth it, IMO, and even the errors are a good way to improve your deduction / debugging / programming skills.
Learning assembly and creating my own simulated processor using multiple layers of logic gates was one of the main cornerstones of my college education. I remember when things finally "clicked" and I finally understood how computers worked on a fundamental level, I was positively giddy for days.
Unfortunately, our technology has become increasingly complex. For a while I tried my hand at learning more about modern processors, but found it far too overwhelming. Maybe I just didn't know where to look for more approachable resources. Just taking a look at the reference manuals provided by Intel is enough to make anyone lose hope. I seriously doubt any single human is capable of reading and understanding that in any reasonable amount of time.
If you wanna go down into the metal, consider picking up a microcontroller and writing your own firmware. You can ditch the OS and start squeezing every ounce of power from the hardware. It's pretty crazy how little power a well configured microcontroller can get by with.
I'd also suggest learning WebAssembly! The text format is quite pleasant, and it does away with many of the issues you'd have to deal with when using real hardware. Even better, the spec is actually quite readable for regular developers. That might let people get their feet a bit wet without having to go into the deep end of the pool right away.
I worked through the entire textbook by myself, doing all the projects, and having no previous knowledge of computer architecture or compilers. It was a challenging but smooth process. The difficulty level was just right and I didn't get frustrated.
The computer you build is designed brilliantly. It's as simple as it can be while still being a real computer that can play video games.
The second class is the one I'm taking right now on x86 assembly, and it's cool but definitely not as fun as building a computer from scratch.
If you haven't watched Ben Eater's videos, I highly recommend them.
I recently decided to code some stuff again because I wanted to write some articles for the retro section of a magazine, and it was so pleasant that I coded some more and documented it (here for those interested, in French at this time, but there are pictures: http://www.stashofcode.fr/category/retrocoding/).
I hope people still have the opportunity to have a look at assembly language at school. The article is interesting. It also reminds me that I always heard that coding in assembly language was hard. Well, I would not have said that back in 1996. x86 was a pain in the ass because of the lack of registers and the way memory was managed, but the Motorola 68x00 was very simple indeed. IMO, what was a bit difficult was that if you wanted to code assembly, you had to learn how the hardware around the CPU worked.
And anyway I have this cool Apple II video codec to show for my efforts :)
Assume that glibc for your architecture is going to be hand-tuned (and exceptionally highly optimized).
Read and try to understand the native mem*() and str*() routines, as the folks who write these routines DEEPLY understand the micro-architecture and instruction set.
A counter example: all implementations of strlen() which evaluate a single character at a time are completely inept if the architecture has vector comparison capabilities.
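For a feel of the gap, here's a sketch of the classic word-at-a-time approach (not glibc's actual code, which uses vector instructions and careful alignment; the type-punned load below is the usual practical shortcut real implementations take):

```c
#include <stdint.h>
#include <stddef.h>

/* Sketch of a faster strlen: scan 8 bytes at a time, using the classic
 * (v - 0x01..01) & ~v & 0x80..80 trick to detect a zero byte in a word.
 * Real glibc versions go further with SIMD compares. */
size_t strlen_wordwise(const char *s) {
    const char *p = s;
    /* Walk byte-by-byte up to 8-byte alignment. */
    while ((uintptr_t)p % 8 != 0) {
        if (*p == '\0') return (size_t)(p - s);
        p++;
    }
    const uint64_t *w = (const uint64_t *)p;
    for (;;) {
        uint64_t v = *w;        /* aligned 8-byte load */
        if ((v - 0x0101010101010101ULL) & ~v & 0x8080808080808080ULL)
            break;              /* some byte in this word is zero */
        w++;
    }
    p = (const char *)w;
    while (*p != '\0') p++;     /* pin down the exact zero byte */
    return (size_t)(p - s);
}
```

One compare per 8 bytes instead of per byte; the SIMD versions widen that to 16, 32, or 64 bytes per compare.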
It's a lot easier to read that than the raw output from gcc. The highlighting between matching segments is nice, and the right-click menu has all sorts of goodies, like scrolling everything to the same place or showing the manual entry for some instruction. Plus, it's super easy to see the effect of switching flags or compilers.
Though to your point, apparently godbolt v0.0.1 was basically just:
watch "g++ /tmp/test.cc -O2 -c -S -o - -masm=intel | c++filt | grep -vE '\s+\.'"
I figured out how to write some basic stuff, then I learned to use perf to performance tune. Being able to read a little assembly was a revolution for me.
I also chuckled about how C is non-portable; omg C++ is so much worse. In some sense, x86 asm is more portable than just about anything now, sure you can't run it on ARM, but your C++ code won't compile on ARM either and you'll be in #ifdef hell for two weeks to get it there, at which point you'll give up.
The second one felt like "what's the point? I'd rather learn C".
On a related note, I learned first on an 8080 but with a different naming and syntax than the standard intel ASM. Maybe it's because that's what I learned first, but it still seems better to this day.
Note that I'm a heavy user of intrinsics and SIMD, so I'm writing a lot of code where I've pretty much already done instruction selection, often at algorithm design time.
Far as non-embedded, his company does:
Past that, it's still good for leveraging hardware features directly, highly-optimized code, and in high-assurance to be sure binary matches source. For latter, you can verify compiler output or just write it by hand to be verifiable.
There are even so-called typed assembly languages that are safer than C. TALx86 and CoqASM that build on x86 come to mind. SIFTAL addresses non-interference that combats things like side channels on top of obvious stuff.
I think assembly has only gotten less popular in a relative sense. Sure, there are now many jobs doing high-level stuff, but the low-level stuff hasn't gone away. It really can't go away, because all that high-level stuff is built upon it.
Otherwise I don't think assembly is that difficult to learn or code.