This is pretty damn cool, but unfortunately you failed the interview as you accepted the challenge to use x86 assembler, but solved the problem using a different programming language from the one we asked you to use. We'll keep your resume on file, and if there are any openings in the future we encourage you to apply for those.
For those who don't know, that's how big and little endian got their names: the debate was considered just as frivolous. It's a reference to Jonathan Swift's Gulliver's Travels, in which an island people is split over which end of a boiled egg you should crack. (I'm a big-endian, for example.)
Try to find (or even better, write) a big-endian arbitrary-precision arithmetic implementation if you're convinced that the difference is frivolous. You'll easily see why one is sometimes called "logical endian" and the other "backwards endian".
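To make that concrete, here is a minimal sketch (the name add_limbs and the layout are made up for illustration, not taken from any real library) of why little-endian limb order is the natural choice for arbitrary-precision arithmetic: the carry travels in the same direction as the array index, so addition is a single forward pass.

#include <stdint.h>
#include <stddef.h>

/* Hypothetical bignum limbs, stored least-significant limb first
   ("little endian" limb order). Returns the final carry out. */
static uint64_t add_limbs(uint64_t *dst, const uint64_t *a,
                          const uint64_t *b, size_t n) {
    uint64_t carry = 0;
    for (size_t i = 0; i < n; i++) {
        uint64_t s = a[i] + carry;
        carry = (s < carry);      /* overflow of a[i] + carry */
        s += b[i];
        carry += (s < b[i]);      /* overflow of s + b[i] */
        dst[i] = s;
    }
    /* With most-significant-first ("big endian") limbs you'd have to walk
       backwards, or shuffle everything whenever the result grows a limb. */
    return carry;
}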
I used to be big endian. Then I changed my mind. Have a look at the following:
123
12
1
Here are three numbers. If we read them from left to right, they all begin with 1, but we can't tell what that 1 means: in one case it means 100, in one it means 10, and in one it means 1. We need to parse the entire number to know what the first digit means. If we parse from right to left, each position always means the same thing: the first digit is how many ones, the second how many tens, and so on.
So it makes sense to store the smallest part first. In a little-endian architecture, if I want to read an 8-, 16-, 32- or 64-bit word from the same address, each byte always means the same thing, as long as the wider types are padded out with zeros. So little endian is right, and Arabic numerals are wrong.
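A quick way to see that property (just an illustrative C snippet, nothing from the article): read the same address at several widths and compare.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    uint64_t x = 42;   /* small value: the upper bytes are the zero padding */
    uint8_t  b8;
    uint16_t b16;
    uint32_t b32;

    /* Read the same starting address at different widths
       (memcpy avoids strict-aliasing problems). */
    memcpy(&b8,  &x, sizeof b8);
    memcpy(&b16, &x, sizeof b16);
    memcpy(&b32, &x, sizeof b32);

    /* On a little-endian machine this prints "42 42 42 42": the lowest
       address always holds the least significant byte, whatever the width.
       On a big-endian machine the narrow reads would see 0 instead. */
    printf("%u %u %u %llu\n",
           (unsigned)b8, (unsigned)b16, (unsigned)b32,
           (unsigned long long)x);
    return 0;
}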
Indic scripts are LTR, and the numerals stayed big-endian when they were borrowed by medieval Islamic mathematicians. Numerals in modern spoken Arabic are big-endian (mostly, with an exception for the teens that also exists in English).
Watch as every pitchfork gets pointed at you when you talk about middle endian.
Which is a real thing. There are systems that would store, say, a 32-bit word as two 16-bit words that were big endian relative to each other, but little endian within the 16-bit word.
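For the curious, a little C sketch of the layout being described (the helper name is made up; this mimics the PDP-11-style ordering, it isn't any particular API):

#include <stdint.h>
#include <stdio.h>

/* Store a 32-bit value as two 16-bit halves: the halves in big-endian
   order relative to each other, each half little-endian internally. */
static void store_middle_endian(uint8_t out[4], uint32_t v) {
    uint16_t hi = (uint16_t)(v >> 16);
    uint16_t lo = (uint16_t)(v & 0xFFFF);
    out[0] = (uint8_t)(hi & 0xFF);   /* low byte of the high half  */
    out[1] = (uint8_t)(hi >> 8);     /* high byte of the high half */
    out[2] = (uint8_t)(lo & 0xFF);   /* low byte of the low half   */
    out[3] = (uint8_t)(lo >> 8);     /* high byte of the low half  */
}

int main(void) {
    uint8_t b[4];
    store_middle_endian(b, 0x0A0B0C0Du);
    /* Prints "0B 0A 0D 0C" -- neither big nor little endian. */
    printf("%02X %02X %02X %02X\n",
           (unsigned)b[0], (unsigned)b[1], (unsigned)b[2], (unsigned)b[3]);
    return 0;
}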
Except now that everything depends on the internet, and words that go over networks are big endian, it seems insane to throw away millions and millions of CPU cycles every year converting them to little endian so our little-endian CPUs can process them. Sure, it's a single CPU instruction, but across every computer in the world, almost all of them little-endian ARM or Intel, that's billions and billions and billions of instructions wasted.
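For reference, this is the conversion in question, sketched in C (assuming a POSIX-ish system for htonl/ntohl; on little-endian x86 and ARM each call typically compiles down to a single byte-swap instruction):

#include <arpa/inet.h>   /* htonl / ntohl */
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint32_t host_value = 0x0A0B0C0Du;
    uint32_t wire_value = htonl(host_value);  /* to big-endian "network order" */
    uint32_t back       = ntohl(wire_value);  /* back to host order */

    /* On a little-endian machine each call is typically one byte-swap
       (bswap/rev); on a big-endian machine both are no-ops. */
    printf("0x%08X -> 0x%08X -> 0x%08X\n",
           (unsigned)host_value, (unsigned)wire_value, (unsigned)back);
    return 0;
}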
Assume RAX points to the root node and nodes just contain two child pointers and everything is aligned and whatever.
invert:
    cmp rax, 0          ; empty subtree? nothing to do
    jnz swap
    ret
swap:
    push rax            ; save the current node
    mov rax, [rax]      ; rax = left child
    call invert         ; invert the left subtree (rax is preserved)
    mov rbx, [rsp]      ; rbx = the current node again
    xchg rax, [rbx+8]   ; right slot = inverted left child, rax = old right child
    mov [rbx], rax      ; left slot = old right child
    call invert         ; invert what is now the left subtree
    pop rax             ; restore and return the node pointer
    ret
The last time I used an assembler was before x86-64 was invented; I am not even sure I ever used one in protected mode. But that seems like a totally reasonable whiteboard interview question. Written in Notepad, might not assemble. It might even be totally incorrect, and I am posting it so that the internet generates the warning and error messages.
EDIT: After reading the article now, that seems rather inefficient to me, to use local variables on the stack for everything. And why is the function returning a node if it is mutating the tree in place?
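For anyone who doesn't read assembly, here is roughly what the routine above does, as a C sketch (the struct layout follows the stated assumption of two child pointers at offsets 0 and 8; the asm happens to invert the left subtree before swapping, which amounts to the same thing):

#include <stddef.h>

struct node {
    struct node *left;   /* offset 0 in the asm ([rax])   */
    struct node *right;  /* offset 8 in the asm ([rax+8]) */
};

/* Swap the children of every node, in place. Returns the (unchanged)
   root pointer, mirroring how the asm leaves the node in RAX. */
struct node *invert(struct node *n) {
    if (n == NULL)
        return NULL;
    struct node *old_left = n->left;
    n->left  = invert(n->right);
    n->right = invert(old_left);
    return n;
}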
Variables on the stack are the most efficient after registers, so you are right that a local variable should be kept in a register if possible, otherwise on the stack, and only then somewhere else (e.g. if it is too large for the stack).
However, when writing in assembly one must keep in mind that at least RBX, RBP and R12 through R15 must be preserved by any function (on Windows, RDI and RSI must also be preserved).
So in your code you should not use RBX, but a volatile register, e.g. RDX or RCX. If you insist on using RBX, it has to be saved and restored.
> However, when writing in assembly one must keep in mind that at least RBX, RBP and R12 through R15 must be preserved by any function
Only if you're calling external code that assumes that. The power of Asm largely comes from not needing to follow arbitrary conventions in your own code. The boundaries where you interface to external code are the only constraints.
The top of the stack should generally already be present in the cache, so stack memory will be faster than heap memory, where objects aren't necessarily as close together or accessed as frequently.
I am not familiar with any other reason besides the fact that stack-pointer arithmetic is done by dedicated hardware, called the stack engine, which sits in the CPU frontend. This effectively means that stack-pointer uOps get executed quicker because they do not have to go all the way through the CPU backend. This also saves some of the bandwidth on the CPU backend side, allowing other uOps to be processed sooner.
I know none of the calling conventions in any detail anymore and just used the registers in alphabetical order. Totally expected that this would violate something.
This doesn't actually invert a Merkle tree though, since you have to recompute the hashes (except the leaf hashes) when you invert a Merkle tree. Gonna be a no-hire evaluation from me dawg.
The tweet this is based on is a joke. To invert a Merkle tree would mean to invert cause and effect. I’m pretty sure the tweet author is implying they want you to find a hash key collision for each node. Hope you have a couple spare universes in your pockets because this is gonna take a while.
Inverting a binary tree is easy; express the tree as a matrix (its Laplacian), invert the matrix, then convert that back to a tree. What the canonical question is asking is not inversion.
Since too many people were memorizing inversion, I switched to asking how to evert a binary tree. This leads naturally into a discussion of the 1:1 relationship between complex numbers and cohomology sets; I figure if somebody can get that right, they can be a junior programmer on the team.
I don't get why this one is the meme. Just because it's recursion? Because it's (nearly) pointless? There are so many other algorithms I find more difficult/more tedious.
That solution is terrible: the algorithm requires O(tree_height) space (the optimal one involves temporarily using the left/right pointers as a parent pointer so that you only need constant space), and it lacks any sort of assembly optimization, ending up worse than what a compiler would produce (e.g. it's a real mystery how the author managed to decide that local_right should be spilled on the stack).
Definitely not what you want to submit to someone testing your programming skills.
You're alluding to the Morris traversal algorithm, which can traverse a binary tree in O(1) space, but Morris traversal is actually much, much slower than using a stack, especially as the stack is used by this algorithm. Doing a Morris traversal requires at a minimum twice the number of operations as using a stack, and due to its cache-unfriendly nature it will in practice be closer to 4x slower.
You typically only use Morris traversal on exceptionally large trees, and by large I mean when working with data that lives on a disk. It's definitely the exception, not the norm.
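For reference, a minimal C sketch of the Morris in-order traversal being discussed (struct node and the visit callback are placeholders, not anything from the article): it threads the right pointer of each in-order predecessor back to the current node so no stack is needed, then undoes the threading as it goes.

struct node {
    struct node *left, *right;
};

/* Morris in-order traversal: O(1) extra space, no recursion, no stack.
   Temporarily rewires predecessor->right as a "thread" back to the
   current node, and restores it on the second visit. */
void morris_inorder(struct node *root, void (*visit)(struct node *)) {
    struct node *cur = root;
    while (cur != NULL) {
        if (cur->left == NULL) {
            visit(cur);
            cur = cur->right;
        } else {
            /* Find cur's in-order predecessor in its left subtree. */
            struct node *pred = cur->left;
            while (pred->right != NULL && pred->right != cur)
                pred = pred->right;
            if (pred->right == NULL) {
                pred->right = cur;   /* create the thread, descend left */
                cur = cur->left;
            } else {
                pred->right = NULL;  /* second time here: remove thread */
                visit(cur);
                cur = cur->right;
            }
        }
    }
}

The predecessor-finding walk revisits the edges of each left spine, which is roughly where the 2x operation count mentioned above comes from.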
I was thinking of this from the perspective of CPU pipeline pressure, but in reality it seems processors are indeed smart enough to avoid burdening the ALUs with these kinds of special cases.
The xor reg, reg is also special-cased because it's a quick way for compilers to reinitialize a register and indicate that future uses of that register don't depend on any previous operations. It helps the CPU parallelize any future instructions that use that register, since the CPU knows that those instructions don't depend on anything that happens before the xor.
It’s not special-cased because it's a quick way for compilers to reinitialize a register and indicate that future uses of that register don't depend on any previous operations; it’s special-cased to give compilers a quick way to reinitialize a register and indicate that future uses of that register don't depend on any previous operations.
For learning assembly, usually you learn the syntax for your assembler. The rest of it (the majority of it) is then learning what instructions are available on your platform.
I liked http://rayseyfarth.com/asm/ as an intro to both. I'd already had a class on computer architecture that did assembly before that, though.
Once you get going with that, you can download and read the Intel or AMD programmers manuals. Of course, this assumes x86_64.
It's not only that either. OP used rdi because of the calling convention; in 32-bit x86 you would need to pass arguments on the stack instead and reference them relative to ebp ([ebp+8] for the first argument) instead of using registers, if you want non-asm code to be able to call your function.
Being able to read and understand x86-64 assembly (or PTX/SASS for an Nvidia GPU) is much more important than being able to write it. In practice, even when you're writing assembly, you're looking at reference assembly generated by a compiler from C code you wrote.
Similarly, the reality is that as a professional programmer you spend no time doing work like leetcode.
Instead, you spend a lot of time understanding and slightly modifying (fixing or enhancing/extending) code.
With the rise of language model code completion systems (e.g., Microsoft Copilot) even more time will be spent inspecting and understanding code to find problems.
With these facts in mind, I have been building a new form of leetcode: puzzles where you find the bug in an implementation of an interesting algorithm instead of writing the algorithm from scratch.
Most puzzles are interesting algorithms that you will learn useful techniques from, so it's never a waste of time to think about them. And even though the bugs are all quite trivial, I can see it's very challenging for many people.
It's about half-way ready to launch, needing 30 more puzzles. I am working my way through Knuth's The Art of Computer Programming, Volume 4B, and today I'll see if Algorithm X (Dancing Links exact cover backtracking) can be made to fit for Bugs 38 and 39 (or whether it's too complicated).