Instead of each node storing the next- and previous-pointers separately, store a single pointer which is the XOR of the two. Which is obviously an invalid pointer. But when iterating, XOR the previous node's pointer with the combined pointer to get the next node's pointer, and so on. You can iterate this way in both directions. Feels illegal. :)
One thing this loses relative to a conventional doubly linked list is the ability to remove an item while only having its address (or any other iterator pointing to it that is stable across other items being inserted or deleted). And unfortunately that’s often why you’re using a doubly linked list in the first place.
(A less intrinsic flaw is that coding an XOR linked list in strictly conformant C is exceedingly annoying. The standard does not guarantee that equal pointers turn into equal integers upon casting, so you’re forced to make everything into a uintptr_t to, essentially, maintain a canonical integer-cast version.)
> The standard does not guarantee that equal pointers turn into equal integers upon casting, so you’re forced to make everything into a uintptr_t to, essentially, maintain a canonical integer-cast version.
Of course! The standard does not guarantee that the size of an int is the same as the size of a pointer, i.e. `sizeof(int) =/= sizeof(int*)`. IIRC, this was the case on some of the PowerPCs I worked on decades ago. Now with x86-64 & AArch64 having taken over the world (and with saner 32-bit on the embedded end) we've almost forgotten the Cambrian explosion of "interesting" word-size machines.
The whole point of the C standard is that it allows implementers flexibility in the sizing of the basic data types, to match a given machine's architecture, with the standard only defining relationships between types (e.g. char <= short <= int <= long). The only surprise is that it took so long for fixed-width types to be standardized (C99).
I meant integers in general, not ints specifically. That is to say, the standard does not guarantee that, if p and q are (say) void pointers and p == q, then (uintptr_t)p == (uintptr_t)q. (Neither does it guarantee that, if p == NULL, then (uintptr_t)p == 0, but I digress.) Mere size shenanigans are not enough for things to get that bad. What you need is for there to be multiple ways to represent a pointer to the same place in memory, like on real-mode x86 in the large memory model.
The practical result is, you need to write XOR-linked-list operations something like this:
and you cannot for example make head, the sentinel node pointer, into a struct entry * instead, it has to be exposed as a uintptr_t (a canonical integer representation of that pointer).
Wow, TIL. So in the version unaware of this the list could be corrupted if the pointer you use, while correctly pointing to the head or tail of the list, happens to convert to a different bit representation for the XOR than the one you encoded into the node. Did you happen to ever see this in a real system?
I don’t have it in me right now to work out whether this can actually cause corruption, but as for equal pointers simply converting to unequal integers, sure:
$ cat mismatch.c
#include <stdio.h>
char abc[48*1024U], def[48*1024U];
int main(void) {
    void *p = &abc[sizeof abc], *q = &def[0];
    printf("%d\n", p == q); /* pointers are equal */
    printf("0x%.5lX == 0x%.5lX\n", /* linear addresses are equal */
           ((unsigned long)p >> 16 << 4) + (unsigned short)p,
           ((unsigned long)q >> 16 << 4) + (unsigned short)q);
    printf("0x%.8lX != 0x%.8lX\n", (unsigned long)p, (unsigned long)q);
    return 0;
}
$ # compile&link for DOS, 8086, huge memory model, create map file
$ wcl -bcl=dos -0 -mh -fm mismatch.c
Open Watcom C/C++16 Compile and Link Utility Version 1.9
[snip]
creating a DOS executable
$ egrep '_abc|_def' mismatch.map
0371:0000+ _abc
0f71:0000+ _def
$ emu2 mismatch.exe
1
0x10080 == 0x10080
0x0408C000 != 0x10080000
(I said large memory model earlier. That was incorrect: if you compile for the large memory model, with -ml, the first line of the output will be 0, because then pointer comparisons will not canonicalize pointers. You need the huge memory model for that. Both 0 and 1 are OK according to the standard, as it does not guarantee anything about comparing a pointer one element past the end of one object to a pointer to another object.)
On modern 64-bit machines, sizeof(int) != sizeof(int*) is very true. But there's probably a significant amount of code that assumes it's equal to sizeof(uint64_t) or sizeof(long). :)
True, but if you always keep a pair of adjacent pointers where before you would have kept just the first, efficient deletion and insertion come back. (You could even go the whole hog and encapsulate this pair as a new WidePointer type, with operator++() and operator--() internally operating on both, etc.)
This will likely be a small constant factor slower for some operations, of course, but if you need bidi traversal and node values are small compared to pointers, it's a solid space savings -- and the lower cache utilisation resulting from that savings may even make it a speed win overall.
Storage can be further reduced if we think that, with a 64-bit processor, probably a 32-bit address space is enough for most applications (that require less than 4 GB of RAM).
Maybe we can go even deeper with 16-bit near/relative pointers. Perhaps data-oriented design fits well in this situation? With blocks of 64k elements and uint16 indices to address elements inside of them.
> with a 64-bit processor, probably a 32-bit address space is enough for most applications
This is related to the Java virtual machine's use of 32-bit "compressed ordinary object pointers (OOPs)" on a 64-bit platform. The pointer is 8-byte-aligned though, so it can address 32 GiB of memory. There is also a non-compressed-OOPs mode that can address more than 32 GiB.
I see that you're being silly, but the problem is conflating pointers and indices. 16 bit indices are fine. 16 bit pointers are terrible.
Separately, 32 bit pointers are a good optimization in many situations. Java will save lots of space by using 32 bit pointers until you go above 32GB or 64GB or more (depending on what you set as minimum object alignment).
A relative pointer with limited range is something that should rarely exist, and all too easily it can end up with a base that is no longer guaranteed to be adjacent. There's a lot of bonus landmines compared to just having a 64k element limit.
So essentially, you are using indices rather than pointers.
Indices have advantages: they can be more compact, and they are position-independent. But you need a base pointer, and a bit more computation. It also requires you to have your own allocator, as your typical heap allocator will just give you an arbitrary pointer to your object.
Re: data-oriented data structures... random (not XOR)...
I have been running multiple iterations of a port scan over the entire IPv4 space. As of now, only scanning the top 25 used ports (nmap and masscan have top lists). I wanted to be able to do simple data analysis _very_ efficiently time-wise, being OK with sacrificing memory for that.
If you want (time-wise) O(1) item insert (into SORTED btw), fetch, lookup, and port status check to simply be a matter of bitshifting (similarly with counting), then:
1. Bitfield array to store status of up to 32 ports (1 bit for state (open/closed) => 32 bit bitfield)
2. ...that's it. Each scan result is to be found at
`bitfields[(unsigned int) ipv4_address]`
In C:
```
// bitfield for port status, for each IP
struct port_field {
    bool p1:1;
    bool p2:1;
    bool p3:1;
    bool p4:1;
    bool p5:1;
    // in C, gotta write it out - of course we could use a macro to generate this
    ...
    bool p32:1;
};
```
This will use 16 GiB of memory for the whole mapped space:

```
in_addr_t u32_ip; // unsigned 32-bit int
struct port_field *p_target_bitfield;
int *p_target_int;

// ... to insert:
if (!(u32_ip = inet_addr(token))) { // `token` is a string holding the actual IP (e.g. from a text file)
    printf("line %lu: IPv4 address not valid: %s\n", line_count, token);
} else {
    p_target_bitfield = &(ip_space[u32_ip]); // unsigned int ipv4 as 'index'
    p_target_int = (int *) ((void *) p_target_bitfield); // cast bitfield* to int*
    // set bit at port_index:
    *p_target_int = (1 << port_index) | *p_target_int;
    // now the port identified by `port_index` can be read back as
    // `(*p_target_int >> port_index) & 1`, where `p_target_int` is a
    // pointer to the port status bitfield cast into a signed int32
}
```
It works - pretty nifty :) I'm sure I could make it much prettier, tho.
But a kind of 'columnar-bitfieldish' in-memory O(1) for everything :)
Hm I guess you're right - I'm misusing the term. And yeah! Will experiment with it; what's neat is that it's not that much code to mess around with these basic notions...
Re: 224 << 24 - you're right; so many unusable actual addresses. It's just kind of neat to actually map out whole ipv4 space to memory. But yes lots of it unneeded, I'll see if I can add minimum-computation-possible mapping translation so that everything still stays ~kind of O(1).
Thank you for your comments! :)
edit P.S. Re: 25 x 512 MiB arrays - actually, thank you; I thought of doing something like that at first, but now forget why I didn't start experimenting with that sort-of-actual-columnar-store from the beginning. Anyway, it's nice to quickly mess around with multiple base data layouts (I'll try that one next, I think) - would recommend it to anyone wanting to attain base knowledge on e.g. data layouts for data analysis...
> Instead of each node storing the next- and previous-pointers separately, store a single pointer which is the XOR of the two. Which is obviously an invalid pointer. But when iterating, XOR the previous node's pointer with the combined pointer to get the next node's pointer, and so on. You can iterate this way in both directions. Feels illegal. :)
Meh, it's not a pointer, it's not really any different from storing the difference between the two pointers, which obviously would let you iterate in both directions.
Yes, but as mentioned in TFA, storing the difference would require an extra bit. The difference between two 32-bit numbers is in the range [-(2^32 - 1), 2^32 - 1], needing 33 bits to store. The XOR is the same size as the original pointer, simplifying data flow.
But even so, storing a doubly linked list with only pointer differences and no absolute pointers (other than head and tail) feels illegal too
> storing the difference [of prev and next pointers] would require an extra [sign] bit [relative to XOR]
No it wouldn’t. Just let wraparound happen normally and things will work out.
Effectively what you need are functions
pair : ptr × ptr → uintptr
left : uintptr × ptr → ptr
right : ptr × uintptr → ptr
left(pair(x, y), y) ≡ x
right(x, pair(x, y)) ≡ y
Setting all three to XOR is one possibility. But
pair(x, y) = (x + y) mod 2^(width of ptr)
left(p, y) = (p - y) mod 2^(width of ptr)
right(x, p) = (p - x) mod 2^(width of ptr)
is an equally valid one, because addition and subtraction modulo any fixed value are related in exactly the same way as normal addition and subtraction.
Uh, no. Assuming 32-bit pointers, you'd add or subtract them using normal unsigned arithmetic which is modulo 2^32. Essentially the same trick definitely works because all these operations (adding/subtracting/xoring a number) are invertible.
The neat thing about xoring with a number is simply that that operation is its own inverse.
One difference between XOR and difference is that because of the symmetry of XOR, you could use the same code for walking the list forwards and backwards.
But you forgot! It's also a 3-wise independent linear hashing function! Which means it can be used for probabilistically approximately uniform sampling and counting of solutions to boolean functions. This is super-duper useful. We use it to build counters that give probabilistic, but proven, counts. I explained the idea here in more understandable terms [1].
Basically, it halves the solution space approximately correctly each time. So you keep on adding them until you have, say, 10 solutions. Then you multiply the 10 by 2^k, where k is the number of XORs you added. That's it! So cool, no? And it's super-scalable, because it halves it each time, so you'll get to, say, 10 pretty quick!
Some research papers are here [2,3]. I work on this, the tools are here [4,5]. In the last model counting competition, it dominated all other competitors, when combined with an exact counter, slides of the competition here [6].
One of my favorite XOR stories is from Bryan Cantrill (of Oxide, Joyent, and Sun) in this presentation [0] and this video [1].
To avoid clicking a link: When he was at Sun, he was chatting with a coworker (Roger Faulkner) about the lack of logical XOR in C. Faulkner said it was because you couldn't short circuit it, and Brian thought that was wild. Then Roger emailed Dennis Ritchie to ask and he confirmed it was what Faulkner had said.
That story gets me every time! First of all, it's very funny and well delivered by Cantrill, but it's also just so incredible that they could ask the man himself.
C does have a logical XOR, it's the `!=` operator. Unlike other logical operators, it requires the arguments to be normalized to a single truth value. It plays nicely with C's convert-to-boolean operator `!!`.
DMR was remarkably kind, helpful and accessible. As an undergrad in the mid 80s I read about 'the first' port of Unix (v6) to something other than a PDP-11, the Interdata 8/32. On a whim, I sent an email to dmr@research.att.com (might have even been back as far as research!dmr or dmr@alice.UUCP) asking if there was more info about the architecture as 1) no google and 2) nothing in the uni library. A couple of days later he responded and asked for my (physical) address. A few weeks later, a copy of the instruction set summary manual showed up in my (physical) mailbox. It's IBM 360-ish. Still have it.
I strongly dislike the ubiquitous use of the name XOR a.k.a. "exclusive or" for this logical function, because almost always what is meant is "sum modulo 2" a.k.a. "parity", and not "exclusive or".
"Sum modulo 2" a.k.a. "parity" and "exclusive or" are 2 distinct logical functions, which happen to coincide only for the case of 2 input operands (because there exists only a single odd number less than or equal to 2).
For 3 or more input operands, what most people call XOR is actually parity, i.e. the logical function whose value is "1" when an odd number of input operands are "1".
For 3 or more input operands, "exclusive or" is the logical function whose value is "1" only if there exists only a single input operand whose value is "1", while all the other input operands are "0".
For computer hardware, parity is a much more important logical function than "exclusive or" (mainly because addition modulo 2 is used as a building block for implementing addition modulo greater numbers).
On the other hand, in mathematics "exclusive or" is a much more important logical function than parity.
For example, the quantifiers that express that some predicate is true for some elements of a set, for all elements of a set, or for a unique element of a set (like saying that an equation has solutions, or it has no solutions, or it has a unique solution), are based on the logical functions "or", "and" and "exclusive or".
In natural language, "or" always means either "inclusive or" or "exclusive or". It never means parity, i.e. what many programmers call "exclusive or" a.k.a. XOR.
While in programming the "exclusive or" logical function is a function whose computation is seldom needed, it is very frequently used for describing the behavior of a program, e.g. when saying that in a select/case/switch compound statement of some programming language either the 1st statement or the 2nd statement or the 3rd statement or ..., is executed, or when describing the types that the current value of a union/sum typed variable can have.
IEC 60617, the standard for electrotechnical symbols, gets that one right - the XOR gate gets labeled with =1, the parity gate with 2k + 1. But when you use circuit design software for PCBs or FPGAs you can still sometimes get bitten because you are getting something different than what you expected.
> For 3 or more input operands, "exclusive or" is the logical function whose value is "1" only if there exists only a single input operand whose value is "1", while all the other input operands are "0".
One could easily find quotations in plenty of logic and grammar texts from hundreds of years or even millennia ago, wherever the meaning of the various words used for "or" is discussed and the distinction between words expressing "inclusive or" and words expressing "exclusive or" is drawn.
However, there is no need for quotations if you know English. English does not have distinct words for "inclusive or" and "exclusive or", but the context usually allows one to differentiate between the 2 meanings, and "or" never means parity. Without enough context, "or" more frequently means "exclusive or", which is why people sometimes feel the need to say "and/or" instead of "or", to clearly signal that they mean "inclusive or".
When someone says to you: "This belongs either to Alice or to Bob or to Charlie", do you consider that this sentence would be true if you know that "This belongs to Alice and to Bob and to Charlie" is a true sentence?
That implication would be correct if English "or", when used with the meaning of "exclusive or", meant the same as what "exclusive or" = XOR means for programmers.
Moreover, even if you accepted that implication, would you be able to claim that in this case your understanding is compatible with "or" having been used to mean "exclusive or" in that sentence? If you believed that implication to be true, what would "inclusive or" mean for you?
I am too lazy to give longer examples, but if you did not understand what "exclusive or" means in a natural language and instead thought it meant the same as in programming, then when someone uttered a compound of, say, 6 sentences connected by "exclusive or", you would consider the compound true when any 3 or any 5 of the component sentences were true. That is not the intended meaning of "exclusive or", which is that only one of the component sentences is true. Parity is neither "inclusive or" nor "exclusive or": when "or" is taken to be "inclusive or", any even subset of component sentences may also be true, not only the odd subsets, as for parity.
Mathematics isn't English. You don't wear the ring of integers on your finger, and you can't play football on the field of real numbers.
You're the first person I ever see that pretends to be confused by what XOR means. It means what it means, what is this semantic debate going to do for you?
My handy real-world analogy for XOR is the light over a staircase in a home. There's a switch at the bottom, and another switch at the top, and both control the same light. Initially, they're both in the off position. You set the bottom switch, and the light turns on. You climb the stairs, set the top switch, and the light turns off although both switches are now in the "on" position. As long as one switch is in the "on" position and one switch in the "off" position, the light is on; otherwise, it's off.
Huh, maybe my electrician wired it up wrong in my office then. I’ve got two switches in the room but come to think of it they perform more like an AND gate than an XOR. In the living room there are two switches and those are definitely like an XOR.
The XOR light switch is a trick. And, even if you know it is possible, it is hard to figure it out without someone telling you. My uncle and cousin were doing their own electrical work and couldn’t figure it out.
I had seen the trick as a young kid and remembered it 30 years later: install one of the switches backwards.
One switch takes in power and puts it on one of two wires running to the second switch. The second switch connects one of the two wires to the power wire going to the bulb.
If you don’t know the trick, you get an AND switch.
Ha. TIL that if you XOR the car emoji with 0x20 (i.e. you make it "lower case"), you get the "no pedestrians" emoji. It probably won't encode below, but this was copied and pasted from TFA. It seems too coincidental to be an accident - does anyone have the insight to know whether this was done on purpose?
If you take this too far, you might get strange ideas, like the lower-case version of the car emoji being a ‘no pedestrians’ sign:
There's also the [kademlia distributed hash table](https://pdos.csail.mit.edu/~petar/papers/maymounkov-kademlia...). The high level idea is that each node gets a random bit in the range [0, 2^m), and we define the distance to be XOR. We want to find a distributed algorithm to quickly get some information from X to Y, without knowing the whole network.
You can just look at the math and prove that it works, but my pet favorite visual interpretation of the algorithm is this:
Suppose you have a start node X, and it wants to find node k. Define the "X-distance tree" to be a binary tree, with leaf indices 0, 1, 2... But you put labels of X^leaf_index in each node, to represent the distance from a certain label to X. E.g. dist(x, x) = x^x = 0, so the label "X" (the original node) is put at the leftmost leaf (0).
The interval [2^i, 2^(i+1)) is some subtree of the X-distance tree. Let's say you know k's distance belongs in that interval, so you query some node Y in there as an approximate neighbor.
No matter what node Y you pick, the resulting prefix in the Y-distance tree will always be some permutation of the [2^i ,2^(i+1)) subtree we picked from the X-distance tree!
More specifically, my claim is that the sets labels_of_leaves_of_X( [2^i, 2^(i+1)) ) == labels_of_leaves_of_Y( [0, 2^i) ). (Recall that we index by distance, but the labels might be different.)
There are far more rigorous comparisons to other DHTs like Chord, both mathematically and empirically. But to me, this visual intuition tells me what the "symmetry" of Kademlia means - sort of everybody's in their own local neighborhood, their own subtrees.
Whereas Chord, even if you implement it bi-directionally (which costs 2x memory! and seems even more treacherous to implement...), can't get this level of 'isolation' in some sense. The neighborhood sliding window of size S is always shifting: per bit, there's always 2^m distinct neighborhoods. Yea, even though most neighborhoods "look the same", it's just not pretty.
Kademlia has 1 + 2 + 4 + ... + 2^(m-1) neighborhoods, and it's all neat and tidy.
If anyone is wondering, this is the same Simon Tatham as in Simon Tatham's Portable Puzzle Collection. If you're unfamiliar and ever find yourself offline and bored, they're worth checking out.
I know I burnt many hours in high school playing them.
I emailed the author to correct the section on XOR swap:
Once upon a time, when I was an aggressive young programmer, I tried to show off by replacing this swap:
void swap(int *a, int *b) {
    int c = *a;
    *a = *b;
    *b = c;
}

with xor swap:

void swap(int *a, int *b) {
    *a ^= *b;
    *b ^= *a;
    *a ^= *b;
}
and the program broke!
They’re not identical. The program was calling swap on array elements:
swap(&(array[i]), &(array[j]))
and occasionally i=j. That is, swap was being called with a==b. And that’s the difference: when you try to swap a location with itself. In that case, the first XOR, as you know, zeroed out the value in a. But that was also b. The other XORs left the zero there. So, the array element was set to zero.
The original swap called on a single location did not modify the value. So, my XOR-swap show off introduced a huge bug!
When I learned Z80 assembly to program my TI-83, every byte of machine code counted, because the whole calculator only had 24k of storage. To initialize the main accumulator register, `a`, to zero, you would do `XOR a` instead of `LD a, 0`. For the math instructions, `a` is an automatic operand, so `XOR a` XORs `a` with itself, and the entire instruction is only 1 byte. To explicitly load 0 into `a` requires the literal 0 to be represented in the opcode, so `LD a, 0` is a 2 byte instruction.
A lot of custom optimization solvers (e.g. Ising Machines) these days benchmark on XOR problems. It's a bit useless in practice because solving a bunch of XOR clauses can be done in polynomial time with Gaussian elimination, yet the solvers all show exponential scaling, but it works as a good way to gauge performance.
A second interesting implementation concerns the McEliece system. It's a public key cryptocipher from the 70s that is enjoying a renaissance these days due to being quantum-resistant. The decoding attack involves finding a solution to a set of XOR equations (again, polynomial), but where the Hamming distance is equal to some number (given, part of the public key).
Among binary quantized vectors, similarity is just popcount(XOR of the two vectors) / num_bits. Can be linearly translated to cosine similarity / distance range just multiplying and centering.
One tiny missing bit: drawing with XOR wasn't just to be able to easily recover the previous state. If you're on a black and white display and are dragging around a rectangular outline, then dragging it over a filled portion of the screen makes it invisible. Even if it's only partly over a filled image, you can't see the whole thing. With XOR, the filled portions will invert, so you can always see the extent of the whole rectangle.
Admittedly, if your screen is showing static (random noise), then you're still screwed.
The article mentions the Amiga... Despite there supposedly having been prior art, a company had got a patent [1] on drawing using XOR, which was then used to successfully sue Commodore.
The costs caused by this lawsuit have been claimed as having contributed to Commodore's downfall.
Even just hearing the moniker "xor" brings me back to the nineties, analysing viruses that sometimes "encrypted" – or at least obfuscated – themselves using a `xor` operation, or zeroing out a register in assembly code. A lot of nostalgic feelings and personal history for such a small logical operator...
What is the most efficient algorithm to generate an N x N array of the integers 0,1,2, … such that each entry is the smallest such integer not appearing either above in the same column, or to the left in the same row?
Posting it in this thread is a bit of a spoiler :)
XOR is also the basis-element multiplication in Clifford and Cayley-Dickson algebras: the product of two basis elements is, up to sign, the basis element whose index is the XOR of theirs. The way these algebras differ, and what gives them their individual character, is all in the sign calculation.
- "these two databases have different SQL dialects"
- "did we miss a few rows due to poor transaction-isolation when trying to query recently changed rows on the upstream database"
- "is there some checksum of a region of cells that accepts any arrangement of rows and columns that doesn't require me to think about ordering?"
- "is there a subtle data format conversion problem silently mangling my data?"
...I've been toying with trying to find a way to serialize everything consistently into something that can be XOR'd, then compare the output of XOR for two tables in two different databases that should be identical, without having to do some giant order-by comparison. Ideally, if necessary, this could be done by eye with two query windows pointing at two different databases...
Basically, Datafold's datadiff, but in a way that could plausibly be home-rolled for on-premise applications (especially quick spot-checks), and not be a total maintenance nightmare...
An algorithm I've heard of in multiplayer game programming, in which massive state must be transmitted to many clients ... xor the previous game state with the new game state on the server ... most of the result will be zeros because most of the new state is the same as the old state. Then compress it. All those zeros compress really really well which is great for network traffic. Then transmit the compressed xor'ed state, client decompresses and xors with current state to get new state. Don't remember where I first heard of this but it was probably 15 or 20 years ago.
Really clever. Never tried to implement such a thing though. Sounds fiddly and hard to debug.
I use this to remember what it does and also like thinking of XOR as an operation you can apply with a mask to flip bits (the linked article mentions this).
Every now and again one of these comes up in a PR and I hard reject it every time. It's clever. It's neat. It's elegant. And it takes a giant-ass white paper to explain it to people. Please please please don't use stuff like this when other people have to read your code.
Sorry but in what universe is xor too advanced for a programmer (someone who works with computers and numbers for a living) to understand? 99% of the article is just fun trivia anyway.
If a deep dive like this scares you, then the + operator should leave you absolutely terrified, just look at the length of this article: https://en.wikipedia.org/wiki/Addition
A) don’t be rude please.
B) yes. I get how it works. It’s not too advanced for me. It’s too clever. There’s a difference. In my experience something more readable and verbose would do in nearly every case I’ve ever seen someone try to use it.
I get your point. Inevitably someone sees “oh I can swap two values with 3x xor” and puts it in a real codebase. Write readable code. We probably don’t need to save the single word of memory in 2025.
Someone at work learned what a bitmap was and now uses them in dozens of golang structs as part of API bindings to know whether a field is set or not. It’s insane. Cool lower level tricks are neat but let’s not invent things that are difficult to understand or work with.
I agree for most things, not sure about the bitmap though. I'm not a golang user, but from what I know it doesn't have operator overloading, so you can't make a more elegant solution with proper types. Considering this, bitmaps are fairly readable and let you do things like easy comparisons without writing tons of && or ||.
As long as they are declared as enums or similar, I don't see the problem (IIRC the iota keyword lets you do something like FLAG = 1 << iota;).
Most compilers will compile both the three-xor version and the three-variable version to the same machine code anyway, since modern CPUs have swap instructions.
Understanding basic boolean logic isn't just fundamental to programming, it's a particularly easy component that's usually taught very early, for both reasons.
It takes one table to explain XOR and it's the first one on the page. In ASCII:
a | b | a XOR b
--+---+---------
0 | 0 | 0
0 | 1 | 1
1 | 0 | 1
1 | 1 | 0
If a "programmer" seeing that still can't read code using a XOR, I'd fire them.
If anyone's inner CSS eye is twitching at the centered figures and captions but not centered text, open DevTools (F12 most of the time) and fix it with the following CSS (click the + on the right in Chrome/Brave where the CSS is to add new rules)