Any published authors here? I once went through a cheaper international edition of a large computer science text and built a massive errata list of all the mistakes (primarily found in the exercises), then submitted it to the authors, who basically told me 'Thanks, but we don't give a shit about that version, only the US edition'. Any reason there would be that many mistakes in the international version? Does Pearson just use incompetent printers for these sell-at-cost books? It's not trivial errata; I found 20+ major mistakes per chapter.
Edit: My version is "an authorized adaption, global edition" overseen by two Malaysian professors and printed in Malaysia by Pearson Global. I'm not sure what adaption means in the printing industry.
Edit2: One of the authors acknowledges the terrible number of errors on their personal page https://www.amazon.com/gp/customer-reviews/RK8QVG9TMSUIF/ref... which I just discovered and did not know about at the time I read the book (I bought this so I could understand MIXAL in TAOCP, Vol 1).
International Editions often differ in the problems at the ends of the chapters in order to protect US and other western markets. Differences in the problems discourage students from buying versions that may lead to incorrect answers to assigned problem sets.
As for things just being flat out wrong, I can only imagine this discourages US/western students from buying International Editions to save money.
I bought an international edition of a text in college because it was priced significantly cheaper. And indeed, it had all the same problems, but they had been re-arranged, so when the professor assigned problems "5-15", I needed to know the mapping from the US 5-15 to the international version's numbers. Which, by quickly thumbing through another student's copy, I could generate pretty rapidly, but boy was that annoying. (I might have reconsidered purchasing the international edition had this been disclosed at the time of sale…)
We had a professor who kept mappings for the 2-3 previous versions and basically allowed you to buy the used book on Amazon for about $12 compared to $200 for the new one.
It's these little things that make your professors stand out.
> It's these little things that make your professors stand out
The calculus text at my university was a “custom” edition that came unbound and shrink-wrapped and still cost as much as a regular new text. My calculus IV professor said many times over the first week: “Do not take your copy down to the document shop on Jackson avenue and run off copies of the relevant chapters for your friends, because that would be illegal. I say again, I cannot advise you take your copy down to the document shop on Jackson Avenue next to Pizza Hut and run off copies of the relevant chapters for your friends because that is IP theft and illegal.”
Fantastic professor; I really enjoyed that semester because of him.
I was not a CS major but I had a number of friends that were CS majors and one of the courses used a book written by the professor, who is also the department head. The book cost like $200-$300. It was a shitty rewrite of the Sun Java documentation with a few added exercises and some reworded bits.
Fucking racket. Also something something cheating and plagiarism is bad unless you're the head of an academic department.
I believe edge17 means changing the exercises for the cheaper global edition, so students in western markets are forced to buy the expensive version as they may be assigned these end of chapter problem sets.
Yup, protecting in that the majority of the revenue for the publishers comes from the western markets (US/Europe).
It's the same reason companies like Apple don't sell less expensive phones in less affluent markets - if they did, the price spread would create an opportunity for arbitrageurs. Google et al., however, do create less expensive alternative versions of their products for other markets.
I bought a mechanical engineering book, and I found the rent-seeking quite clever: the international version (this one was targeted at the Indian market) did not have American units, only SI.
It's the link with money that I found interesting: $100 for the version with American units (I mean, American books have problems in both systems of units; aerospace, military, and engineering for science tend to work in metric) and $30 for the SI-only one.
I used to be a physicist, so voodoo around constants (so that mass ends up having energy units) was common, but I can't think off the top of my head of problems where imperial units are easier. There certainly are some; I'm genuinely curious which.
I published a technical book in the late 90s and the publisher (Pearson) opted to have it translated into a number of other (non-English) languages. I was never told anything about the translations. My assumption was that they used some sort of "contract translation people/service" and just "crank them out". My guess is they aren't that focused on quality. When my book was first published, technical books were much more profitable, but since then publishers have really struggled. The publisher I worked with was fantastic at the time, but since then has repeatedly stated they just want to "get books out there" regardless of the quality. And, I think the market was already tanking by the time my book was translated, so I'm pretty sure quality wasn't a priority for them. They sent me an after-the-fact sample copy of 2-3 of the languages it was translated into, but it was a total surprise to me and I never received any information about the process or that it was happening. In fact, it was so hands off that I have no idea if they translated it into more languages and never sent me a copy.
Another factor in why the authors the OP contacted may not care could be that the royalty rates were dramatically lower for international sales -- at least that was the case for my book -- I essentially made nothing. I'm not saying people write books only for profit, but "not even knowing they were translated" combined with "zero profit motive" makes a potent combination. :-)
That being said, I don't believe anyone ever contacted me with errata on a translated edition... I guess I should consider myself lucky. :-)
That book seems quite highly recommended but I remember glancing through it and even the US edition had lots of seemingly random but consistent capitalisation errors in the code examples, like someone had autocorrect/autoformat turned on throughout, which IMHO is enough to make me not recommend it --- it's already hard enough for a beginner to use correct code, nevermind code with such errors.
It worked well for my goal of satisfying the prereqs of Knuth that the reader is expected to have written a few programs already, which I presume means writing in an assembly language. Another book that could be used instead is 'Computer Organization and Design: The Hardware Software Interface (RISC-V Edition)' and MIT has lectures for it on youtube https://6004.mit.edu/web/spring19/resources/lectures should anybody else want to learn.
Editing is incredibly hard -- your brain tends to see what should be there and not what's there. Good copyeditors read syllable by syllable to break this up. (I am a tech editor, not a copy editor.)
I probably should have contacted them. The completely untrustworthy answers/exercises turned out to be somewhat of a blessing in disguise, as I ended up really learning the material since I had to figure out everything myself and carefully step through exercises to find the errors. However, I had the luxury of free time to do so and wasn't panicking under the pressure of a class schedule, and this text is used for a lot of undergrad courses.
> People also say that TAOCP is irrelevant or outdated or otherwise inapplicable to “real programming”. This is also wrong. For instance, the first section after the chapter intro deals with the basic problem of searching for an item in an unsorted array. The simplest algorithm should be familiar to all programmers. Start your pointer at the head of the array, then do the following in a loop:
Check if the current item is the desired one. If it is, return success; otherwise
Check if the pointer is past the array bound. If it is, return failure; otherwise
Increment the pointer and continue.
> Now consider: how many bound checks does this algorithm require on average? In the worst case, when the array doesn’t contain the item, one bound check will be required for each item in the list, and on average it will be something like N/2. A more clever search algorithm can do it with just one bound check in all cases. Tack the desired item on to the end of the array, then start your pointer at the head of the array and do the following in a loop:
Check if the current item is the desired one.
If it is, return success if the pointer is within the array bound and return failure if it isn’t; otherwise
Increment the pointer and continue.
> With this algorithm, things are arranged such that the item is guaranteed to be found one way or another, and the bound check only needs to be executed once when the item is found. This is a deep idea, but it’s also simple enough even for a beginning programmer to understand.
All this is true, but it overlooks one very important thing: on a modern processor architecture, this "optimization" will almost certainly be completely useless [1]. Almost certainly, the bounds check will be pipelined in such a way that it is essentially free on every iteration. Almost certainly, for a problem of any size where efficiency actually matters, the run time will be dominated by memory access latency.
So this is not a very good example to refute the argument that TAOCP is irrelevant and outdated.
[1] In fact, it will almost certainly be worse than useless because the extra setup and teardown steps required at the beginning and end will make the algorithm run more slowly than it otherwise would have.
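For reference, here is roughly what the two loops described above look like in C (my own sketch reconstructed from the prose, not code from the book or from the benchmarks that follow; the function names are made up):

#include <stddef.h>

/* Naive search: one equality check plus one bound check per element. */
ptrdiff_t search_naive(const int *a, size_t n, int key) {
    for (size_t i = 0; i < n; i++)        /* bound check on every iteration */
        if (a[i] == key)
            return (ptrdiff_t)i;          /* found */
    return -1;                            /* not found */
}

/* Sentinel search: assumes a[n] is a writable spare slot past the logical end.
   The bound check runs only once, after the loop terminates. */
ptrdiff_t search_sentinel(int *a, size_t n, int key) {
    a[n] = key;                           /* plant the sentinel: the loop must stop */
    size_t i = 0;
    while (a[i] != key)                   /* no bound check inside the loop */
        i++;
    return i < n ? (ptrdiff_t)i : -1;     /* single bound check decides found vs. not */
}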
naive search
found: 0, took 1856721 ns
found: 0, took 1799554 ns
found: 0, took 1908483 ns
found: 0, took 1921622 ns
found: 0, took 1856173 ns
found: 0, took 1812736 ns
found: 0, took 1819938 ns
found: 0, took 1846232 ns
found: 0, took 1821858 ns
found: 0, took 1898503 ns
average: 1854182.00 ns
knuth search
found: 0, took 1969758 ns
found: 0, took 2125165 ns
found: 0, took 2081280 ns
found: 0, took 2033002 ns
found: 0, took 1948852 ns
found: 0, took 2049150 ns
found: 0, took 2046759 ns
found: 0, took 2101704 ns
found: 0, took 2083800 ns
found: 0, took 2120793 ns
average: 2056026.38 ns
edit:
I was wondering how it would look on a simpler CPU, presumably without branch prediction, and my router came to mind. Indeed, there the picture looks different:
Qualcomm Atheros QCA9558 (MIPS 74Kc V5.0)
naive search
found: 0, took 20582795 ns
found: 0, took 21233303 ns
found: 0, took 20505486 ns
found: 0, took 20633735 ns
found: 0, took 21079799 ns
found: 0, took 20619782 ns
found: 0, took 21099067 ns
found: 0, took 20558769 ns
found: 0, took 20398468 ns
found: 0, took 20888357 ns
average: 20759956.00 ns
knuth search
found: 0, took 16323322 ns
found: 0, took 16530714 ns
found: 0, took 16255532 ns
found: 0, took 16416161 ns
found: 0, took 16577751 ns
found: 0, took 16488995 ns
found: 0, took 16417074 ns
found: 0, took 16466930 ns
found: 0, took 16295522 ns
found: 0, took 16277085 ns
average: 16404909.00 ns
Well kudos to you for having the right theory! I would have never assumed this behavior on modern CPUs.
That trick might still be useful in the embedded space - I had to try it on my samr21-xpro too (with only an 8kiB array and slightly modified code - http://codepad.org/aPOZraI2)
samr21 (Cortex M0+)
naive search
found: 0, took 1714 ticks
found: 0, took 1713 ticks
found: 0, took 1714 ticks
found: 0, took 1714 ticks
found: 0, took 1714 ticks
found: 0, took 1713 ticks
found: 0, took 1714 ticks
found: 0, took 1713 ticks
found: 0, took 1714 ticks
found: 0, took 1714 ticks
average: 1713 ticks
knuth search
found: 0, took 1372 ticks
found: 0, took 1372 ticks
found: 0, took 1372 ticks
found: 0, took 1372 ticks
found: 0, took 1372 ticks
found: 0, took 1372 ticks
found: 0, took 1372 ticks
found: 0, took 1372 ticks
found: 0, took 1372 ticks
found: 0, took 1372 ticks
average: 1372 ticks
Certainly I'm not saying that this is the algorithm to use in all cases. It's not even obvious how you would implement the sentinel algorithm in a language like Python, not in idiomatic code anyway.
The point is that the book is written from the perspective of an actual programmer writing actual code and thinking about how it will run. Granted, he's writing in a pixie assembly language for an imaginary machine, but still, the thought process is similar to writing real code. He doesn't just figure out the big-O and call it a day, he works out the low-level details. The lesson in this case is that a lot of extra work can be done in a loop, even for the simplest of algorithms.
Now you might think, that's a truism, duh, any programmer knows that loops need to be fast. But what can I say? I read that section and spent a few hours thinking about it (even writing out some old-timey flowcharts), and I came out the other side writing better, faster code.
With some improvements to pantalaimon's test, and compiling with -O3, Knuth's scheme is 25% faster than the naive search on my 2012 MacBook Pro with latest developer tools (Apple LLVM version 10.0.1 (clang-1001.0.46.4)):
The sources are in http://codepad.org/G2SBmTnC and the changes were mostly removing constants here and there to keep the super-smart compiler from unrolling the loop in the naive case (which in the real world, it wouldn't, as the size of the array would not be known at compile-time). Also, searching for a 32-bit integer is way more realistic than searching for a byte!
The inner-loops are pretty much as you'd expect (although why it uses two registers for addressing the array is beyond me).
Knuth:
LBB1_1:
cmpl %edi, (%rdx,%rcx)    # compare the key (%edi) with the current element
leaq 4(%rcx), %rcx        # advance the offset by 4 bytes (lea doesn't touch the flags)
jne LBB1_1                # loop until a match; the sentinel guarantees one
Naive:
LBB2_8:
cmpl %ebx, (%r12,%rdx)    # compare the key (%ebx) with the current element
je LBB2_11                # found: exit the loop
addq $4, %rdx             # advance the offset by 4 bytes
cmpq %rdx, %rax           # bound check against the end offset
jne LBB2_8                # not at the end yet: keep searching
Of course, all the comments about not being thread-safe are correct, and you probably only want to do this trick in primitive environments where you can allocate an extra slot at the end of the array when it's first created. But it's a nice trick when circumstances call for it.
Wow, that is so weird. I would have expected this to be entirely dominated by memory latency.
I'm also scratching my head trying to figure out why this only shows up with -O3. I would have expected the exact opposite, i.e. a bigger difference on lower optimization settings.
First I thought this was because in drfuchs' version both the array and the key are completely random, so it's not likely the key is found before the end of the array is reached. This is of course the more realistic scenario, whereas my test was only testing the worst case where they key is not part of the array.
However, when I modify it to not include the key in the search array (`myarray[i] = random() & 0x7fffffff;`, `key = 0xffffffff;`) I see the same behavior:
Yeah, I was careless in not preserving the nice feature of your code that kept there from ever being a match, figuring that the chances of a hit were low; but of course that assumption went out the window when the array got way bigger. After posting, I noticed the issue, so I made a similar change to yours, and the results didn't change. I do suggest that you change the line that sets 'key' so it isn't known at compile-time, to avoid unrealistic optimizations, thus:
const int key = random() | 0x80000000; // ensure no match
It's already questionable for a search algorithm to require the ability to write to the array it's searching in, but it's even worse if it wants to be able to extend and shrink the array to do so.
Edit: I haven't read the book. Presumably it gives some context for the algorithm, so that you only bring it out in the specific situations where it makes sense to use it.
Extending the array is not needed: you can store the last element in a variable, then overwrite the last element with the searched value, do the search then restore the last element.
I remember a talk from Andrei Alexandrescu where he talked about this optimisation, but I don't remember if there were benchmarks.
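Something like this, as a rough C sketch of that save/overwrite/restore idea (my own code, single-threaded use only, for the reasons given below):

#include <stddef.h>

/* Sentinel search without growing the array: temporarily overwrite the last
   element with the key, scan without bound checks, then restore it. */
ptrdiff_t search_swap_sentinel(int *a, size_t n, int key) {
    if (n == 0)
        return -1;
    int last = a[n - 1];          /* save the real last element */
    a[n - 1] = key;               /* plant the sentinel */
    size_t i = 0;
    while (a[i] != key)           /* stops at index n-1 at the latest */
        i++;
    a[n - 1] = last;              /* restore the array */
    if (i < n - 1)
        return (ptrdiff_t)i;      /* found before the sentinel slot */
    return last == key ? (ptrdiff_t)(n - 1) : -1;  /* was the real last element a hit? */
}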
That's clever, but it still requires writing to the array, which is dodgy in many circumstances; other threads may be using it, so the search function is likely to be given only read-only access.
Perhaps even more importantly, requiring the array to be mutable hurts composability in the age of multithreading, as it prevents the algorithm from being used concurrently on the same array by multiple threads.
I wish I could find a good text that talks about how removed from 'directly executing on hardware' you are. Modern [x86] processors are (as described) executing very much out of order or calculating things in parallel. I wish I could find something that spoke to the whole field of tricks at play. I'm sure there's a great depth of indirection I'll never understand.
- Stijn Eyerman, Lieven Eeckhout, Tejas Karkhanis, and James E. Smith. "A mechanistic performance model for superscalar out-of-order processors." ACM Trans. Comput. Syst. 27, 2 (2009)
- http://www.elis.ugent.be/~leeckhou/papers/tocs09.pdf
- Maximilien B. Breughe, Stijn Eyerman, and Lieven Eeckhout. "Mechanistic analytical modeling of superscalar in-order processor performance." ACM Trans. Architec. Code Optim. 11, 4, Article 50 (2014)
- https://users.elis.ugent.be/~leeckhou/papers/taco2015-breugh...
Except that the optimization seems to be quite spectacular.
Compiling pantalaimon's code, there doesn't seem to be any significant difference between the two algorithms with gcc's default compilation parameters.
However, compiling with gcc -O3, Knuth's algorithm is almost twice as fast on an Intel i7-7700HQ; old tricks still work well:
naive search
found: 0, took 597719 ns
found: 0, took 601291 ns
found: 0, took 593332 ns
found: 0, took 592513 ns
found: 0, took 592799 ns
found: 0, took 592409 ns
found: 0, took 592563 ns
found: 0, took 592289 ns
found: 0, took 631602 ns
found: 0, took 592928 ns
average: 597944.50 ns
knuth search
found: 0, took 300860 ns
found: 0, took 318691 ns
found: 0, took 317502 ns
found: 0, took 317348 ns
found: 0, took 351612 ns
found: 0, took 317625 ns
found: 0, took 318217 ns
found: 0, took 312106 ns
found: 0, took 397532 ns
found: 0, took 318284 ns
average: 326977.69 ns
Just for fun, here is the dump of assembler code for function search_knuth:
Optimizations can be fun, but all bets are off when the next generation of hardware comes out, or when a thousand man-hours get spent on the compiler/optimizer. And most performance increases in the last decade have come from multi-processing/threading, while single-core execution has actually become slower. I think in the future it will be increasingly important for algorithms to work in parallel, so that performance can scale with the number of CPUs/cores, as per Moore's law.
> but all bets are off when next generation of hardware comes out
the sentinel trick reduces the number of checks, so should be independent of hardware.
> and a thousand man-hours spent on compiler/optimizer
I can't see a conventional compiler ever automatically doing the sentinel trick, as it relies on knowing at least (1) whether multithreading will be happening and (2) the size of the array, as the trick is likely increasingly pointless as it outgrows the cache, and possibly other stuff I can't think of.
I might also disagree on making things parallel - cache-aware code can bring significant speedups on single threads, so do that first, then parallelise.
disclaimer: I'm not Knuth/Terje Mathisen/Hennesy or Patterson.
Also doesn't "tack on the end of the array" imply allocating new memory and copying the whole array? It might make sense for a linked list, or a self expanding list whose backing storage wasn't full. Even then there is the cost of pulling the end of the array into cache up front.
>All this is true, but it overlooks one very important thing: on a modern processor architecture, this "optimization" will almost certainly be completely useless [1]. Almost certainly, the bounds check will be pipelined in such a way that it is essentially free on every iteration.
Interesting, that. I've been reading here on HN a few times, that assembly language is harder to write / analyze nowadays because of such features of modern processors.
Regarding the optimization being useless:
That technique is called using a sentinel, and was common in algorithms earlier, maybe before modern processors' pipelining and suchlike features existed. E.g., it (using a sentinel to avoid one comparison per iteration - the check for passing the array end - in a search) is mentioned in the classic book "Writing Efficient Programs" by Jon Bentley. (great book, BTW. I owned and read most of it.)
Interesting. I have a 1973 (1st edition?) copy of Sorting and Searching I picked up at a book sale a couple years ago. In that version chapter 6 starts on page 389, and the typo is not there. I guess it was introduced later.
I can't remember where I read this, but I think the first edition of Volume 3 was the last one to be published before the creation of TeX, and so the second edition had to be substantially reworked. The typo was probably introduced at that time.
I'm curious, unlike the other two, wouldn't that typo be caught by any sort of automatic grammar checker? Couldn't anyone looking for a Knuth Check just load the text into a decent checker and go through the suggestions to find a valid one?
This is awesome! I'll second the advice to try out the book. I'll have to watch for small errors like this to point out, as well. :)
I'm currently reading this with some folks at work that simply run circles around my ability in the mathematical sections. It is humbling, but also very fun. The best is seeing the process of how to get through these sections. If you are like me, you assume it is someone doing a straightforward walk through the steps. My colleagues don't hesitate to just start with an idea and see where it can go. Often, it is not necessarily algebra or anything else that gets to the next part, but a simple question of pattern recognition on the series we have put on the board. But, as often as not, mistakes are made and have to be tried again. The folks that get things right the most are not necessarily those that make the fewest mistakes.
Someone found and submitted 700+ errors. I wonder where he finds the time to send all of these out! I guess it’s like an open source project getting PRs.
Knuth gives higher rewards for works he believes have fewer errors. For TeX and METAFONT, he started with 1 cent for every bug found, and doubled it each year until reaching the current value of 327.68 dollars (= 0x$80.00): for example Eberhard Mattes and Oleg Bulatov (joint 7th on the list) have all their amount from a single (and the latest so far) bug found in TeX and METAFONT each (if you're wondering what the bugs were: https://tex.stackexchange.com/questions/154873/ — the TeX bug was a missing space in the debug output for a macro with an empty name).
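(Quick check on that arithmetic: 1 cent doubled fifteen times is 2^15 = 32768 cents = $327.68, and 32768 = 0x8000, which at 256 cents to the hexadecimal dollar is exactly 0x$80.00.)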
With smartphone deposit-by-photo, you could cash the check [if it were real] and keep the check to display. (Totally agree that if you had to send it in that keeping the check is worth well more than the value of the check.)
PS: It took me way longer than it ought to have to figure out the Bank of San Serriffe joke...
Unless that German guy actually lives in the US, I'm surprised he was actually able to a) cash an American cheque somewhere and b) not have to pay an exorbitant service fee for cashing it. Cheques are approximately as common as unicorns in Europe.
My bank informs me that they would charge between £6 and £10 to accept such a cheque as payment into my account - depending on exactly the amount, how it was issued and so on. I guess if you literally wanted it turned into cash on the spot you'd pay a much larger fee corresponding to the risk they'd be taking on that deal.
When I was young I got a check from Google's AdSense. My bank (huge in Scandinavia) told me they don't know what to do with it and will call me back. Eventually they offered 20 euros + 9% to cash it. Felt like robbery but no one really even knows what a check is here, it's extremely rare.
Maybe he just likes working through text books thoroughly?
I personally find it quite valuable to compile a list of "outstanding questions" for the textbooks I read. Usually most of my questions are resolved before I finish the book. The ones that remain often turn out to be simply errors. Finding the errors is just a by-product of the learning process for me.
“In 1960, Karatsuba attended a seminar wherein Kolmogorov pitched his n² conjecture. “Exactly within a week” Karatsuba devised his divide-and-conquer algorithm.
[…]
Thus the error is that 1962 should be 1960.”
Based on the information given, the correct year _could_ be 1961, too. When, exactly, was that seminar? Were Soviet universities at the time closed over Christmas?
Apparently the seminar was held "in the autumn of 1960" [1], so "within a week" seems safely within 1960. That said, I will happily award 0x$0.50 to anyone who can come up with conclusive proof that the algorithm was discovered in 1961. Karatsuba himself is dead though, so such evidence will not be easy to find.
TAoCP will always have a special place in my heart. It was my very first Computer Science textbook and what ultimately made me fall in love with the field. I remember reading about Knuth's legendary checks and going through whole chapters thinking how cool it would be if I ever found one. Good times!
Not really, since a “hexadecimal dollar” is really just a fictional unit of money that Knuth has created (which he designates with the symbol 0x$). I guess if it were $0x3.00, you could interpret the quantity to be regular dollars in hex form, which would just evaluate to 3, instead of the $7.68 Knuth is actually offering.
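(For the arithmetic: one hexadecimal dollar is 0x100 = 256 cents, so 0x$3.00 = 3 × $2.56 = $7.68, whereas $0x3.00 would just be three ordinary dollars.)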
I once read a short biography of Donald that put forth the image of a gentleman who often eschewed most of life's pleasantries and small enjoyments in pursuit of spending most of his life in his library, studying, with little brain capacity remaining for anything else.
Given this, I pictured a somewhat solitary gentleman - a slightly more dignified version of a modern nerdy basement dweller, disconnected from modern life and humour.
Therefore I find great joy in seeing the words "Bank of San Serriffe". I giggled quite a lot at this. :)
Ps. This description is probably entirely wrong. It's just how I always pictured him as a result of reading that article.
His first publication, as a high-school student, was in MAD magazine. His books are full of jokes, both highly technical ones and more trivial ones. :-)
If you have some mathematical inclination and a couple of hours to spare, you may like watching his (1-hour) Christmas lecture from 2017: https://www.youtube.com/watch?v=BxQw4CdxLr8 — you may need to listen carefully (he's over 80 after all), but some of his humour comes through at moments.
It's a _really_ long interview, 7+ hours. But it's also superb. He is interviewed by his friend and colleague Edward Feigenbaum, who does a stellar job. He brings out the human Donald Knuth in a remarkable way.
It gave me great respect and admiration for Donald Knuth's life and achievements. For me, I dare say it was transformative in regards to the human pursuit of knowledge, passion, and what it means to be an intellectual human being.
tl;dr: Watched a Donald Knuth interview and got a nerd crush
This is impressive; congratulations to the author! I too got a Knuth check for 0x$3.00 a year or two ago... but that was for errors in the unpublished (draft) pre-fascicles; finding some in the published volumes of TAOCP is surely a rare event!
A few more comments:
• A huge +100 for the paragraph pointing out that TAOCP is not a reference work; it's a very enjoyable work meant to be read:
> By the way, if you’ve ever thought about reading TAOCP, give it a try. A lot of people will tell you that it’s a reference work, and it’s not meant to be read straight through, but that isn’t true. The author has a clear point of view and a narrative and an idiosyncratic style, and the only thing that inhibits readability is the difficulty of the math. There’s an easy solution to that though: read until you get to math you don’t understand, then skip it and find the next section you can understand. Reading this way, I skip at least 80% of the book, but the remaining 20% is great!
In addition, there are many jokes, beautifully employed quotes, etc. Basically what Knuth has done is to take all the published research literature on each of the topics of the respective chapters, digest it, pass it through his personal “interestingness” filter, and figure out what he feels is the best way to teach it. The result is highly personal, and not all what one may expect from “reference work”.
• To be pedantic, 0x$3.00 (aka $7.68) only puts you in a 29-way tie for the 115th richest person in the Bank of San Serriffe — it's the 69th largest amount but you need to count all the people with each higher amount :-) Also note that this BoSS only counts checks since 2006.
• I actually find my name on the list with another 0x$1.00 (and I think I know what it was for), but I never received a check for it: probably lost in the mail, in which case I'll never forgive USPS for it. I have however received many replies from Knuth saying that the errors I tried to point out were not actually errors, or that they had already been pointed out by others, just not updated on the website yet (in the case of the draft pre-fascicles). You send him an email (only if it's a bug; else the email never reaches him!), his secretary who comes in once a week prints it out for him, he gets to it at some point, scrawls his response in pencil, encloses a check if warranted, then his secretary sends it back to you by post. Exactly as described here: https://cs.stanford.edu/~knuth/email.html (I once snuck in my solution to one of his exercises (which asked to write a poem) and he wrote “Beautiful!” next to it, which I will value more than the check itself.)
Does the algorithm described for finding a number in an unsorted array using minimal checks really work? It's probably addressed more in the book, but I didn't see my concerns addressed in the post.
The algorithm I am referring to is:
"
1. Check if the current item is the desired one. If it is, return success if the pointer is within the array bound and return failure if it isn’t; otherwise
2. Increment the pointer and continue.
"
I think it assumes that all values will be found in memory somewhere (otherwise an exception will be raised for trying to read memory at an address that does not exist). This is probably the case in practice but I don't think there are guarantees of it being the case.
Also, this algorithm can have lower worst-case performance than a more naive solution, because its performance is based on the number of possible values (assuming random distribution) rather than the size of the array for cases where a value cannot be found.
Read the sentence before it: ”Tack the desired item on to the end of the array, then start your pointer at the head of the array and do the following in a loop”
> Does the algorithm described for finding a number in an unsorted array using minimal checks really work? Probably is addressed more in the book but didn't see my concerns in the post.
> I think it assumes that all values will be found in memory somewhere
That assumption is not made; the post does cover this. You left out the beginning of the algorithm:
> A more clever search algorithm can do it with just one bound check in all cases. Tack the desired item on to the end of the array, then start your pointer at the head of the array and do the following in a loop:
It's not clear to me how you're supposed to "tack the item on to the end of the array", though. That's not an operation arrays naturally support. If you recopy the array so you can add your sentinel value, then you're replacing an average n/2 bounds checks with a guaranteed n copies.
>It's not clear to me how you're supposed to "tack the item on to the end of the array", though. That's not an operation arrays naturally support.
It's a small change to avoid the append. First check the final array item separately, and return success if it matches. If it doesn't match, overwrite the final array item with the desired item, and continue as before except this time checking if you're within array bound - 1.
It probably is easier to reserve an array item for that sentinel marker.
In either case, it requires your array to be writable, and you’re giving up having multiple searches through the same array operating at the same time.
Also, on modern hardware, the ‘ran out of items’ check is essentially free.
Nowadays, the only realistic use for this would be a case where you have a fixed-size array that you search frequently, where the same end marker can be used all the time, so that you can add that sentinel marker once.
I’ve a hard time thinking of examples, but do not rule out cases inside OS kernels or search trees for computer chess or similar games.
A variant would be to use this in C, and, when the program allocates N+1 chars to hold a string of length <N, allocate N+2 chars, and set the last one to zero, to ‘guarantee’ that later strlen or strcpy calls don’t run on forever, even if a strncpy copies N+1 bytes. I don’t see how that ‘guarantee’ would add much, though.
>Nowadays, the only realistic use for this would be a case where you have a fixed-size array that you search frequently, where the same end marker can be used all the time, so that you can add that sentinel marker once.
The sentinel needs to be identical to the value you're searching for, otherwise you're trading "compare a pointer" for "compare a value with an arbitrary equality implementation" which is no better (and probably worse).
Thanks everyone; I actually heard of this algorithm before but wasn't thinking of it today and missed this when I wrote the above post.
>It's not clear to me how you're supposed to "tack the item on to the end of the array", though
The algorithm mentions pointers, so I think it means temporarily replacing the value in memory just after the array with the value you're searching for. I'd be worried about how this would work if there are multiple threads, though, in case another thread wants to access the memory that was temporarily swapped out. (Edit: as another poster mentions, it would be helpful to have an extra entry in the array for this purpose.)
If the 'array' is internally something like a vector, then you can add it on to the end quickly by appending a node to the tail of a linked list.
It'd be problematic for some operations, but if your access is dominated by reads, then setting aside an extra entry for a sentinel value 'only' requires serialising operations that need the sentinel value and mutations. You'll need to protect against mutations during the search anyway, so then it boils down to whether it's better to wait or to fall back to a bounds-checking version.
Though I'd question whether it's worthwhile vs. some unrolling (you might even have a use for Duff's device...) to just do whatever number of iterations you want per bounds check.
Yes, multi-threaded access to the array would invalidate the assumptions made when designing that algorithm, as so often happens.
And copying the array would cost more than you would save, so you would want to be sure that you don't have to do that. In general a growable array implementation will double the storage of the array (or multiply it by some other factor, such as the square root of 2) whenever it does grow it, so that it doesn't immediately have to grow it again. You could just arrange for your implementation to grow whenever there's only one storage location left, and then you could use this trick when searching it.
But recall that this algorithm was designed in a context where neither of those constraints apply.
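A rough C sketch of that arrangement (hypothetical names, just to illustrate always keeping one spare slot free for the sentinel; error handling omitted):

#include <stddef.h>
#include <stdlib.h>

/* Growable int array that always keeps one spare slot past 'len',
   so the sentinel search never needs to reallocate. */
typedef struct {
    int *data;
    size_t len, cap;
} IntVec;

void intvec_push(IntVec *v, int x) {
    if (v->cap < v->len + 2) {                 /* grow while only the spare slot remains */
        v->cap = v->cap ? v->cap * 2 : 8;      /* doubling growth policy */
        v->data = realloc(v->data, v->cap * sizeof *v->data);
    }
    v->data[v->len++] = x;                     /* data[len] stays valid as the spare */
}

/* Sentinel search using the reserved slot at data[len]. */
ptrdiff_t intvec_find(IntVec *v, int key) {
    if (v->len == 0)
        return -1;
    v->data[v->len] = key;                     /* the spare slot is always available */
    size_t i = 0;
    while (v->data[i] != key)                  /* no bound check in the loop */
        i++;
    return i < v->len ? (ptrdiff_t)i : -1;
}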
I think a more typical, high-performance approach for copying arrays would be using memcpy-style operations. This is how Go handles a lot of slice operations under the covers.
If it's a C style array you could always just keep it one larger than the actual size; perhaps place a null value at the end and swap it out as needed. Just build an abstraction over it.
You're missing some of the implementation. From the blog:
> A more clever search algorithm can do it with just one bound check in all cases. Tack the desired item on to the end of the array, then start your pointer at the head of the array and do the following in a loop: ...
So you will do exactly one more value check in the case where the element is not in the list, and n - 1 fewer bounds checks.
I had a comp sci prof back in the early 1970's. Someone told me that he had gotten a check from Knuth for finding an error. I saw him off campus once and I had to check to see if his feet actually touched the ground. I swear that they didn't.
I found and reported decent errata in a popular Django book this year and got no reply from the authors.
I heard the book has not been financially successful enough. That said, it feels like books should have some kind of GitHub style community errata by default.
News is not about facts. Fake news even less so. Why would someone with an agenda reward fact-checkers?
On the other hand, reputable news outlets already accept and publish corrections. Of course, nobody reads them; the damage is already done. Unlike a book, there won’t be future readers who would benefit from the correction.
Check if the current item is the desired one. If it is, return success; otherwise
Check if the pointer is past the array bound. If it is, return failure; otherwise
Increment the pointer and continue.
Um, shouldn't the check be done before accessing the current one?
> the first section after the chapter intro deals with the basic problem of searching for an item in an unsorted array. The simplest algorithm should be familiar to all programmers. Start your pointer at the head of the array, then do the following in a loop:
> Check if the current item is the desired one. If it is, return success; otherwise
> Check if the pointer is past the array bound. If it is, return failure; otherwise
> Increment the pointer and continue.
> Now consider: how many bound checks does this algorithm require on average? In the worst case, when the array doesn’t contain the item, one bound check will be required for each item in the list, and on average it will be something like N/2. A more clever search algorithm can do it with just one bound check in all cases. Tack the desired item on to the end of the array, then start your pointer at the head of the array and do the following in a loop:
> Check if the current item is the desired one. If it is, return success if the pointer is within the array bound and return failure if it isn’t; otherwise
> Increment the pointer and continue.
> With this algorithm, things are arranged such that the item is guaranteed to be found one way or another, and the bound check only needs to be executed once when the item is found.
It sounds like he's saying "feel free to overrun the buffer; eventually you'll find the value you want somewhere in the depths of memory, and then you can check and see that you've left the array behind long ago." Aside from the obvious problems, it's not at all guaranteed that the specific bit sequence you want will exist anywhere in memory.
OK, even if it isn't possible to cheaply extend the array by one, do this slight tweak for the same end result: store the last item in the array in a temporary, and replace it in the array with the value you're searching for (as the sentinel). Then search the array. Once you find the value, if the index isn't that of the last entry, you've found it. If it is the last entry, check the temporary to see if it happened to be the value you were looking for. Finally, put the temporary value back into the last slot of the array.
He’s saying that if you append the value you’re looking for at the end, you don’t need a bounds check since you are guaranteed to find it at the very end.
I got two (real) checks from him from the newly re-typeset first volume in the late 90s. Someone else did a bunch of the typesetting work and it was relatively sloppy so I managed to notice them from relatively casual reading.
I came here expecting to find some sort of problems in mathematical proofs, since he is famously quoted as saying "Beware of bugs in the above code; I have only proved it correct, not tried it," and I came away disappointed to find just two human mistakes and one minor historical hiccup.
I wonder if anyone cashes these things? We should all gang up and cash these checks together and overdraw him. Or at least really screw up his check book balance.
I'm quite proud to have found an error on page arabic one. In the first sentence. The very first word. Not a typographical error.
Nobody can claim he did not read that far.
I still believe I should have gotten a cheque for my bug report that "the programs are in the public domain" or similar wording on the back cover of the MMIX book is clearly wrong, since the copyright notices assert copyright, but Knuth dismissed that along the lines of "I'm not a Stallman disciple". Even though "public domain" has a clear legal meaning. Oh well, whatever, I've got 0x$1.20, that's enough.
Perhaps it would be clear if it was $0x3.00, but I see that 0x$ is the way Knuth writes it on his reward checks... I wonder if anyone pointing this out might get one too.