“It is never a compiler error” (plover.com)
289 points by zeveb on Nov 14, 2017 | 272 comments



Sadly, I've lost count of how many compiler bugs I've tripped over through the years. Often it's compiler crashes -- those are usually easy to get fixed, especially when (as tends to be the case with LLVM) they are caused by assertions failing -- but I've also tripped over compiler hangs (there's a variable in the tarsnap code with an utterly bogus volatile specifier in order to avoid the problematic optimization on some versions of llvm) and outright miscompilations -- one of which has had several LLVM bug reports opened by different people over the years and remains unfixed.


To be fair, I've also lost count of the number of times my compiler has compiled things correctly. ;)


Yes, let's not anger our compilers now...


Which, let's not forget, includes several times every time you load a website.


Please don't remind me...


If you’re the kind of programmer who is likely to find compiler errors, you probably know who you are. For 99% of programmers, the chances that you have discovered one are small.


This is just plain wrong.

It's a terrible idea to assume that other humans don't make mistakes. When encountering a bug, start at the top, and work your way down when you conclude that the current level is correct. There is no level of development that is immune.

Sometimes you are to blame. Other times, it might be a library. It might be the standard library, or the compiler. A faulty optimization, perhaps. Could also be the OS kernel, or even the hardware (CPU, USB controller, network card, ...). It's all chock-full of bugs.

I've hit a handful of miscompilations while writing boring JavaScript web apps, some only when the JIT kicks in for extra fun. I have a handful of outstanding Linux kernel bugs, one of which causes memory corruption, has existed at least since 2.6, and is dead simple to reproduce. I have a significant GCC performance degradation bug, triggered by calling "pthread_exit(3)" after an infinite loop. I've had what seemed to be software problems turn out to be PCIe problems.

You might think that, as a developer, you're building a card-house on a solid foundation. But no, it's card-houses all the way down.


“It is never a compiler error” means don't waste your time blaming the compiler, first assume it's your fault. For every person who genuinely ran into a compiler bug, there are orders of magnitude more who think they did, but didn't. If you start saying "it could be a compiler bug," you're encouraging them to waste their time.

> When encountering a bug, start at the top, and work your way down when you conclude that the current level is correct.

I agree wholeheartedly. Saying "it is never a compiler error" doesn't preclude this; it encourages this behavior. Even thinking "it could be a compiler error" invites laziness.


> Even thinking "it could be a compiler error" invites laziness.

Or worse, it could invite curiosity, and now you are spending time wondering how you could tell whether it's a compiler bug and how you would go about debugging it.


Or worse, it could deter curiosity, and now you are spending time complaining about compiler bugs instead of debugging your code.

If it's really a compiler bug, you'll find out in the process of debugging your code.


Laziness invites efficiency.

I strongly disagree with your point of view: "it is never a compiler error" not only precludes suspicion of the lower levels (voicing your suspicion of problems in lower layers will generally result in ridicule!), but it also reduces critical thinking, bars people from looking into "complicated" projects, and makes them overall less efficient and skilled developers. I also disagree that encouraging a bit of wasted time is a problem. What is a few hours, or even days, "wasted" learning your lesson once and for all, if it greatly improves your debugging skill and efficiency? You do not learn without mistakes.

"It is never a compiler error" is a detrimental mindset to have, in that it suggests a mentality where you shut your eyes for anything you did not author. After all, it is too complex, too well-tested, too well-designed, and implemented by much smarter people than you. Compiler, library, application, hardware, train, bridge, it doesn't matter.

Not only does this mean that you often won't find the bug you hit, as you won't start instrumenting the faulty LargePopularLibrary (after all, it obviously can't be to blame), but it also means that you won't so much as read library|compiler|kernel|etc. source code, much less do development on it, as you think it is above you, despite it by no means being some magical black art.

"It could be a compiler error" invites you to consider all possibilities. Immediately blaming the compiler without reasonable evidence would, despite such a mantra, be a result of terrible debugging skills. These debugging skills should be improved through other means than intentionally teaching falsehoods, and if the subject is not an entirely lost cause, it should correct itself in much more valuable ways if a little time is "wasted" on improper debugging.

Teach proper debugging techniques instead of lying to others, or yourself.


You seem to be arguing against a straw-man, sir.

The OP simply said that the chances of finding a compiler bug are small. You seem to have found a few minor bugs, and that's cool, but how does that change the fact that the chances of an average C developer finding a compiler bug are minuscule? There are probably around a million C developers; do we even have one compiler bug per 10 developers?


> You seem to have found a few minor bugs, and that's cool

I'm sorry, how are total miscompilations and memory corruptions "minor bugs"?

I am arguing against the mentality of dismissing compiler bugs, with the idea that "you will never hit one". Hence my examples of when I hit them during something as high-level as web development. I am not arguing that you'll hit such a bug every day, or even every month.

I only have myself and those around me as reference. I have been a software developer for less than 10 years, but going by the current total, I have been hitting a few "lower layer" (VM, stdlib, compiler, kernel, hardware) bugs a year.

Maybe I'm just "unlucky". If you want to find real numbers, take a look at GCC/LLVM/libstdc++/glibc/linux bug tracker statistics. I'm sure you will be surprised.


I worked on a compiler backend for 5 years. Despite the insane amount of testing, fuzzing, standards tests, etc. that we did, there were always bugs coming in, and not always in code that looked dodgy. Sometimes an optimization interacts with another in JUST the right way to cause incorrect code to get generated.

Honestly, I'm surprised that I haven't run into more of them in the wild.


> If you’re the kind of programmer who is likely to find compiler errors, you probably know who you are.

I'm one of those that routinely find a bug in a library during the very first use of its API. Every "quick weekend project" turns into "let's learn the build system of this library and send a patch/bug report to the maintainer".

Does anybody know how I can turn this annoying superpower of mine into a job that is not basic Q/A?


Get into an industry that values Q/A a lot. Aerospace, rail, nuclear power plants. Q/A doesn't have to be a dead-end job, career-wise.


Go into security and turn that free bugreport into big budget bug bounties ;)

If you can find bugs in libraries used in iOS/Android or something, that's worth money and gives you the chance to increase security for millions (billions?) of people.


I have a similar thing with typos. Give me a page of printed text, and there's a surprisingly high chance I will spot a typo within the first few seconds of looking at the page, no matter where the typo occurs on the page.

(Might be confirmation bias, but I have been proofreading a lot of theses for my friends while in college, so it might be trained.)


Your superpower is both a blessing and a curse.

It's likely that your weekend projects are things you find interesting and it seems likely that the things you find interesting are bleeding edge. It's never going to get better. On the other hand, any good dev team would value someone who can not only find the bug, but patch it.

You need to get somewhere where using the latest and greatest is encouraged.


Yeah libraries are a different thing. I find bugs in python modules all the time.


Might make you suited to security consulting?


<quote>Does anybody know how can I turn this annoying superpower of mine into a job that is not basic Q/A? </quote> What would you say, if I told you that this superpower was not basic Q/A? because basic Q/A is what the compiler is passing, all the time. The part you meant with vigour was the <b>job</b> part right? how do you make somebody want your awesome enough to pay you a living wage? It's either apple or Intel, because they are the two people financially vested in llvm.


If you're the kind of programmer who inputs HTML into a comment system that doesn't support it...


I'm not in QA. You obviously head the snark department very well. Expect promotion, but alas, not much more pay.


The chance isn't that small; many people stumble over them and just use the first workaround SO turns up, or think it's actually their fault and work around it in other ways.


I had some bad experiences with Visual Studio's compiler recently (the last two years). I was working on OpenGL code and discovered a couple of different constructs that would crash the compiler. The first time I figured it was a fluke, but by the third distinct case I became rather alarmed; the code should have produced errors, not compiler failures. I concluded that MS's compiler is decidedly less robust than GCC and clang; I've seen one compiler crash from a release GCC compiler in the last 10-odd years.


When I was in my first year of Computer Science (17 years ago), I had this CodeWarrior error that still makes me chuckle. I think I had a comment, assignment, and a System.out.println(); It wouldn't compile, had an error every single time. The tutors took a look at my code, and one-by-one gathered around until they were all there.

Eventually, someone (Natalie) moved the comment below the assignment, and it worked normally. I tried putting it back above, and it wouldn't compile. Can't remember what happened if I left the moved comment in place and put another above the assignment. I found two like that - one in CodeWarrior, one in Turbo Pascal a few years before that simply returned "error 0: no error."

It wasn't anything complicated in either case, just very basic stuff.


I've used a lot of C/C++ compilers over the years, and the two compilers marked in my memory as having enough bugs to be noticeable and disturbing were Visual C++ 4.1 (so notably bad that MS essentially apologized for its screwup and conceded that 4.2 was largely about fixing 4.1's problems) and CodeWarrior for Mac (pretty much any version, I think...I never tried the Windows version, but the thought of it makes me shiver).

CW had a pretty nice environment to develop in compared to its contemporary competitors, but it just didn't have a solid compiler.


This was pretty common in Turbo Pascal: your program would just suddenly stop working when you added a new blank line. (It would compile, but something would usually be off in the output, causing the program to fail or crash.)

Removing the offending line or adding a different mix of blanks/instructions would get you back to a working state again. Very frustrating.


Something similar happened to me a while back in Mono/C#. I think the order of assignments to an unused variable changed the results of the code. Something like:

    int x = 0;
    int y = 2;
    DoSomething(); // Didn't run with x before y
I can't recall if I ever reported it or bothered looking into it, but a friend ran into a similar issue around the same time. It was really confusing, but it was resolved by changing some configuration (a different compiler version or flags, I think).

Only other time I bumped into it was in Idris, which is to be expected frankly.


From what I remember of a deep-dive post I read on this once, these ordering-dependent bug/no-bug situations tend to be the result of incorrect optimizations. Changing the ordering will sometimes make the code look just different enough to go down a different (not bugged) optimization path.

If you ever catch them still there with a "no optimizations" flag, then that's a pretty serious bug!


Any chance the line endings were different and that was somehow messing it up?


Yeah, while working on FreeSWITCH and upgrading to VS2015, we encountered an issue with a long series of else branches. We ended up having to split and nest the code for it to compile.


In 2012 I used a C compiler where using printf to output a 64-bit integer would corrupt the stack.

Not that hard to stumble upon, but it took us a while to figure out what the heck was going on.


That sounds like a bug in the standard library... not a bug in the compiler.


Not necessarily. Could be a problem with passing 64-bit values to variadic functions generally. I could definitely imagine this happening on a 32-bit platform.


This has bitten me on QNX. I was porting UQM for fun, and it was failing when entering a dialogue.

Turns out, on QNX you can't pass a short int into a variadic function, which is expected behavior as far as C is concerned (you should only pass int or full-size types), but it worked on every other architecture, so it took me a while.
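For the curious, here is a minimal C sketch (my own, not the UQM or QNX code) of the promotion rules being described, plus the kind of printf/varargs mismatch mentioned upthread:

    #include <stdio.h>
    #include <stdint.h>

    void demo(void) {
        short s = 42;
        int64_t big = 1234567890123LL;

        /* Arguments to the variadic part of printf undergo default
           promotions: short becomes int, float becomes double.  The
           format string has to match the *promoted* type. */
        printf("%d\n", s);                 /* fine: s arrives as int */

        /* Wider types need the matching length modifier.  Reading a
           64-bit value with plain %d pulls the wrong number of bytes
           off the argument area, which on some 32-bit ABIs misaligns
           every later argument (the "corrupted stack" symptom). */
        printf("%lld\n", (long long)big);  /* fine */
        /* printf("%d\n", big); */         /* wrong: undefined behavior */
    }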


The generated assembly was garbage, so I assume it was a compiler bug. It also got fixed in later revisions, but since the compiler is shipped with the CRT... :)

I don't actually recall if we were using the shipped CRT or not, I'm guessing only minimal parts of it were used, if any, since embedded projects tend to write their own subset of the needed standard library functions.


> If you’re the kind of programmer who is likely to find compiler errors, you probably know who you are.

A Swift developer?

Just yesterday I found myself compiling a new part of the code in increments, because all of the new code together wouldn't compile, but adding the code and the import first, and compiling that before adding @UIApplicationMain, would compile just fine. Exactly the same code that didn't build before.

I can't imagine what I would have done if I had trusted that this was my error because the compiler doesn't make mistakes.

Though it's nothing compared to Swift 1.x; things generally tend to work, and if the compiler does crash, it'll crash on weird programming mistakes (like referencing type(of: self) before init() was called).


Nothing magical about writing a compiler. The humans that make them introduce bugs, like with all software.


A compiler in particular is a very extreme case. Most software is used to its fullest extent by its developer; 90% of bugs are likely to be found by said developer in their normal usage. Another step out, library developers often use their own work, but they often build in functionality they're not as interested in, by request or as a result of generalizing beyond their specific use cases. They might not be the heaviest users of that software, and so aren't as likely to find some of the bugs.

Compilers are the most extreme case. Anything Turing complete is almost certain to have vast swathes of functionality which aren't used by the developer. If a compiler doesn't already have a large user base, then unless you're using it exactly like someone else has, you're almost guaranteed to find some bugs. Honestly that's true for most software, going back to the maxim "don't be the biggest user of any given software unless you're already capable of writing it yourself."


I remember reading that TeX uncovered a remarkable number of bugs in Pascal compilers back in the day.


That was truer when by "compilers" one meant GCC or some such.

Now everybody and their dog writes compilers and transpilers for young and/or more obscure languages (I'd include Go and Rust in there), and with the immaturity of these environments, compiler bugs are a fact of life.


Nope. Wasn't true then, either. Found bugs in Lattice, gcc, egcs, Borland, Aztec, and probably a few more that I'm forgetting.

They're never common, but "it's never the compiler" doesn't apply either. It is, however, a helpful reminder that the most likely source of problems is with the author of the code :)


Care to explain the "bugs" you found?


Obviously broken code generation, in the sense that the code provably does not behave as it should under the spec. Usually as a result of optimization gone wrong.

If you're asking for actual examples - sorry, you're out of luck. The 80's and 90's are a bit too long ago for me to remember details.

If you prefer to believe in the magically flawless compiler fairy, knock yourself out.


Yep, I was going to comment that if you write your own compiler you'll find plenty of bugs, which is the same point.


Many years ago I found a bug in GCC. Reported it, it got fixed, I got thanked.


Yay, I've found a compiler error (and a hardware error too). Only one each, though, and it took me a while to be sure.


I think I had only a single compiler error in my entire career that wasn't about robustness (ex: code crashing the compiler instead of producing an error). Updating the compiler fixed it.

I'd rank the likelihood of an error like this :

- My fault

- API bug

- broken environment (ex: corrupted object files)

- hardware problem

- OS bug

- compiler bug

- bug in malloc()/free() ;)


clang/llvm makes the gcc developers look like amateurs when it comes to reckless optimizations for that extra percent in the synthetic benchmark. My go-to approach for when something breaks in clang but works in gcc is to disable optimization, and when I can narrow it down to a single offending function, that gets optnone slapped on it and done.
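For reference, a minimal sketch of that workaround (my own macro name), assuming clang's optnone function attribute; gcc has a roughly analogous per-function attribute:

    /* Disable optimization for just this function so the rest of the
       translation unit can still be built with -O2/-O3. */
    #if defined(__clang__)
    #define NO_OPT __attribute__((optnone))
    #elif defined(__GNUC__)
    #define NO_OPT __attribute__((optimize("O0")))
    #else
    #define NO_OPT
    #endif

    NO_OPT static void touchy_function(int *dst, const int *src, int n) {
        for (int i = 0; i < n; i++)
            dst[i] = src[i];
    }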


> reckless

... is in the eye of the beholder. They exploit every corner of undefined behavior to the benefit of performance. Indeed other compilers might be "safer."

Keep in mind that they also provide you with UBSan to help you detect when you're doing it wrong.


That argument would make more sense if they actually were faster, which they often aren't. Last I checked (and I don't keep up with this, to be fair) they still lagged gcc on most macrobenchmarks on x86.


> they still lagged gcc

Isn't the "they" in this thread gcc, or did I misread something?


I took "clang/llvm may be reckless, but they're fast and have UBsan", to which I replied "gcc is still faster".


Oh, I thought the GP was accusing GCC devs of reckless optimizations. But UBSan is a clang project, so maybe your interpretation makes more sense. Very confusing thread. :)


Sorry, I should've been more explicit when I responded to the original clang vs gcc comment.

Note that UBSan is (mostly? totally? originally?) a Google project and it's been ported to both clang and gcc.


I mean, GCC does recklessly optimize away (some) security/safety code. This is one of the reasons why we don't use it for code we ship. I work in a safety-conscious industry, where an overeager compiler can literally kill people.


Umh.. the compiler writers don't "exploit undefined behaviour".

The compiler writers write the compiler with the expectation that the program being compiled is correct i.e. to the specification.

Now when the program fails to be correct, optimizations can result in "weird" results, which is not really incorrect since the input was incorrect.
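A minimal, hypothetical C example of that kind of "weird result": signed overflow is undefined, so the optimizer is allowed to assume it never happens and fold a check away entirely.

    #include <limits.h>
    #include <stdio.h>

    /* With optimization enabled, a compiler may rewrite this as
       "return 0", reasoning that x + 1 can never wrap around. */
    static int overflows_if_incremented(int x) {
        return x + 1 < x;
    }

    int main(void) {
        printf("%d\n", overflows_if_incremented(INT_MAX));
        return 0;
    }

Building the same code with -fsanitize=undefined (the UBSan mentioned above) reports the overflow at runtime instead of silently relying on the assumption.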


UBSan catches undefined behavior at runtime - so by definition everything it reports was not used by Clang to fold your program to a constant.


When the sanitizer is on, the optimizer (modulo bugs of course) does not remove the checks.


> when I can narrow it down to a single offending function, that gets optnone slapped on it and done

If you're able to narrow it down that far why not then fix your bug?


Well, the last case I can recall where I had to resort to this used inline NEON assembly for what is essentially a fancy 2D memcpy. With optimization on, textures were corrupted. I spent an hour diffing objdump output and got nowhere.

Essentially, it was not worth more of my time to figure out how aggressive optimizations were breaking the parts of a memory bandwidth constrained function that aren't worth optimizing in the first place.

For GCC inline assembly means "take your grubby hands off". For clang it's essentially an invitation to do whatever.


> For GCC inline assembly means "take your grubby hands off". For clang it's essentially an invitation to do whatever.

That's not true at all. Clang respects the clobbers entry on inline assembly and does not do whatever it wants. See GCC's documentation on it: https://gcc.gnu.org/onlinedocs/gcc/Extended-Asm.html#Clobber...

Based on your description, you probably just needed to add the "memory" clobber to tell the compiler that any values it might have kept in registers are no longer valid.
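A minimal sketch of the point about clobbers (not the poster's NEON routine; the asm here is trivial ARM32 just for illustration): the asm writes memory behind the compiler's back, and the "memory" clobber is what tells the optimizer not to keep stale values cached in registers or reorder loads/stores across the statement.

    void store_flag(int *p) {
    #if defined(__arm__)
        int val = 1;
        __asm__ volatile("str %1, [%0]"
                         :                    /* no outputs */
                         : "r"(p), "r"(val)   /* inputs: address and value */
                         : "memory");         /* memory is modified */
    #else
        *p = 1;                               /* portable fallback */
    #endif
    }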


When I was just getting into the industry, I was offered a job by a team at IBM, working on some OS or other whose name I've forgotten. I was chatting with the head developer about the workflow. He told me that, as the OS was written in a custom programming language, and the OS was the only program ever written in this programming language, they found that roughly 50% of the bugs they encountered were in the OS code, and roughly 50% were in the compiler for the OS code.

I very politely finished the interview and then ran away as fast as I could and never looked back. I did not need that shit in my life. The world where it's always my fault is a better place.


As someone who's writing a programming language, I find this to be utterly terrifying. At least start with some small tools to make sure the basics work! Somehow I doubt they had a good testing suite either, which is so very important when creating a new language, to prevent "this won't hurt anything" fixes from creating a hidden bug far away.


If you hadn't mentioned IBM, I would've guessed this was Bell Labs, Unix and C.


IBM has built several operating systems and programming languages and compilers over the years. Probably more than Bell Labs, actually. It's not so surprising.


Unix and C are better known, so that would be my assumption too. In fact, I can't name any IBM-made OSes or programming languages.


AIX and z/OS are popular operating systems by IBM still in use, not counting things like System/36. Lime is a research PL at IBM; there is also APL, which was made at Harvard and later at IBM.


I can name a half dozen without googling. But, I'm kinda old and I remember when IBM was the dominant force in computing at nearly all levels.


FORTRAN. A pretty important one.


RPG


Comments here seem to indicate a lack of familiarity with IBM's contributions to our field. I used to work at IBM (as an OS architect) and I just thought that I would provide a few interesting tidbits of IBM's history.

IBM designs, builds, and sells computers. Once upon a time half the installed computers in the world were built by and designed by IBM.

## IBM in the Vacuum Tube Age[4] (1942-1963):

These computers were produced in small numbers and ran primitive OS's that did little more than load programs into memory and handle a few simple devices (card/paper tape reader and printer).

(1952) IBM 701 -- Defense Calculator (they still called computers calculators)

(1953) IBM 702 -- business computer

(1954) IBM 650 -- first mass produced computer, used a magnetic drum for memory, decimal architecture

(1954) IBM 704 -- first mass produced with floating-point

(1954) IBM 705 -- evolution of 702, for business

(1954) IBM NORC -- Navy Ordnance Research Calculator (most powerful computer in the world for 2 years, 9800 tubes)

(1956) IBM 305 -- first commercial computer with a moving head disk drive

(1958) IBM 709 -- FORTRAN first ran on this computer

## IBM in the Transistor Age[5] (1955-1968):

(1957) IBM 608 -- took years to develop, it was first demo'd in 1954

(1959) IBM 7090

(1959) IBM 1401 -- first computer to sell 10,000 units, eventually about half of the worlds computers were IBM 1401, see [1]

(1959) IBM 1620

(1960) IBM 4020

(1960) IBM 7070

(1961) IBM 7030 Stretch -- supercomputer

(1962) IBM 7094

(1963) IBM 7040

(1964) IBM 7094 II

(1965) IBM M44/44X

(1965) IBM 1130 -- IBM's least expensive computer at the time ($32,000 in today's dollars)

This is where programming languages and OS's get interesting. During this period manufacturers began developing OS's, programming languages, and compilers for their customers. Although some customers in the Vacuum Tube age wrote their own simple OS's to help run their machines, the computers of the transistor age had more capacity, were faster, and had a wider range of users. This meant that manufacturers had to provide system software with the computers they sold. Almost all system software of this time was written in assembly language. The first operating system code that I studied was in assembler. Some of IBM's operating systems from this time period (see [6]) were IBSYS (originally developed by General Motors to run their 701; IBM took over development of this OS) and the FORTRAN Monitor System for the 709, 7090, and 7094, a tape-storage-based OS.

I never got to program on any of the older vacuum tube systems, although I think I wrote some FORTRAN for a 1401 (it's hard to remember because, being a junior programmer, I rarely got to even touch these expensive systems; you would drop off your drawer of cards at a window for the "operators" to load into the computer, and then, perhaps 30 minutes later, you would pick up the printed output at the output window). I used the IBM 1130 in school and for a few of my first programming jobs in FORTRAN. This was with an IBM OS and an IBM FORTRAN.

## IBM in the Modern Computer Age

By the 60's IBM completely dominated the industry, which was described as "IBM and the seven dwarfs" (Burroughs, UNIVAC, NCR, Control Data Corporation (CDC), Honeywell, RCA, and General Electric); IBM's market share was larger than all the competitors combined. IBM continued to produce new architectures and manufacture computers in great variety. Perhaps the most important mainframe architecture in history was the IBM 360. I've programmed this computer in IBM COBOL and IBM assembly language.

The IBM/360 evolved into the IBM/4300, IBM/370, and IBM/390, and eventually to the z/Architecture in use today on large enterprise installations (see [2]). Over the decades the hardware changed drastically, and the evolution of operating systems meant that IBM had to produce many advanced operating systems to run these multi-million dollar systems. (In 1969 a typical IBM/360 Model 25, at the very low end of the 360 line, cost $1,753,000 in today's dollars; see [3].) Back in the early 70's I programmed on the IBM/360 in assembly language, PL/1, COBOL, and FORTRAN.

In the modern era, IBM also produced a number of midrange systems: System/3, System/34, System/32, System/38, AS/400, and the IBM i. These machines ranged from the low end System/3 designed to run IBM's report generation language RPG to the pioneering System/38.

The System/38 was an interesting machine with an IBM OS that contained an integrated relational database management system. The System/38 didn't have files; it had a single-level store, where the distinction between memory and external storage disappeared and both were treated as a single unit. The System/38 was also one of the only commercial computers using capability-based addressing. This unique hardware/OS combination supported programming in IBM's RPG III, COBOL, BASIC, and PL/1.

Up until around 1984, IBM designed and wrote its own operating systems for all of its important computers. IBM had a number of important research projects going on in the 1970s, and IBM's John Cocke is credited with inventing RISC, although others were also working with these ideas around that time (e.g. the CDC 6600, designed by Seymour Cray in 1964 -- I still have the book I used to learn assembly language programming for the CDC 6600, a very interesting machine). John Cocke's work resulted in an experimental system known as the IBM 801. This inspired the development of the IBM POWER instruction set processors. Due to internal competition, the POWER/PC chip was turned into a system that supposedly would not threaten the important mainframe business.

The original Power/PC running Unix (IBM's AIX version) ran on the new hardware design and was supposed to fit in between the IBM/PC business and the IBM mainframe business. The hardware was brand new and being developed when I joined IBM. The OS was to be Unix based, and despite the pressure to "make it better", most of the architects fought to keep it consistent with AT&T and Berkeley Unix. Unfortunately, the hardware was new, and the only compilers that could take advantage of the RISC instruction set were research compilers for PL/8 (a system programming language that was supposedly the 80% of PL/1 needed for system programming). Because the hardware was not stable, development of the OS was done in two layers: a low-level hardware layer coded in PL/8, and the rest of the kernel coded in C.

It was a huge undertaking (from my perspective as a new OS architect), but it worked. AIX/Unix proved itself within the corporation, and even the mainframe groups have adopted Linux and AIX for some of their systems. I left IBM to start my own company, but I learned a lot working during those exciting times.

A computer that wasn't that important to IBM was the IBM PC. It wasn't originally considered to be interesting enough to even cover its hardware with adequate patent protection, leading to a proliferation of hardware-compatible competitors and, indirectly, to the age of personal computers. Even in the IBM PC world, IBM developed OS/2, an operating system that briefly competed with Windows.

## IBM Software

All of these machines had their own, often unique, operating systems. These were written in a variety of languages. IBM's PL/1 programming language lived on in the early versions of the low-level kernel of AIX; I don't know if it is still there.

IBM has, in my humble estimation, written more OS's, written more compilers, and built a wider variety of computers than any company in history. However, I'm open to evidence to the contrary.

[1] https://web.archive.org/web/20101105110749/http://www.comput...

[2] http://www-03.ibm.com/ibm/history/exhibits/mainframe/mainfra...

[3] http://www-03.ibm.com/ibm/history/exhibits/mainframe/mainfra...

[4] https://en.wikipedia.org/wiki/List_of_vacuum_tube_computers

[5] https://en.wikipedia.org/wiki/List_of_transistorized_compute...


That was an amazing write-up, thank you! That people are surprised IBM made operating systems makes me feel very old.


I bet it was one of their mainframe OSes; they are all written in PL/I variants like PL/8 and PL/S.


This is great advice, but needs a caveat. I'd say it this way:

The less battle-tested a part of the stack is, the more likely it is to contain a bug.

Suppose I'm using a database library written in Elixir to run a PostgreSQL query and I get a weird result. Where is the problem?

PostgreSQL has executed billions of queries in thousands of projects for tens of years and has many contributors.

The Elixir library is much newer and has many fewer contributors and has been used much less.

My brand-new query has been executed a handful of times by me and has only been looked at by me.

The chances are overwhelmingly high that the bug is in my code. If I can't find it there, the next candidate is the Elixir library. Only after exhaustive search in those two should I consider that PostgreSQL may have a bug.

Or to include more of the stack:

    my code > Elixir lib > Elixir > Erlang > PostgreSQL
The author's own anecdote with finding a bug in JavaScript's `sort()` illustrates this principle: it was not in (say) the Firefox or Chrome JS interpreter, but in a much less-used one: https://github.com/code-dot-org/JS-Interpreter/pull/23


This is a great way to put it, and very practical. Gives you a ranking of possible suspects, rather than completely ruling out any of them. You're allowed to suspect the compiler but only if other leads have been exhausted.


I definitely found a compiler error in an early version of Julia, luckily the solution was to upgrade to the newer release that I'd been too lazy to bump to.


Good example. "An early version of Julia" is a less-tested piece of your stack than, say, "a recent version of Java". Especially if the feature in question is something absolutely everyone uses in Java.


And the feature with the bug is a little-used part of Julia.


The new implementation is still wrong, according to the specification.

Array.prototype.sort does not sort an array of numbers as expected. In JavaScript, [2, 10].sort() results in the array [10, 2].

As MDN points out, "The default sort order is according to string Unicode code points... If [compareFn is] omitted, the array is sorted according to each character's Unicode code point value, according to the string conversion of each element." [0] It has an example showing off the unintuitive behavior right at the top of the page.

This behavior is intended per the original ECMAScript specification[1, pg. 68]:

  When the SortCompare operator is called with two arguments 
  x and y, the following steps are taken:
  1. If x and y are both undefined, return +0.
  2. If x is undefined, return 1.
  3. If y is undefined, return −1.
  4. If the argument comparefn was not provided in the call 
     to sort, go to step 7.
  5. Call comparefn with arguments x and y.
  6. Return Result(5).
  7. Call ToString(x).
  8. Call ToString(y).
  9. If Result(7) < Result(8), return −1.
  10. If Result(7) > Result(8), return 1.
  11. Return +0.
The key points are items 4, 7 and 8.

[0] https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...

[1] http://www.ecma-international.org/publications/files/ECMA-ST...


Isn't that a separate issue? The article was pointing out that [5, 4, 1, 5] was sorted as [4, 1, 5, 5], which is wrong no matter if you're sorting numbers or strings. The bug was the algorithm taking one step too few, and the fix was reducing the end trigger by one, which has no effect on what sort order is used. I'm seeing an actual bug that was actually fixed, and any breaks from the spec -- real though they might be -- are better brought up to the maintainers themselves than posted on something only tangentially related.


What an awful gem of surprising behavior by design.

Definitely going to add this to my contempt-for-JavaScript list.


It's the worst option except for all the others, I think. JS arrays can contain anything, so if the default behavior was to compare elements as numbers, sorting an array containing mixed types would have unpredictable results (because of comparisons to NaN, etc).

Naturally things are only weird for untyped arrays - calling sort() on e.g. an Int32Array does what you'd expect.


Python throws an exception for incomparable types. I guess that's not really practical on the web.


why is it better to silently fail than to throw an exception simply because it's running on a web browser?


Back in 2010 I wanted to try out Scala. I fired up the REPL and did this:

    scala> 1+2
    res0: Int = 12
Here is a blog post I wrote about it at the time: https://illuminatedcomputing.com/posts/2010/11/no-luck-tryin...

I was running under Cygwin and the issue was something about the Java-based readline library.

I filed a ticket (linked in my blog post but the link is broken now), but the maintainers' response was something like "Yeah right. Closed."

You'd think compiler bugs would be more obscure than 1+2. :-)


This reminds me of my favorite "bug" in my language. I was trying to show off string representations of arithmetic expressions, so I typed in this.

    <feep> .neatex string-of 2+2
    <feepbot> 23
That took a little figuring! Turns out, I'd forgotten my own precedence rules, and it'd interpreted it as (string-of 2) + 2; string-of, being a compile-time operator, yielded a string literal "2", which then implicitly cast to a string pointer, which then got incremented by two right past the end and into the next string, which happened to be "23".


The issue is now on github: https://github.com/scala/bug/issues/4009


> but the maintainers' response was something like "Yeah right. Closed."

That sounds pretty horrifying! But reading the issue (linked from the GitHub issue linked by a sibling comment), the actual response seems pretty reasonable. He looked into it and determined it was an issue with your setup, and asked for more information (if he couldn't reproduce it, what else could he do?). He also made a guess at what was wrong. You provided more information and a workaround (nice!), and a few weeks later he closed it, saying he believed it was now fixed. Seems OK to me.


> I was running under Cygwin and the issue was something about the Java-based readline library.

Apparently the problem is that the '+' character was being thrown out, so Scala only saw '12'.


This isn't a bug in the compiler... it's a bug in the readline library.



This isn't related…


I seem to be cursed with the ability to crash any language interpreter I play with. My favorite so far was when I tried Smalltalk, mixed up / and //, and asked for a window that was 3/2 pixels wide (maybe not 3/2, but some non-integer rational width). The maintainers: "Wow! That bug must have been lurking in the bytecode interpreter since Xerox PARC."


You might think this is a curse, but as someone writing a language, people like yourself are a blessing. Exercising the weird cases that the original people overlooked and filing a bug is really, truly helpful.


I have worked in a domain where "it is often a compiler error" -- scientific simulation. In particular, Intel's C and Fortran compilers generally produce the fastest binaries on Intel hardware but they also seemed to break builds a lot. The answer to "the validation suite no longer passes" was often "use this specific point release of the compiler."

I wasn't personally developing the core simulation software itself at the time, just building it from source and writing extensions. I don't know if these frequent breakages were actually outright compiler bugs or just relying on conventions that weren't guaranteed by language specs. But I do know that switching to one particular blessed compiler revision was often the fastest way to pass validation and move on to your actual research problems.


Having to use a specific point release of the compiler does not necessarily mean later versions have a bug. It could also mean that the program is (accidentally) depending on implementation details that are not guaranteed by the language, and that the compiler is free to change. For languages like C, with a significant amount of undefined behavior, that's actually quite likely.


IIRC the Intel compiler has the equivalent of gcc's -ffast-math on by default, i.e. it explicitly breaks some of the C and IEEE 754 floating point specs in some places in exchange for performance.
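A minimal example of the kind of spec-breaking that implies, assuming gcc/clang semantics: -ffast-math turns on -ffinite-math-only, which lets the compiler assume NaN never appears, so a hand-rolled NaN check can be folded away.

    #include <stdio.h>

    /* Under strict IEEE 754, x != x is true exactly when x is NaN.
       With -ffast-math the compiler may fold this to 0. */
    static int looks_like_nan(double x) {
        return x != x;
    }

    int main(void) {
        double x = 0.0 / 0.0;                /* produces a NaN */
        printf("%d\n", looks_like_nan(x));   /* 1 normally; may be 0
                                                under -ffast-math */
        return 0;
    }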


I was lead on a project that was making heavy use of the HTML5 filesystem API. We were transiting multiple GB through the API in a single session.

The program had a memory leak. It was about the same whether it was deployed through Cordova on iOS or node-webkit on OSX or Windows.

There was a _lot_ of code in this thing. A lot of code within the content (which was written by another vendor) and a whole bunch of code in our wrapper that was there to emulate legacy behaviour of the old ObjC app it replaced.

We assumed the leak was somewhere in our code or the other vendor's code. We spent _months_ trying to figure this thing out. I spent countless hours looking at heap dumps. Going over every bit of dumb code to try and figure out what was wrong. This was a big undertaking with a lot on the line from our client.

Then one day we upgraded the version of node-webkit. It happened to include a newer build of webkit. The bug _vanished_ on the OSX and Windows builds. That was our big break. All of a sudden the possibility of a compiler bug was on the table.

We ended up tracking it to a bug in webkit's implementation of (iirc) the html5 filesystem API. Nobody used it much and as far as I can tell, we were the only ones using it as heavily as we were.

Happy days for OSX and Windows, but the iOS build was still horrible because iOS was using a webkit build from before the patch.

We begged and pleaded with Apple to give us some indication of whether or when the build would be updated. wkwebview came and it was still busted.

The project ended up being closed down, in no small part due to the iOS bugs. I moved on. I have no idea to this day whether or not the iOS build has been updated or not.


You sure this wasn't just karma for attempting something as absurd as using Node and HTML5 FileSystem in a native iOS app?

BTW, what you're talking about isn't a compiler error.


No node in the iOS app, just javascript.

It was pretty out there to use the HTML5 FS API the way we were. It allowed us to share a _massive_ amount of our codebase between iOS, OSX, Windows and later Android though.

The development efficiencies we gained from doing so were pretty incredible. Given the budgetary constraints on the project, if we hadn't pulled off the code sharing piece it would have been shut down a lot sooner.


> BTW, what you're talking about isn't a compiler error.

Neither was the blog post:

> [ Addendum 20171113: Yes, yes, I know sort() is in the library, not in the compiler. I am using "compiler error" as a synecdoche for "system software error". ]


That's not even what that word means.


Yes it is. A "synecdoche" is where you use a part to refer to the whole (or vice versa).


This comment has me rather confused. iOS updates webkit at the same time that macOS does; namely, when releasing a major OS update. And they both use the same webkit too. It should not be possible for a webkit bug to be fixed in macOS but still be unfixed in iOS after a major version update (and since you say "wkwebview came" this means there was a major version update).


The OSX app wasn't using Safari's webkit. node-webkit bundles its own copy of webkit.

wkwebview quite possibly brought a major version update, but it sure didn't include the patch we needed. Whether that was because Apple works from a fork that didn't include that patch or the major version was still older than the version we needed, I don't know. There was just no way to get that kind of information out of them.


Could node-webkit have been using a fork? Even if Apple is using a fork internally, surely they must sync up with the public WebKit project periodically, and especially when releasing new major OS versions.


The patch we wanted was in mainline WebKit. Once we knew what we were looking for we found the issue in their bug tracker pretty quick.

iirc there was some way to figure out what version of WebKit a given iOS device was using and it was definitely pre patch.

Surely they must sync up, but when? Unless you work inside Apple there's just no way to know. Pretty frustrating as a developer trying to build for their platform.


node-webkit bundles its own instance.


Cordova would implement the filesystem API for iOS, not WebKit, IIUC.


"The compiler did not have a bug. The compiler never had a bug. The bug was always in the programmer's code and usually in their understanding of the language... It was caused by a misunderstanding of the way arguments to unprototyped functions were automatically promoted..." - I'd say the bug is neither in the compiler nor in their understanding of the language but in the language itself then. Why on Earth do we need a language that we can't use safely without "understanding of the way arguments to unprototyped functions are automatically promoted"? IMHO the number-one principle to be followed in every sane language design is the "principle of least surprise" (POLA). If the language can surprise you easily and lets you write weird code that is going to work a way different from what one will probably assume intuitively - that's a language made wrong.


> Why on Earth do we need a language that [...]

Have someone sit down with you and explain how slow computers were in the 1970's. C won because it could be efficiently implemented on everything, produce better code than pretty much anything else, and still produce effective software quickly. And everything after that is a network effect because of all the inherited C.

Your complaint has been repeated ad nauseum by literally generations of programmers who simply can't imagine why everyone did things this way, and who then go on to productive careers writing or integrating or just living with software still written in C.

If you want to change the world, you need to stop pontificating and roll up your sleeves. There's a whole lot of code to write in rust, or whatever your particular brand of zealotry demands.


Mainframes had bounds checks. They didn't cost much, but C omitted them. Some of the undefined behavior in C, particularly around accesses and overflow, could be changed to implementation-defined to fix some of the more footgunny parts. Until C99, C could not vectorize nearly as much as Fortran. C won because of Unix network effects and the PDP, not because it was that good a choice.


Surely someone had checked loads. The IBM 360/370 did not (unless it was some optional thing they sold -- certainly it wasn't part of the basic architecture), so "mainframes" seems rather spun. Likewise, yes, C's floating point calling conventions sucked, and Fortran continued to see use in numerics work for a long time. But at the same time no one was writing performance-sensitive vector code for their Crays in high-level languages, ever. And frankly no one even thought to until GPUs made low-level code impossible.

I'll say it again, C won because:

1. Compilers could be (and were!) written for any byte-addressable architecture by one hacker in a basement...

2. ...that produced output code quality comparable to what you could get anywhere else in the industry

3. ...and take advantage of the huge community of C hackers.

Different languages running on supercomputers and non-360 mainframes wouldn't have changed any of that.


1 and 2 also seem true for Pascal. Network effects are important but they would have worked for any language. Tony Hoare discussed bounds checking on the Burroughs mainframe.


> "principle of least surprise" (POLA)

I'm astonished that "principle of least surprise" would be written "POLA".


> Why on Earth do we need a language that we can't use safely without "understanding of the way arguments to unprototyped functions are automatically promoted"?

Backwards compatibility and uses that are niche but vital. Programmers insist on using parts of languages long after they're discovered to be bad ideas, or when they're included in the language for very specific use cases that aren't relevant to 99% of programmers. Those parts of languages are catnip for "real" programmers.

"You'd be foolish to use this technique, and you'd have to be stupid to need it for this application."

"That's a matter of opinion, but it's fact that I'd have to be clever to make it work."

Can't argue with someone who thinks like that.


Relevant quote:

Everyone knows that debugging is twice as hard as writing a program in the first place. So if you're as clever as you can be when you write it, how will you ever debug it?

Brian Kernighan


What’s surprising for you might be hardly surprising for someone else. Intuitively, `0.1+0.2==0.3` should be true in every language. Now fire up some REPLs and test that, and the number of common languages where it is actually true can be counted on one hand.
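It's easy to check without a REPL, too; a minimal C sketch (and the same holds for anything using IEEE 754 doubles), since 0.1 and 0.2 have no exact binary representation:

    #include <stdio.h>

    int main(void) {
        printf("%d\n", 0.1 + 0.2 == 0.3);   /* prints 0 */
        printf("%.17g\n", 0.1 + 0.2);       /* prints 0.30000000000000004 */
        return 0;
    }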


Paragraph breaks, please.

> If the language can surprise you easily and lets you write weird code that is going to work a way different from what one will probably assume intuitively - that's a language made wrong.

I agree that much of what is unhelpful about C could be changed without compromising its principles. Implicit conversions, for instance, and implicit fall-through.

Fortunately there are compile-time warnings and run-time checks (such as Clang's 'ubsan') to take the edge off.


I'm pretty confident that every language is made wrong, then. Unless you go super minimal, there's just no way to guarantee that features won't interact in surprising ways for some given person's expectations.


Some people seem to believe that you shouldn't use a language unless you're capable of writing the language yourself.


They should have never started using their mother tongue then


I've only ever hit one compiler bug that was really a bug: a C compiler for 6805 microcontrollers would only produce a correct function-start if the function took 2 bytes or less of arguments. That particular project includes a surprising number of global variables :-)

Not a "compiler bug" but a "gosh I wish some people who are working on a project voluntarily that I'm not paying for would do a crap-load of work" bucket: for a while GCC could either compile a program with threads or with exceptions. If you had both threads AND exception code (even if you never threw an exception), the emitted code would eventually crash.


About 9-10 years ago, I worked at a company maintaining an application written in C. There were three other programmers there who had been with the company from the beginning, which at the time was ~15 years. They used the OpenWatcom compiler.

A few days in, I discovered that they compiled all their code with all compiler optimizations disabled. They had done so ever since they encountered a bug in the compiler/optimizer where the compiler would somehow think that a boolean expression involving floating point numbers in the head of an if-statement would always evaluate to true or false and thus optimize away either the if-part leaving only the body, or remove the whole if-statement.

I did a little bit of research and found what I think was that bug in the release notes to an earlier version, but even so the other programmers were adamant in keeping optimizations disabled.

(FWIW, I once did a build with all reasonable optimizations enabled, and the improvement was minuscule. Either our code sucked so hard the optimizer could not do much about it, or it was so good there was nothing left for the compiler to optimize. I doubt it was the latter, though. ;-) )


...or the compiler had a worthless optimizer?


Maybe.

Watcom used to have a reputation for generating fast code, but that was in the 1980s, early 1990s. The impression I got was that by the time I started working at that company, OpenWatcom was mostly around because it was very good for compiling code for DOS.

It supported all of those weird memory models from the days of segmented memory, it had its own 32-bit protected mode which I assume many programmers back then welcomed enthusiastically.

Plus, I read somewhere that OpenWatcom is practically the only compiler that can create DOS executables to be stored in ROM (for embedded / industrial applications, I guess). Or something like that.



I immediately thought of this work when I saw the title. Of course, you're actively and effectively hunting down bugs, not accidentally running into them while doing other things ;)


Yes, but for example I remember at least one anecdote where a bug we encountered manifested itself as a crashing bug in x264 or ffmpeg (I forget which), and ours was closed as a duplicate, so people do actually encounter these bugs.


Back when I wrote software for GSM mobile phones (2000-ish), we spotted several compiler errors in our "Infineon E-Gold" C compiler.

If you declared something like :

    int a = 1 + 2 + 3;
The result would simply be "a=3". Turned out the compiler threw out any argument besides the first two.

Another error would be the compiler failing to increment the segment pointer (16 bit platform), and the offset pointer simply wrapped around. This didn't show itself until we added a new 120x90 pixel four color display, meaning our display shadow buffer grew to a staggering 20 KiB, and depending on the build, the display buffer would become corrupted.

Certification for GSM compilers is expensive, so we just learned to work around it, i.e. we reimplemented our stdlib memcpy, memmove, strcpy, etc functions in assembler.


> "Infineon E-Gold" C compiler

Here's your problem

Hence why GCC ate most of the compiler market

Hardware companies building software usually goes badly unless they're really, really good.


Wrapping of offset pointers in this manner is usually a (documented) feature, not a bug. At least on DOS compilers, where this is also usually configurable (this is part of what the "C memory model" option of 16-bit DOS C compilers is about).


The company closed the Danish branch in 2003, so it's been 15 years and I can't remember the compiler options we used, but we looked deeply into the problem back then, and it was acknowledged as a compiler error (at the time).

The offset pointer problem, however, was the least troublesome. The problem with disappearing arguments took a long time, and a Lauterbach hardware debugger, to figure out :)


This goes for other things as well, not just compilers.

As of this moment I am begrudgingly creating a support ticket with AWS for an issue which I am 99.99999999% confident is in our VM and has nothing to do with AWS itself.

For some reason, "AWS issues" is one of the first things raised, even though we are unlikely to be hitting AWS edge cases, not being Netflix-scale.

Honorable mention: "Kernel issue"


AWS is nowhere near as reliable as a compiler. We've seen things ranging from "S3 intermittently takes 60 or 120 seconds (or some multiple thereof)" (but apparently only for my buckets…) to spatial indexes in Aurora just flat out not working when they were released, to S3 acknowledging writes before they are available to be read (causing a reader who picks up the item to receive a 404), to ELBs corrupting the HTTP connection depending on packet timings.

That's aside from the more common issues of "this VM is just going to bite the dust", which is something I feel is just inevitable.

The kernel is more reliable, IMO (except when it comes to the Thinkpad TrackPoint drivers).


> S3 acknowledges writes prior to them being available to be read (causing a reader who picks up the item to receive a 404)

That's exactly documented behavior: http://docs.aws.amazon.com/AmazonS3/latest/dev/Introduction....


Huh. TIL. I swear we searched the docs when we ran into that, and found the same consistency guarantee minus that rather large caveat, but Wayback says that was there at the time. Perhaps we were looking at a different page, but it does appear this is and was documented behavior.

That's a hugely unhelpful caveat in practice, though. I'd really love to have consistent reads.


And it's very visible in practice. In one case we had a bucket with about 50,000 documents posted to S3 with 204 Created replies. A separate process that runs to verify the uploads found 10 or so outliers taking 30s or longer for a GET to see them. One took 10 tries spaced 30s apart (so 5 minutes) to show up. In the next 50k, the biggest outlier was only 5s.


Of course, there's also the point that if utterly pathological behavior is documented, then it's not a compiler error. I remember a c++ compiler that had a hard limit on the number of objects allowed in the programs it compiled. It's documented, "not an error".

The compiler actually crashes rather than returning anything? Well, that's more likely to happen when your code has lots of errors in it. "Don't look at me like that. Go fix your code and then come and complain..."


Three compiler codegen bug war stories:

1. libdivide found a gcc 4.8 vectorizer bug in high multiplies. https://gcc.gnu.org/bugzilla/show_bug.cgi?id=61108

2. Circa 2001, I decided to learn Java. My first-ever Java program performed a right shift by -1 (for some reason I can't recall but was probably really stupid). The spec requires this to be equivalent to >>31, but the compiler wrongly optimized it to >>0. My confused post can probably still be found in comp.lang.java.help.

3. I wrote a Python script that generated a large set of test cases for an FFI (NSInvocation). The test crashed inexplicably on PowerPC64 with a nonsense backtrace. Turns out the PPC64 jump instruction has a 24-bit literal offset, and my functions were so large they overflowed it. That got fixed (also in gcc).


Funny that he mentions that any bug in perl's if statement would be caught within a year.

I had a project that lost $150,000+ because of a bug in perl's sprintf (but not printf, mind you).

https://rt.perl.org/Public/Bug/Display.html?id=82418

Yep, that ticket was open for well over a year.


We just had the article on what a nightmare Unix Locale is.


I dislike the glossing over of the distinction between a compiler bug and a library bug. For me there's a definite spectrum:

* OS - I've never found an OS bug that wasn't widely known

* Compiler - I've found 2 compiler bugs in my career to date (both of which crashed the compiler, making this very clear cut)

* Standard libraries - I've found a moderate handful of bugs in standard libraries, somewhat proportionate to the code surface.

* Third party libraries - I rarely use a third party library without exposing a bug.

* Applications - I find bugs in almost all of the applications that I use.

* My own code - Most of my professional life is spent debugging my own code.


> The sort() function was using a bubble sort. (This is of course a bad choice, and I think the maintainers plan to replace it.)

There are some valid use cases for Bubble Sort with small and almost-sorted arrays (although Array.prototype.sort has to be a very general solution, so I suppose it's not a good fit there). I've seen this in JavaScriptCore, where they use Cocktail Sort (a variant of Bubble Sort) for sorting in some situations. They also added some performance numbers and an explanation in [1] that show it's faster than e.g. std::stable_sort. I actually tried to write an Insertion Sort (which is supposed to be better than Bubble Sort), but couldn't get it to perform better than their Cocktail Sort implementation.

[1] https://github.com/WebKit/webkit/blob/master/Source/WTF/wtf/...
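For anyone unfamiliar: cocktail sort is just bubble sort that alternates direction on every pass, shrinking the unsorted window from both ends. A rough JS sketch of the idea (not JavaScriptCore's actual implementation):

    function cocktailSort(a, cmp = (x, y) => x - y) {
      let lo = 0, hi = a.length - 1, swapped = true;
      while (swapped) {
        swapped = false;
        for (let i = lo; i < hi; i++)      // forward pass bubbles the max up
          if (cmp(a[i], a[i + 1]) > 0) { [a[i], a[i + 1]] = [a[i + 1], a[i]]; swapped = true; }
        hi--;
        for (let i = hi; i > lo; i--)      // backward pass bubbles the min down
          if (cmp(a[i - 1], a[i]) > 0) { [a[i - 1], a[i]] = [a[i], a[i - 1]]; swapped = true; }
        lo++;
      }
      return a;
    }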


Doubt whoever wrote the library sort() was optimizing for long tail use cases.


My favourite bug came from a university assignment on autonomous robotics. We had to build a small robot using various sensors, such as some whiskers which could be used to detect some metallic tape stuck to the floor of the arena.

One of the other teams found that their robot kept freezing after a few minutes of moving around the arena, I assumed this was due to exhausting available file handles or some other bug in their code. But after some time they realised that the bug only happened when the metal castor they had used on back of their robot touched the metallic tape.

Sure enough, when they replaced the castor with a bit of lego their robot continued to run perfectly for the duration.

The Lego construction of the robot must have been building up some static charge, which then discharged when running over the metallic tape, taking out the ARM computer in the process.

I'd have never believed that one unless I'd seen it with my own eyes.


When using a new version of a compiler, and existing code stops working, it is often a compiler error. When different compilers give different results, it is often a compiler error.

I've hit dozens or hundreds of these, across many languages for decades.

It's not the first thing to check, but once you verify the code is logically correct, that can be the only conclusion.

Compilers aren't excessively complex code, but all code has bugs.


> When using a new version of a compiler, and existing code stops working, it is often a compiler error. When different compilers give different results, it is often a compiler error.

To be fair, sometimes that's because the compiler became more strict and your code had an error that wasn't being caught. Sometimes it's the compiler taking a more liberal stance on what "undefined behavior" is for performance reasons. In both cases, that would still be a programmer error (and I think those cases are likely more common than an actual compiler error).


At my previous job a colleague showed me some C++ code and asked if I could help him spot the issue. I looked at it and said it looked fine. He ran the program and it crashed.

We ended up staying late in order to try to find a minimal reproducible example. We finally managed and found that the cause was a basic trigonometric function in the standard library giving the wrong result.

At first, we couldn't believe it. Then we googled the issue and found a bug report about the intrinsic version of that function not working properly. With Microsoft confirming that it didn't and that they weren't looking to fix it. We disabled intrinsics in that section of the code and the original code worked as intended.


I have hit a number of C/C++ compiler bugs in GCC, MS VC and CodeWarrior, and various bugs in other languages (for example shader compilers). I agree it is about the last thing you blame, but at the same time, sometimes it really is the compiler, so I get a couple of sanity checks from skilled friends or colleagues, make a tiny repro, submit and move on!


Am I the only one who was expecting to read something about compiler errors but instead found an article about a bug in a JavaScript polyfill library?


Certainly I've seen compiler bugs.

1) Borland Turbo Pascal, where code would not execute correctly in the presence of a comment near the top of a loop. Deleting the comment caused the correct loop execution.

2) Texas Instruments DSP C-compiler sometimes failed to unroll loops correctly. Certain loop forms needed to be avoided as a result.

3) Recently I've been working on a C-compiler written in Golang, and the grammar definition was raw enough that it definitely had bugs. The bugs were fortunately obvious in that it would refuse to compile valid C code.

Of course, 99.98% of the bugs were mine.


I'm more shocked that bubble sort is used in production somewhere...


A common optimization is to use insertion sort for small arrays (fewer than 64 elements or so) and quicksort/heapsort/introsort for larger ones, usually because the algorithms that are better in Big O terms have worse constant factors.
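Something like this, sketched in JS (the cutoff and the pivot choice are arbitrary here; real libraries tune both):

    const CUTOFF = 64; // below this, insertion sort usually wins on constant factors

    function hybridSort(a, lo = 0, hi = a.length - 1) {
      if (hi - lo + 1 <= CUTOFF) return insertionSort(a, lo, hi);
      const p = partition(a, lo, hi);      // plain Lomuto partition around a[hi]
      hybridSort(a, lo, p - 1);
      hybridSort(a, p + 1, hi);
      return a;
    }

    function insertionSort(a, lo, hi) {
      for (let i = lo + 1; i <= hi; i++)
        for (let j = i; j > lo && a[j] < a[j - 1]; j--)
          [a[j], a[j - 1]] = [a[j - 1], a[j]];
      return a;
    }

    function partition(a, lo, hi) {
      const pivot = a[hi];
      let i = lo;
      for (let j = lo; j < hi; j++)
        if (a[j] < pivot) { [a[i], a[j]] = [a[j], a[i]]; i++; }
      [a[i], a[hi]] = [a[hi], a[i]];
      return i;
    }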


Last time I looked at the source for Java's dual pivot quicksort implementation, it used exactly this for 7 elements or fewer


Yes, and qsort does similar (different algorithms for different array sizes but the same function interface).


For me it's that they aren't testing it. This sort of functionality is so trivial to write unit tests for, so simple that it's the fastest and easiest way to write and debug the sort method.

It'd make me question the stability of the rest of their product too.


Actually, that's a fascinating part of the story. We run the entire test262 ECMAScript Conformance Suite (https://github.com/tc39/test262) against our JS-Interpreter fork. We don't pass the whole thing yet, but it ensures we only ratchet toward conformance to the language spec.

We checked, and this particular bug was not caught by the test262 suite - the broken sort worked for enough cases to slip past the Array.prototype.sort tests there. We've added a regression test for this issue in the layer that consumes the interpreter, but I've thought about proposing more aggressive validation of sort behavior to the test262 project.

It was a great reminder to me that while tests may be necessary for quality, they are not always sufficient!

Brad, Code.org Engineering


This is in a custom interpreter (used for teaching I think)

https://github.com/code-dot-org/JS-Interpreter


I think we need to get away from this attitude, that I used to hold, that O() is everything. It isn't. It isn't even close. We should instead wonder why an algorithm was chosen without regard for the nature of the data on which it will be applied. For sorting, O(n lg n) only makes sense as a selection criterion if you have absolutely no idea about what data is being sorted. Why is that?

If the answer is you just don't care about the performance and can say why it isn't important (at least yet) as long as you avoid the pathological, then that's a good answer. It's a really bad answer to assume it when you can't. IMHO.

Others have pointed out that bubble sort is great when your data is already mostly sorted. The textbook example is check stub ids, which are sequential - probably an outdated example nowadays. The benefit is greater than an O() analysis would have you believe because running sequentially through contiguous memory is something computers are just great at doing due to memory caching and prefetching. Minimising cache misses /may/ dominate the number of passes of the data in performance analysis. If you don't care about performance then an O(N^2) algorithm may also be something you don't care about.


> Others have pointed out that bubble sort is great when your data is already mostly sorted

It really isn't, compared to insertion sort. https://users.cs.duke.edu/~ola/papers/bubble.pdf shows insertion sort performs 5x better. I also like Knuth's Quote in TAOCP: "[A]lthough the techniques used in the calculations [to analyze the bubble sort] are instructive, the results are disappointing since they tell us that the bubble sort isn't really very good at all. Compared to straight insertion […], bubble sorting requires a more complicated program and takes about twice as long!"

> running sequentially through contiguous memory is something computers are just great at doing due to memory caching and prefetching

Name one sorting algorithm that doesn't involve purely running sequentially through contiguous memory.


You're misidentifying the attitude. The problem is that Bubble Sort is a terrible algorithm; there's no reason not to use Insertion Sort or better instead, except that for some reason CS programs still inflict BS on students, so that's what they remember when they just need to sort something.

http://warp.povusers.org/grrr/bubblesort_eng.html

http://warp.povusers.org/grrr/bubblesort_misconceptions.html (Bubble sort is not great when your data is almost sorted. Insertion sort is, though.)


Regarding the second link:

"Bubble sort always performs n-1 passes through the data (where n is the amount of elements), and it always performs (n-1)+(n-2)+...+1 comparisons regardless of how the data is organized to begin with."

That is just false. For already sorted data you get 1 pass and n-1 comparisons, and then it stops. An implementation that doesn't show that characteristic has been either deliberately or accidentally pessimized to behave poorly. You can gimp any algo to make it look bad. Quicksort without randomization on a pathological case is O(n^2), for example, and nobody cares for the overwhelming majority of practical purposes. If your definition of bubble sort is that it must be O(n^2) always then we're not using "bubble sort" to mean the same thing and it's pointless to continue the discussion.

Bubble sort usually operates on data in place. I don't know how to do an efficient in-place selection sort for mostly sorted data, maybe it's possible? You pay for additional copies in both space and CPU cycles. 2xn space for selection usually - but maybe this can be avoided without paying a very high time overhead (e.g. find the spot, ripple all data below the spot down to the hole where you selected to open a slot for insert, or similar).

Count the memory operations, ie number of memory reads and number of memory writes for a given set of data comparing the two algorithms.

Suggestions: Write 50-odd lines of C code to see the actual effect in CPU cycles. Craft data to make each algorithm dominate the other; it's really interesting to see.

I'm disinclined to say "Never use algo X, always use algo Y!" because it's really hard for that to be sensible given all the various shapes and sizes input data takes. Know your data is always really, really good advice.
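For the curious, here's a rough version of that experiment, in JavaScript rather than C and counting comparisons and swaps instead of CPU cycles:

    // Count comparisons/swaps for early-exit bubble sort vs insertion sort.
    function bubbleStats(a) {
      a = a.slice();
      let cmp = 0, swaps = 0, n = a.length, swapped = true;
      while (swapped) {
        swapped = false;
        for (let i = 0; i < n - 1; i++) {
          cmp++;
          if (a[i] > a[i + 1]) { [a[i], a[i + 1]] = [a[i + 1], a[i]]; swaps++; swapped = true; }
        }
        n--;                                 // last element of the window is now in place
      }
      return { cmp, swaps };
    }

    function insertionStats(a) {
      a = a.slice();
      let cmp = 0, swaps = 0;
      for (let i = 1; i < a.length; i++)
        for (let j = i; j > 0; j--) {
          cmp++;
          if (a[j] < a[j - 1]) { [a[j], a[j - 1]] = [a[j - 1], a[j]]; swaps++; }
          else break;
        }
      return { cmp, swaps };
    }

    console.log(bubbleStats([1, 2, 3, 4, 6, 5, 7]), insertionStats([1, 2, 3, 4, 6, 5, 7]));

On already-sorted input both report n-1 comparisons and no swaps; on random arrays the bubble comparison count climbs to roughly double the insertion count, which lines up with the Knuth quote elsewhere in the thread.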


The other reply thread is great and has a SmarterBubbleSort implementation -- it's important to note that the early stopping rule is an enhancement, not vanilla bubble sort (I had to learn both; maybe curricula these days jump straight to the smarter version?), but even so it complicates the implementation and still isn't ever better than vanilla insertion sort.

Here is insertion sort

    /* helper not shown in the original listing; included so this compiles as-is */
    void swap(int *a, int *b) { int t = *a; *a = *b; *b = t; }

    void insertion_sort(int s[], int n) {
      int i, j;
      for (i = 1; i < n; i++) {
        /* sink s[i] leftwards until it meets a smaller-or-equal element */
        for (j = i; j > 0 && s[j] < s[j-1]; j--) {
          swap(&s[j], &s[j-1]);
        }
      }
    }
You can even improve this further (some improvements have new names), however its base form is not in the class of bad algorithms to never use. Besides Bubble sort, Bogosort comes to mind. And there are of course nice algorithms to use if your data works for them, Radix sort comes to mind. Knowing your data is good advice, knowing never to use bubble sort is also good.


> Bubble sort usually operates on data in place. I don't know how to do an efficient in-place selection sort for mostly sorted data, maybe it's possible?

What about insertion sort? Which always performs fewer comparisons and swaps than bubble sort for any array, by the way.


So I made the case with evidence that it doesn't always perform fewer comparisons and it doesn't always perform fewer swaps. It probably does use more memory, though, depending on implementation details. Do feel free to actually make your case with evidence; that I would be interested in reading.

What is it with this thread and "nyer, you're wrong" responses? It's not helpful, not clever, not useful and not pleasant to read for anybody. Finding out you're wrong is great, I learn stuff.


> It probably does use more memory, though, depending on implementation details

Have you actually looked at the insertion sort algorithm? It doesn't allocate any additional memory. Neither does bubble sort or selection sort.

Knuth does some analysis of Bubble Sort in Volume 3 of the Art of Computer Programming, which proves bubble sort performs more comparisons and more swap operations than insertion sort. To quote the conclusion: "[A]lthough the techniques used in the calculations [to analyze the bubble sort] are instructive, the results are disappointing since they tell us that the bubble sort isn't really very good at all. Compared to straight insertion […], bubble sorting requires a more complicated program and takes about twice as long!"


1,2,3,4,6,5,7

6 comparisons, 1 swap with bubble and it stops. I make it 7 memory reads and 2 memory writes.

Insertion and selection sorts are going to run in half that time? I'd really like to see that! I think Knuth might be talking about a general case of any data rather than the topic being discussed here, but no, I haven't read Knuth on bubble sort so I can't be sure. I may do so now so thank you for the pointer. Arguments from authority really can be informative and useful despite their other shortcomings.


Bubble sort performs 11 comparisons and 13 memory reads in this case, actually. Because on the first pass through the array it performed a swap, it has to loop through the array a second time to verify the array is completely sorted.

To avoid confusion, here's the bubble sort algorithm:

  procedure bubbleSort( A : list of sortable items )
    n = length(A)
    repeat
        swapped = false
        for i = 1 to n-1 inclusive do
            if A[i-1] > A[i] then
                swap(A[i-1], A[i])
                swapped = true
            end if
        end for
        n = n - 1
    until not swapped
  end procedure
As you can see, after performing a swap during the first pass through the array it has to go through again (though it can skip the last element because we know the last element must contain the largest element in the array)

Insertion sort will perform 7 comparisons, 1 swap, 7 memory reads and 2 writes.


You can avoid the second pass...


No, you can't, because at the end of the first pass you have no way to know that it is in sorted order. Go ahead and try to write the pseudocode for your bubble sort implementation that avoids the second pass.


To be concrete, the array in order 1235647 has the first six comparisons go the same way as in 1234657, but it doesn't end up sorted in one pass.

Perhaps harry8 was thinking of the bubble sort implementation in the OP, which would indeed stop after one pass in these cases.


you look back and see if you will need a second swap and set your "need another pass" boolean that you named "swapped" accordingly.


Huh. Bubble Sort. And not only that; a buggy version!

And if you believe sequential access is gonna save you from O()... you're so wrong it's not even funny. Of course the curves will cross later. But they will cross. Especially on a sort.


I'm not sure you've demonstrated anything here about the topic under discussion. You might like to try to achieve a better result around here; people tend to appreciate it.


I've seen all the O(n^2) worst case algorithms that reduce to O(n) on nearly sorted lists used, for example, in sweep and prune algorithms in computational geometry.


As a possibly more surprising example, readittwice posted this above: https://github.com/WebKit/webkit/blob/master/Source/WTF/wtf/...


I once used a bubble sort when I knew the typical array size would be 4. I think someone later replaced that code when the original assumption became outdated and we were sorting larger arrays.


For nearly sorted data, bubble sort is quite good. Simple implementation and near linear performance. Has my seal of approval.


Insertion sort is almost always better isn't it?


Bubble sort is also good for keeping a list close-enough-to-sorted with only small incremental costs. It was an old graphics engine trick to run just a few passes of bubblesort over your sorted triangles array before rendering each frame. Occasionally you'd get artifacts but generally it worked well enough and was fast.
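Roughly this kind of thing, sketched in JS (assuming each triangle carries some sortable depth value; this isn't any particular engine's code):

    // Run only a couple of bubble passes per frame. The list drifts only
    // slightly out of order between frames, so this keeps it "sorted enough".
    function incrementalDepthSort(tris, passes = 2) {
      for (let p = 0; p < passes; p++)
        for (let i = 0; i < tris.length - 1; i++)
          if (tris[i].depth > tris[i + 1].depth)
            [tris[i], tris[i + 1]] = [tris[i + 1], tris[i]];
    }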


Could be a small data set


I still remember my First Time. As a very novice coder (it was in the 1980s so I was young) I was using Microsoft C 3.0 (I think) and couldn't believe my program didn't work. It took understanding the assembly to discover that it was a compiler bug. It shook me to the core; before I had never even imagined that something like this could be wrong.

I later did find bugs in GCC (version ~1.37), but it's pretty robust these days; notably, the regression suite is a lot better than it used to be.


Finding library bugs, though, is routine. Most languages today come with a large collection of libraries, most of which work for the common cases. In an API with a thousand calls, probably at least 5% of them have major bugs.


Just a week ago I remembered reading an article several years ago about someone who ended up digging in the compiler (probably GCC) and against all odds unearthed a bug in there. As I recall it was quite involved and an interesting read but I can’t seem to find it again. If anyone has a recollection of such an article I’d be grateful to be sent a link in my general direction.


The article mentions another Coding Horror one at the bottom [0]. I found this tidbit from there interesting:

"In Code Complete, Steve McConnell cited two studies that proved it:

"A pair of studies performed [in 1973 and 1984] found that, of total errors reported, roughly 95% are caused by programmers, 2% by systems software (the compiler and the operating system), 2% by some other software, and 1% by the hardware. Systems software and development tools are used by many more people today than they were in the 1970s and 1980s, and so my best guess is that, today, an even higher percentage of errors are the programmers' fault."

I'd really like to learn how to track down hardware bugs, or mess with things like Christopher Domas does.

[0] https://blog.codinghorror.com/the-first-rule-of-programming-...


There's a story I heard that I can't track down anymore, about a game company that kept getting strange unreproducible errors from customers. They built a RAM test into their game and discovered that most of those errors were coming from bad RAM. 1% of a lot is a lot!


Doom 3 inadvertently did this for me one time. It loads data from pak/zip files and does a checksum verification. Which is how I discovered a defective RAM stick I'd used in a new system build.


I wonder what the studies considered an error. Are syntax errors counted, or did it focus on runtime bugs? I imagine that syntax errors would really skew the results if they were counted.


This is a good point. Methods matter. I tracked down the exact references:

[0] A. R. Brown and W. A. Sampson, Program debugging: the prevention and cure of program errors. Computer Monographs, 18. 1973

[1] Thomas J. Ostrand and Elaine J. Weyuker, Collecting and categorizing software error data in an industrial environment. Journal of Systems and Software, Volume 4, Issue 4. 1984

I couldn't actually get a hold of [0]; however, [1] was easy to find, and it goes into great detail about its categorization of "faults". In fact, the abstract explicitly mentions that it categorizes and analyzes 173 faults. On page 7 (page 295 of the journal), we finally get to your question:

"Of these 171, 15 were attributed to clerical mistakes, such as incorrect typing or misunderstood handwriting."

So 15/173, or about 8.7%, of the faults they found were of the kind you mention. Interestingly, from earlier on the same page, "... 12 represented compiler or operating system faults," so only 20% less frequent than "clerical errors".

This paper also compares with a lot of other studies, so it's worth reading if you're interested.

Thanks for asking a good question!


The only error I found where the code compiled but was wrong was from it assuming a particular symmetry in the instruction set. It tried to use an addressing mode on an instruction that didn't support it and consequently output two completely different instructions.

Errors in the run-time library and errors that crash the compiler are much more common. While many eyes cover a lot of search space, the space they have to cover has grown immensely.

If you step off the trodden path you can find problems much more often. Just recently I encountered a function in a run-time library that freed the amount of memory the alloc asked for instead of the amount of memory the alloc function actually returned (Sometimes it returned slightly more if the remaining space was not enough to hold a free-memory structure).

It would not surprise me if the population using that particular compiler/platform combination numbered less than a thousand.


I guess "it depends".

If you are a beginner dev (I would say over 90% of all devs), or experienced but never really stressing your stack, it is not very likely that you are going to encounter and properly recognize an error in your compiler. Even if you encounter one, you will most likely change something until it starts to work again and then shrug it off.

On the other hand if you are very experienced, you are routinely writing code that stresses out your compiler (think algorithmic trading, etc.) and you are in a habit of trying to get to the root cause of every failure, you are very likely to find errors.

No non-trivial software is without bugs and compilers are some of the most complex tools used in our job. Because they are very complex and heavily used, all low-hanging bugs are already explored very well so what's left is typically very complex and requiring expertise to recognize and debug.


Refuted by anecdote ;) In the first few months of working at my current job I triggered three different bugs in the Scala compiler (2.10.x IIRC).

That said, I think most people - me included - don't reach for "compiler error" until we've exhausted every other possibility.


Well, Scala compilers are much less mature than C compilers.


There are more compiler bugs than many ever think. From mainframe COBOL compilers to gcc, I have encountered bugs in programs that were a result of what the compiler did.

The original source code did nothing special - it was simple code and due to specific optimisations that the compiler performed, inexplicable seemingly random errors would arise. One would have to constrain the optimisations the compiler would do to get the code to execute correctly.

No compiler is bug free. The bugs can manifest in strange ways and sometimes they will manifest only on specific kinds of data (edge case data).

Certainly there are many programs that are written incorrectly and give rise to apparently inexplicable errors that are due to misunderstood language features or specified compiler limits.

However, there are also many programs that do not have such source code errors that end up being rewritten in some way to avoid compiler based errors.

The worst I have seen is when dummy statements have to be put in to the source code in order to get the program to work correctly. I have seen compilers change the precision of numbers from one version to the next and you have to add additional code to restrict those numbers back to the specified range. I have seen compilers allocate multiple i/o buffers as an optimisation that causes a program to fail when a single i/o buffer allocation works fine - one has to turn off the optimisation - no documentation available as to how the compiler generated code uses those buffers.

All I can say is that it takes a fair amount of investigation to determine what kind of bug it is when it is subtle and randomly occurring. It can be everything from errors in the source to errors in the libraries to errors in code generation to errors in the o/s (including driver errors or memory errors).

Anyone who blanket says the compiler is not to blame is living in a rose coloured glasses world. Compilers are complex programs and the source code for them can be just as wrong as for any other program. The code generated for compilers can be just as bug ridden as any other program.


C++ compilers are notorious for exploiting what is technically undefined behavior to produce what look like inexplicable optimization bugs. Undefined behavior is extraordinarily easy to write into your code without knowing it, and once it's there the compiler is free to do any crazy thing it wants. You think it's a compiler bug because turning off the optimizations fixes it, but it's not.


C++ (and its ilk) is a language that I don't touch with a 40 ft barge pole. Irrespective of the aficionados of the language, it is poorly designed in oh so many ways. I would rather program in BF, befunge, or remorse than C++.


List of times we've seen the Loch Ness monster here? Fun!

gcc's __attribute__((aligned(N)))

Whenever you see it, assuming it doesn't do what you think it does from reading the docs is probably a sound strategy. [1][2]

I'd love it if I'm wrong and there is a way of understanding what it is meant to do and successfully predicting behavior.

[1] https://gcc.gnu.org/bugzilla/show_bug.cgi?id=82914 [2] https://gcc.gnu.org/ml/gcc/2014-06/msg00308.html


GCC isn't the only one with aligned() bugs: https://bugs.llvm.org/show_bug.cgi?id=23868

Over two years and no forward progress! Admittedly, I haven't re-tested this with a newer compiler. But apparently neither has anybody else.


Sorry for responding again, but I was really curious about that "inline" struct type attribute format so I figured I would do some tests. It looks like it actually doesn't do anything! And the warnings are definitely wrong in at least one case. To get straight to the point, some code:

    #include <stdio.h>
    #include <stddef.h>

    struct s {
        char b;
        int k;
    };

    int main()
    {
        printf("offsetof packed k: %zd\n", offsetof(struct __attribute__((packed)) s, k));
        printf("offsetof regular k: %zd\n", offsetof(struct s, k));

        printf("alignof inline: %zd\n", __alignof__(struct __attribute__((aligned(32))) s));
        printf("alignof before: %zd\n", __alignof__(__attribute__((aligned(32))) struct s));
        printf("alignof regular: %zd\n", __alignof__(struct s));

        return 0;
    }
The outputs of this program (just the numbers) are 4, 4, 4, 32, 4. If the 'inline' attributes were actually respected, the results would be 2, 4, 32, 32, 4. Meaning, the attributes in tests 1 and 3 are completely ignored but do not produce warnings. Also interesting, test 4 actually produces a warning about the `aligned` attribute being ignored even though it actually respects it (as it should). I think this is just a weird edge case with the `alignof` operator specific to `gcc`, since `alignof` takes a 'type' as input, but it's definitely a bug. If you declare a variable with that type there is no warning.

Besides the warnings this doesn't surprise me too much, AFAIK it is not legal to put anything between the `struct` and the structure name in usages of a `struct` type. But from the response gotten over at the `gcc` Bugzilla I would wager the 'inline' versions are in the uncharted waters of "Nobody really knows what that should do anymore, don't do that". Reading the `gcc` bug report again, it seems the last response does actually indicate the error I'm pointing out above, where the attribute is silently ignored, I missed that when I skimmed it the first time.

With that said, the fix EMACS implemented was the same as just removing the attribute completely. If the EMACS code really does need that alignment forced for one of its architectures, it should probably take a look at that fix again and apply the `aligned` attribute on the `struct thread_state` for the architecture(s) that need the 8-byte alignment (If they actually exist), and then remove the attribute completely from the variable declaration.


My understanding (and usage) of `__attribute__((aligned(N)))` has been that it applies differently when used in the declaration of a `struct` type vs. the declaration of a variable (And applies the same whether or not the variable is of type `struct`). Thus my understanding is:

When declaring variables, `__attribute__((aligned(N)))` indicates you want the variable in whatever format it was defined in aligned to that value. If it is less than the default alignment, it will be allowed to be aligned to that instead. For `struct`s, this would mean that the `struct` is kept in the exact same format (with the same padding, in particular) as the declaration would require, but the entire entity itself is instead aligned to whatever alignment you give.

When declaring a `struct` type, however, the individual members all still have their own alignment constraints, which is where `struct` padding comes in. So when you just apply `__attribute__((aligned(N)))` to a `struct` type declaration, it's not clear how you want to achieve that alignment. If that alignment is a multiple of the required alignment for the `struct` then there is no problem, because you implicitly meet the alignment requirements already.

If it is less, however, then it is not clear when you want alignment requirements met and when you do not (IE. Where should the compiler insert the padding bytes in the `struct`? Should it unalign the first member and then correctly align all the rest? Or pack it as though everything is correctly aligned, and then allow the entire thing and all the members to be unaligned?).

There is already an attribute to solve this problem, `__attribute__((packed))`. All `packed` does is force the alignment of all the members in a `struct` to be 1, which both tells the compiler no padding bytes are necessary, and also means that `__attribute__((aligned(N)))` always works, because every `N` will always be some multiple of 1.

So with that in mind, in my opinion the bug is largely on EMACS' side (assuming I'm reading those messages and code correctly). In my opinion, they shouldn't be using the `aligned` attribute on the variable declaration at all, and instead they should just ensure the `struct thread_state` type itself already has the proper alignment. The fact that they are using `__attribute__((aligned(8)))` seems to indicate to me that somewhere along the line a `struct thread_state` type was defined with the incorrect alignment, and it was fixed by adding the alignment to the variable declaration. But if you just apply the attribute to the declaration of the `struct thread_state` type instead, then everything would be fine, and architectures with a `struct thread_state` with a higher alignment (like 32, in this example) wouldn't be affected by the alignment requirements of architectures with a lower alignment.

That said, there is something I found very interesting in those posts, though I don't believe it is a `gcc` bug. But it is this syntax:

    struct __attribute__((aligned(N))) s var;
I didn't actually know you could even put the attribute in-between the `struct` and the structure name. That said, it does make some sense when you realize what it is doing, even though the syntax is kinda screwy - it is for applying `struct` attributes after-the-fact. So while this does work for `aligned(N)` (which has variable and `struct` variants) it would also probably work for `packed`. This would likely make most sense to use in a `typedef`, which would allow you to declare the 'normal' version of a `struct`, and then `typedef` a `packed` version or `aligned` version as well. I've never really had a need to do that, but it is interesting to see. It does technically fix the problem here, but it is the same as attaching an `aligned` attribute to the `struct` type declaration after-the-fact, and I really do think if possible they should be looking at putting it on the declaration of `struct thread_state` instead.

All that said, I wouldn't consider myself an expert, so if you think I'm wrong please let me know. That said I believe it all fits into what the documentation says about this attribute. I hope the above wasn't too confusing, I attempted to make it clear when I was talking about a "variable declaration" vs. a "struct type declaration", but I'm not sure I was always successful.


I was a user of a very early version of Zortech C++ (one of the first native C++ compilers (late '80s); no running through cfront) and ran into a very subtle bug in the math routines: sometimes when doing math with 8 bit values the answers would be corrupted. Took a while for me to track down the bug but I eventually found it in the assembly output. Turns out that the math was actually being done with 16 bit registers and the high byte (AH) was not being cleared and this would sometimes set a carry (I think, my memory is fuzzy on the details).

Walter was quick with a fix. Sometimes it is the compiler, but usually it's not.


I was using a cross compiler for the 68000 around the same time period, and we ran into many compiler bugs. They were always quick to fix them once we could produce a small example program to demonstrate the problem, but it was a bit frustrating. Thankfully it has been many years since I ran into a true compiler bug.


I work in embedded software, usually with compilers co-evolving with the hardware.

In 10 years, I've seen 3 compiler bugs. That's just enough to leave the door cracked open for "y'know, it could be a compiler bug..."


I used to work on a compiler--and it is sometimes a compiler error. I have to admit, though, that it was a pleasure to reply to a bug report with "If you close this comment here, you will get the result you expected."


The article actually speaks reason.

As somebody who learned C in a way that over time made me aware of the whole "compiler bug" vs. "the standard said so" thing, I'm still not over the nightmare that was "Admitting Defeat On K&R in LCTHW"... Below is the link; be warned that this read is probably not for the faint-hearted.

https://zedshaw.com/2015/01/04/admitting-defeat-on-kr-in-lct...


I've run into tons of compiler errors, especially with compilers generated by the Yocto Project. I've also had weird behavior from compilers when running in a VM.

For example: https://stackoverflow.com/questions/45424272/gnu-gcc-bug-whe...


We've found compiler errors frequently. An absolute hot-spot for compiler errors for us was (relatively) obscure platforms, so we found SDKs for embedded MIPS processors that broke basic gcc intrinsics. No-one had ever noticed that really basic intrinsics (at the level of popcount/count leading zeros) were wrong.

C++ was another hotspot of brokenness for early gcc 4.x.


Sometimes it is a compiler bug, and sometimes the linker can even correct it.

We had this weird behavior some years ago in an embedded functional safety-related project where a CASE tool generated the code for the compiler.

When the generated code looked like this:

  i++;
  i++;
  i++;
  i++;
  i++;
  if (i > something) {
  } else {
  }
Then the compiler was miscalculating the jump by 2 bytes in the else path.

But if the generated code was this (spot the difference):

  i++;
  i++;
  i++;
  i++;
  if (i > something) {
  } else {
  }
The compiler correctly calculated the jumps. If certain optimization option was enabled, the linker afterwards corrected the miscalculated jump.

The compiler vendor could reproduce the behavior aka bug, but could not say with confidence under what circumstances it occurred or what the cause was. As it was for a functional safety-related product, we were required to identify all jumps caused by if clauses and had to review the assembly to check that the jumps were correct.


Recall that TimSort had an undiagnosed bug for over a decade:

http://envisage-project.eu/proving-android-java-and-python-s...


I remember using g++ on Windows (via cygwin or something) and writing a function that was supposed to return a Boolean, but sometimes didn't return any value at all. Real spaghetti, in university, no less. Throwaway code, in my defense. Turns out the missing return not only escaped detection, but the resulting binary always returned the same Boolean value whenever the code path with no return statement was run.

Don't think I've seen one since, though I have found a nasty footgun in Scala where if you do a fold of a parallel collection, you can give the fold a starting value, which will be supplied to all the executing threads. The result therefore depends on how many cores you have running. Sadly I suspect that one's in spec.


I work as a frontend engineer and used to think similarly. I don't think I encountered any browser bugs for the first few years, as I was mostly just wiring stuff up and wasn't taking advantage of newer APIs. But eventually I started trying out newer APIs, and I would occasionally bump into weird and unexpected behavior. I was always convinced it was my own fault, but after countless hours trying to debug why something was behaving oddly, I'd look up a few keywords and usually found bug reports in each browser's issue tracker.

Browsers are incredibly complex pieces of software which are constantly evolving. Go look at any browser's issue tracker and you'll find a huge list of bugs.


“The Pragmatic Programmer” refers to this idea as “select() is broken.”


It's a small typo but it changes the meaning completely; it's "select _isn't_ broken".


You are right that the other way is more prominent, but both phrases appear in the book; for example,

“We now use the phrase "select is broken" as a gentle reminder whenever one of us starts blaming the system for a fault that is likely to be our own.”

That quote appears right above «Tip 26: “select” isn’t broken.»


Well, a couple of years ago I encountered compiler bugs with Zortech C, after it was purchased by Symantec. Zortech C was a great compiler, but that quality fell off in the version released by Symantec. The code didn't change but the compiler did. Some code no longer compiled with the new compiler and other code would crash at runtime. This was simple code, but there was something about how the compiler handled private structs or public classes that caused these issues.


How did that bug pass testing? Isn't the first test of correctness that you generate the 3.5 million permutations of 1..10 and sort every single one correctly?
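It's cheap enough to do, too. A brute-force sketch in JS (with n capped below 10 here just to keep the run short):

    // Exhaustively check a sort implementation on all permutations of 1..n.
    function* permutations(arr) {
      if (arr.length <= 1) { yield arr; return; }
      for (let i = 0; i < arr.length; i++)
        for (const rest of permutations([...arr.slice(0, i), ...arr.slice(i + 1)]))
          yield [arr[i], ...rest];
    }

    function checkSort(sortFn, n = 7) {
      const base = Array.from({ length: n }, (_, i) => i + 1);
      for (const p of permutations(base)) {
        const out = sortFn([...p]);
        for (let i = 1; i < out.length; i++)
          if (out[i - 1] > out[i]) throw new Error(`unsorted output for input ${p}`);
      }
    }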


When I first started using Jenkins Workflow, I ran into tons of these with their Groovy interpreter, even for basic stuff such as Map#each.

Of course, that was pre-1.0 software..


It depends a bit on the age of the compiler. I hit a strange bug in Micropython back when it was still pretty new, that would lead to 0 != 0:

https://github.com/micropython/micropython/issues/610

In the end it was a buggy implementation of the & operator on long integers, which was fixed very quickly.


In userland, if some fraction of the user base misuses a feature, that could be considered a bug, even if it is working as designed.

Obviously things like forward and backward compatibility are much more important in the programming context, so you can't just "fix" things like weird side effects or not throwing an error on assignment in an if statement.


In over a quarter century of programming, I've twice found interpreter errors. Once a function performed the inverse of the documented behaviour (returning "true" for "false" and vice versa) in a proprietary system, the second was a gawk bug of some description I don't recall but proved to be an actual bug.

The other bugs were all mine.


That first one reminded me of the gripe at the top of perl's Configure script https://perl5.git.perl.org/perl.git/blob/HEAD:/Configure#l36 -- "SCO csh still thinks true is false. Write to SCO today and tell them that next year Configure ought to "rm /bin/csh" unless they fix their blasted shell. :-)"

which only gets printed if the shell it's running in egregiously mishandles &&...


I think someone did a rm -f on SCO, so the problem is solved.


The court case is, as of late October 2017, still alive, believe it or not.


I’ve been writing C++ for about 9 years now, and I’ve hit plenty of errors in the compiler, some even with perfectly valid code.

I still always blame my own code first though


I came across a bug in the JDK/JRE 1.0.1 relating to GridBagLayouts. Sometimes they would just ignore the directives the programmer specified and lay things out any damn place they liked. I only reported it after checking and rechecking and rechecking that I had used the GridBagLayout API exactly according to the documentation.

It was fixed in Java 1.0.3.


GridBagLayout was the worst back then. It almost got banned in my office, and we had several views with gross manual implementations of it, because whatever we wanted to do triggered insanity and writing the few constraints we wanted procedurally was way easier.


Way back in the day HP had just transitioned its HP-UX C++ compiler from its older, Cfront-based, C++ compiler to a native solution. I found (in released code, no less!) some fairly trivial template compositions (I forget exactly, but it was something like a map of some object to a vector of something else) that crashed the compiler.


Depends on the context. Most people's only exposure has been to widely used or open source based compilers. I remember working with proprietary cross compilers for micro-controllers in the 90's. In that environment 'compiler errors' were not that uncommon at all.


Depends on the language. Never encountered a C++ or C compiler bug, but I've hit 2 separate & nasty Python runtime environment bugs (one that caused a segfault because of some memory trampling from having too many file descriptors open, and one edge-case sorting bug).


You gotta throw a lot of those sorts of expectations out of the window when working with JS... imho


It is silly to kick JavaScript in this example because the bug* is in the library of a custom JavaScript interpreter, not a common runtime.

* https://github.com/code-dot-org/JS-Interpreter/pull/23


We should consider bugs in the language spec separately from bugs in the implementation. In JS it's insane not to K&R brace, as it will try to infer missing semicolon terminators in your code, silently, to your doom. That's surprising if you have the misfortune. It's a massive bug in the language spec, but the implementation of the JS interpreter is 100% correct as it kicks you, hard.

Parent comment makes sense to me as a general comment about getting good at JS. Especially given JS has so many new and exciting libraries that get used in production. Ultimately you must fix your code, JS & libraries aren't likely to change in a timely fashion. And of course what I call a bug in the spec must have been considered a feature at one time, some may still consider it so. That's a question of taste and you can always consider yourself 100% correct on any matter of taste simultaneously with those who disagree. :-)


K&R braces have approximately nothing to do with semicolon insertion...


Compare the values of these function bodies:

    return {
    };
and

    return
    {
    };


Yes, those are different, but K&R brace is a style of where you put the braces in a function body or a block. The braces in your example are creating an object literal.

The K&R brace style is not about putting an object literal on the same line with a return statement or on another line, but about how control structures are formatted.

The reason I said that K&R brace style has nothing to do with semicolon insertion is that it’s possible to write all your control structures with K&R braces and still be bitten by the places in the grammar where line breaks are not allowed, like after a return statement, if you’re unaware of it.

It’s also possible to use Allman braces everywhere and never have that problem with return statements.


That is a better example, thanks. I make no claims to JS guru-dom!


if(foo) <- js can insert semicolon there

if(foo) { <- js cannot insert semicolon there

Edit: as of this comment you have provided approximately nothing in support of your assertion; you can and should do better. I note this thread is JS and there's a whole lot of this kind of thing going on here that I don't usually see elsewhere on HN. It's a shame; discussion is useful to learn things I don't already know. "Nyer, you're wrong" is not so useful to me, you or anyone else.


No, JS will never, ever insert a semicolon after “if(foo)” no matter what comes after it. Putting the braces on your control structures on the same or the next line has no interaction whatsoever with the semicolon insertion rules.

I’ve written a ridiculously detailed blog post about automatic semicolon insertion if you want to know all the specifics.

http://inimino.org/~inimino/blog/javascript_semicolons


There's a similar pattern in Go's semicolon insertion, although I think the exact rules differ.


I'm kicking Javascript/Node because the entire stack is amateurish and one constantly runs across unexpected behaviours/build breaks in even fairly high profile libraries.


While true, the use of sort() to sort an array of numbers in JS is ... complicated. It works for the single-digit-number case, but if you can have multi-digit numbers you better remember to pass a comparator function to get numeric sorting, because JS sort() defaults to lexicographic sort. So it will sort 10 before 2 by default.
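For example:

    [10, 2, 1].sort();                 // [1, 10, 2]  -- default lexicographic (string) order
    [10, 2, 1].sort((a, b) => a - b);  // [1, 2, 10]  -- numeric order via a comparator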


Although with javascript there have been a lot of different interpreters with a lot of different bugs over the years. I'm thinking mainly of browsers and their implementation bugs, there have been quite a few instances in which what I thought was my own mistake turned out to be a browser bug of some kind.


I fixed a bug (random crash) about eight years ago now caused by the Intel compiler failing to properly copy a function parameter into a register. At some point I just gave up looking at the code and took a peek at the disassembly. There it was.

So yeah, it's sometimes a compiler error.


I have discovered two compiler bugs that got fixed after reporting them. In both cases it was relatively new implementation at the time (TI C55x in CCS and TI MSP430 in IAR). In both cases the compiler generated wrong assembly code. This almost never happens. Almost.


I recently learned of a pretty surprising Chrome console interpreter bug that was reported as early as 2010[1], fixed by 2012, and at some point seems to have degraded again.

console.log lazily evaluates complex values at first-expand-time, rather than eagerly at print-time, and there is no built-in facility to automatically expand them at print-time. To see this being weird, try logging a changing array inside of a loop, then expand the first log after the loop completes. It will show the array state at click-time, rather than at the point in time the log statement executed. Subsequent changes to the array will not be reflected.

People have been complaining for years about this, but it seems to be widely accepted as just-the-way-it-is[2-4], with the most common workaround being JSON.stringifying logged objects.
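A minimal way to see it (and the stringify workaround):

    const a = [];
    for (let i = 0; i < 3; i++) {
      a.push(i);
      console.log(a);                  // expand any of these later: all show [0, 1, 2]
      console.log(JSON.stringify(a));  // "[0]", "[0,1]", "[0,1,2]" -- snapshot at log time
    }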

I'm mostly shocked that I only noticed I was being bitten by it a little while ago.

1 - (2010, shows 2012 fix) https://bugs.webkit.org/show_bug.cgi?id=35801

2 - (2010, w/ edits to note 2012 fix) https://stackoverflow.com/questions/4057440/is-chromes-javas...

3 - (2014) https://stackoverflow.com/questions/23392111/console-log-asy...

4 - (2016) https://taoalpha.github.io/blog/2016/04/21/tech-lazy-evaluat...


I found one in Turbo Pascal for MS-DOS.

Pascal functions use assignment to the function name as the way to set the return value; as it happens, Turbo Pascal would allow declaring a local variable with the same name and would then return garbage as the function's result.


A summer spent testing (and breaking) compilers taught me a lot, including: compilers do have obscure bugs. 99.99999% of program errors are bugs in the code, but the rare compiler bug is maddening to diagnose.


Nice use of the "House of Leaves" technique for heightening dramatic tension in the face of saccades.

  was sorting the numbers

  into

  the

  wrong

  order.



It definitely depends on the maturity (and volume of usage) of the compiler. I'm pretty sure I could write a compiler that was full of bugs ;)


<pedantry>A bug in sort doesn't sound like a compiler error. Sounds like a bug in the standard library.</pedantry>


The perl5 if statement is still broken after >30 years, btw. It is still a compiler error, but a subtle one on very rare occasions.


I don't understand the env in which this happened, because cracking open my console, I get

> [5, 4, 5, 1].sort().join('')

> "1455"


They are using a sandboxed javascript interpreter. https://github.com/code-dot-org/JS-Interpreter

See the pull request here https://github.com/code-dot-org/JS-Interpreter/pull/23


Basically, the chance that it is a compiler bug is proportional to the inverse of the number of people using it. You probably didn't find a bug in gcc. It's definitely possible that you found a bug in some bespoke javascript interpreter that few people have ever even heard of.


Eh, I feel like I manage to find something to crash scalac about every month or two.


Compiler errors are a daily issue with Xamarin unfortunately...


re: the fix for the off-by-one error

    // but it should have been: 
    if (changes == 0) break;
While this fixes the problem, it no longer handles the case where count is negative. Yes, in this function it's likely that "this never happens", but it's generally good practice to assume as little as possible.

    // still stops on negative values
    if (changes < 1) break;
or if negative values could indicate some kind of serious problem:

    // first trap problems
    if (changes < 0) raise_error();
    if (changes == 0) break;


A negative number of changes is nonsensical, so the programmer doesn’t need to defend against it.


> A negative number of changes is nonsensical

Of course. As I said, I know this is defending a case that shouldn't happen.

> the programmer doesn’t need to defend against it

My point is that this is a dangerous attitude that can create bugs. A less-than comparison should cost the same as testing for equality, so only testing for 0 isn't faster.

Why are negative values nonsensical? Is it because the variable is named changes? The algorithm in the preceding code? Comments in the code near that variable? All of these things can change as code evolves. Variables are repurposed, algorithms change (sometimes radically), and updates to code can desynchronize from comments.


If there are a negative number of changes in bubble sort, why is returning early any more valid than continuing with the loop? Think about how bubble sort should work. There is no correct next step to take if count is negative. A negative count of changes is nonsensical. The only option that I could agree with, if you’re going to bother coding anything in this case, is to terminate execution, because the program is incorrect.


> why is returning early any more valid than continuing with the loop?

You could consider it an extension of the "0" condition. However...

> terminate execution, because the program is incorrect.

...this is probably better than simply terminating the loop.


I ran into a GCC bug years ago. (1999-2000). It would put "static" variables into the "text" section of ELF binaries.

i.e. into the read-only portion of the binary.

The result, of course, was that writing to the variables caused a core dump.

So yes, 99% of the time, stupid programmer errors are stupid programmer errors. But compilers do have bugs.

e.g.

https://github.com/practicalswift/swift-compiler-crashes


I'm going to guess that was 2.96

https://www.gnu.org/software/gcc/gcc-2.96.html


> Here's something I wrote in October 2000

I hope Mark Dominus has learned how not to use sarcasm and condescension in his coaching style since then. Toxic environments kill people.


You definitely don’t know Mark. He is a great, friendly person I had the pleasure to meet a couple of years ago. Hi Mark!


Yes, it's always your fault.


Eh, no, never. The article is bullshit.

https://gcc.gnu.org/bugzilla/buglist.cgi?bug_status=__open__...

https://bugs.llvm.org/buglist.cgi?bug_status=__open__&no_red...

I fight compiler bugs all the time. Poor optimizations, poor comparisons, bad allocations, incorrect warnings and errors, long compile times due to bugs, don't even get me started on GCC intrinsics and SIMD.



