But this is pseudocode. For all you know, it could be implemented in a language whose integers are arbitrary precision, in which case it is perfectly correct and appropriate.
Python, for example, has arbitrary-precision integers. That means it is theoretically possible to represent any whole number in Python, at least assuming your computer has enough memory to support it. Under the hood, the `int` object can have several different representations depending on the implementation and on how large the number is: small numbers may be stored one way, numbers that fit in a machine word may be stored as 64-bit integers, but very large numbers are stored as an array of smaller integers that can grow arbitrarily long. You can think of the array as being like a base-10 representation (so 17,537 might be represented as [1, 7, 5, 3, 7]), although in practice much larger bases are used to make the calculations quicker (CPython uses base 2**30, for example).
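Here's a toy sketch of the idea (not CPython's actual code; CPython stores the digits least-significant-first in base 2**30, but base 10 keeps the output readable):

    # Toy illustration: split a non-negative integer into an array of
    # fixed-width "digits". CPython does this in base 2**30; base 10 is
    # used here so the output is easy to read.
    def to_digits(n, base=10):
        if n == 0:
            return [0]
        digits = []
        while n:
            digits.append(n % base)  # least significant digit first
            n //= base
        return digits

    print(to_digits(17537))         # [7, 3, 5, 7, 1]
    print(to_digits(17537, 2**30))  # [17537] -- fits in one 30-bit digit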
Obviously maths with the smaller representations will be quicker than with the array representation, so the interpreter does some work to use the smaller representations where possible. But if you add, say, two 64-bit signed ints and the result would overflow, the interpreter transparently converts them into the array representation for you, so the overflow never happens.
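A minimal demonstration of that transparency:

    x = 2**63 - 1    # the largest value a signed 64-bit integer can hold
    y = x + 1        # in C this addition overflows; in Python it just works

    print(y)               # 9223372036854775808
    print(y.bit_length())  # 64 -- the number simply grew past the boundary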
So the first poster said that the binary search implementation on Wikipedia was buggy, because the midpoint calculation `(low + high) / 2` doesn't protect against overflow (assuming the implementation uses fixed-width integers). The second poster pointed out that if the implementation uses these arbitrary-precision integers, then there is no chance of overflow, and the code will always work as expected.
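For reference, here's the algorithm as real Python rather than pseudocode (a sketch, not Wikipedia's exact listing); the midpoint line is the one that bites in fixed-width languages:

    def binary_search(items, target):
        low, high = 0, len(items) - 1
        while low <= high:
            # Safe in Python thanks to arbitrary precision. In C or Java,
            # low + high can overflow; the standard fix there is
            # mid = low + (high - low) // 2.
            mid = (low + high) // 2
            if items[mid] == target:
                return mid
            elif items[mid] < target:
                low = mid + 1
            else:
                high = mid - 1
        return -1  # not found

    print(binary_search([2, 3, 5, 7, 11, 13], 7))  # 3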
You can look up "bigint", which seems to be the term of art for arbitrary-precision integer implementations in most languages. You can also read a bit about how they're implemented in Python here: https://tenthousandmeters.com/blog/python-behind-the-scenes-...
> Python, for example, has arbitrary precision integers.
In the spirit of nitpicking on edge cases:
It does, but quite often you pass a number to some C library (other than the stdlib), and the C side does not honour this arrangement.
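You can see that boundary using only the standard library. `struct` range-checks when converting a Python int into a fixed-width C value, while `ctypes` (per its docs) does no overflow checking and just keeps the low bits:

    import ctypes
    import struct

    n = 2**40  # a perfectly ordinary Python int

    try:
        struct.pack("<i", n)  # "<i" = little-endian 32-bit signed C int
    except struct.error as exc:
        print(exc)  # the value doesn't fit in 32 bits

    print(ctypes.c_int32(n).value)  # 0 -- only the low 32 bits survive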
This means that instead of fixed-width integer types, which have a finite maximum because they are 32-bit or 64-bit etc., the language can use an integer type that grows to as many bytes as are needed to store the number. This is called a BigInt in JavaScript, for instance.
Python 3 doesn't have a maximum integer and therefore cannot experience overflow when adding two integers, for example. You can keep adding one forever (until you run out of memory).
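(One common point of confusion here: `sys.maxsize` is the size of a C `Py_ssize_t`, not a cap on `int`; integers sail right past it.)

    import sys

    n = sys.maxsize         # 9223372036854775807 on a typical 64-bit build
    n += 1                  # no overflow, no wraparound, just a bigger int
    print(n)                # 9223372036854775808
    print(n > sys.maxsize)  # True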
I'm aware (plus the fact that the algorithm is correct in Python). It's very unlikely that this is an argument I can win.
I'm taking a pragmatic perspective: like it or not, people are going to skim the article and copy & paste the pseudocode.
Given that the pseudocode is buggy in the vast majority of programming languages, and readers aren't warned about this, it's going to lead to unnecessary bugs.
https://en.wikipedia.org/wiki/Binary_search_algorithm#Proced...