Timsort, the Python sorting algorithm (skerritt.blog)
406 points by alexchamberlain 6 days ago | 129 comments





My favourite TimSort story is about ex-Sun employee Joshua Bloch, of Effective Java fame. Bloch was in the audience when Tim Peters presented his new algorithm for sorting a list, and he was so blown away that he started porting Tim's implementation right there, with the intent to commit it to the JDK mainline [0], which he eventually did [1].

[0] Some of the core JDK developers are really on another level. The JDK code, post 1.5, is a joy to read, though verbose. Compilers and languages seem to attract a certain caliber of engineer, I think.

[1] https://bugs.openjdk.java.net/browse/JDK-6804124


TimSort—more specifically, Bloch's 9-line "rangeCheck" function from his reimplementation—figured into Oracle's claims that Google copied Java code into Android:

http://www.groklaw.net/articlebasic.php?story=20120419221941...

As best as I can remember, those 9 lines are the only literal copying Oracle was able to find; the rest of their suit centers around whether the design of an API (e.g. parameter names) can be copyrighted. (And unless the Supreme Court steps in, the answer is "maybe.")


Then, many years later, input was found that made the Java version crash:

https://link.springer.com/chapter/10.1007/978-3-319-21690-4_...


I found a similar problem using a fairly battle-tested C# implementation of timsort. Felt lucky that I had managed to reproduce it in a dev environment instead of it being a mysterious crash for users.

Many years ago, I copied an implementation of the Cohen-Sutherland algorithm (for line clipping) to C++ from pseudo code (I don't remember where I got this pseudo code from, but other sources used equivalent pseudo code).

The code (it was backend server code for a web app) was heavily tested with ab [0] and JMeter [1] against extreme load, and everything seemed to work fine. Fast-forward 6 months, when we had a sudden peak in users (for a few days, we went from around 500 visitors a day to around 500,000, which roughly meant > 5 million requests per day). Suddenly, the backend, which had run without problems for half a year, crashed in production every 5 hours or so with a segmentation fault. I could not for the life of me reproduce this. After some panicking, I let the backend run in gdb in production against the ~50 requests per second we were still getting. After a few hours, the segfault occurred again, and I figured out that on extremely rare edge cases, the pseudo code I had copied to C++ divided by 0, which led to a chain of problems afterwards, eventually resulting in said segfault. If I remember correctly, the fix was trivial.

[0] https://httpd.apache.org/docs/2.4/programs/ab.html

[1] https://jmeter.apache.org/


It seems like a standard library sort-function is the ideal candidate for investing in proving correctness.

And then 3 years later we have [0] and 6 years after that we have [1]. I appreciate Masters of the Universe types like Bloch and Lea contributing wickedly-efficient code, but somehow it's always mere mortals who end up mopping things up after the fact. Whether it's a bug in the actual algorithm or a "Comparison method violates its general contract!" exception that happens once in a blue moon, I think putting TimSort in Java was a mistake.

[0] https://dertompson.com/2012/11/23/sort-algorithm-changes-in-...

[1] https://bugs.openjdk.java.net/browse/JDK-8203864


> I appreciate Masters of the Universe types like Bloch and Lea contributing wickedly-efficient code, but somehow it's always mere mortals who end up mopping things up after the fact

I think this is the wrong way to look at the problem. Progress is always iterative. If a given person happens to make a bigger iteration, then people call them 'masters of the universe', but that iteration isn't inherently different from the normal, smaller kind of iteration. And consider: if every line of code has a chance of being buggy, then a bigger change is likelier to be more buggy. I say more buggy, rather than have more bugs, because bugginess is rarely disconnected; the whole conception is likely to be more flawed the newer it is. If we were to only accept small contributions, then somewhere along the way from here to the limit, we would arrive at timsort. And all the flaws of timsort wouldn't be gone, they would just be amortized into smaller chunks along the way from here to there. Which may be preferable, but that's something you have to consider; it's not so cut-and-dried as 'if we only make changes conservatively, then we avoid catastrophic failure'.

All this not to mention that there are people who are 'masters of the universe' at fixing bugs.


> I think this is the wrong way to look at the problem. Progress is always iterative. If a given person happens to make a bigger iteration, then people call them 'masters of the universe', but that iteration isn't inherently different from the normal, smaller kind of iteration.

+1. Bloch himself iterated on work by Arthur van Hoff and others at Sun before him.


The "Comparison method violates its general contract!" exception is due to a bug in the caller's code, not in the sort implementation. If you have a bug in your code, just because you're able to get away with it today doesn't mean you should expect to get away with it forever.

I agree in theory. But in practice that specific caller's code bug was so widespread and Timsort broke so much existing code that it warranted offering a "-Djava.util.Arrays.useLegacyMergeSort=true" option.

The specific line from the Javadoc you're alluding to [0] is:

> The implementor must ensure sgn(x.compareTo(y)) == -sgn(y.compareTo(x)) for all x and y.

The problem is the second implied half of that statement: Pre-Timsort it was ", and if you don't, results might not be sorted in the correct order" while with Timsort it was ", and if you don't, you will get a RuntimeException."

It's way too easy to accidentally violate that condition: System.currentTimeMillis(), incorrect null handling, random numbers. Sometimes not even in the Comparator itself. The condition I posted, plus the other two:

> The implementor must also ensure that the relation is transitive: (x.compareTo(y)>0 && y.compareTo(z)>0) implies x.compareTo(z)>0.

> Finally, the implementor must ensure that x.compareTo(y)==0 implies that sgn(x.compareTo(z)) == sgn(y.compareTo(z)), for all z.

Are just way too easy for someone to inadvertently violate to justify a RuntimeException when it occurs.

[0] https://docs.oracle.com/javase/7/docs/api/java/lang/Comparab...
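
A hypothetical Python illustration of how easy that antisymmetry rule is to violate (the field names and the broken None handling here are made up; CPython's Timsort won't raise for an inconsistent comparator the way Java's can, it typically just produces an arbitrary order):

    from functools import cmp_to_key

    def by_deadline(a, b):
        # Intent: entries with no deadline should sort last.
        if a["deadline"] is None:
            return 1      # a has no deadline -> a after b (fine)
        if b["deadline"] is None:
            return 1      # oops: should be -1; now cmp(a, b) == cmp(b, a) == 1
        return (a["deadline"] > b["deadline"]) - (a["deadline"] < b["deadline"])

    tasks = [{"deadline": 3}, {"deadline": None}, {"deadline": 1}]
    print(sorted(tasks, key=cmp_to_key(by_deadline)))  # order of the None entry is unreliable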


It sounds like it didn't break code, it just revealed that a lot of code was already broken. An invalid comparison function is a serious problem, and accepting unsorted output rather than fixing the damn bug is not a good workaround.

Obligatory xkcd: https://xkcd.com/1172 (Workflow)

It seems way better to fail with a RuntimeException than to get possibly unsorted results that cause some hideous error deeper in your application.

It depends on the contract.

It's good to have a contract that defines behaviour even in the case of invalid input.

That could be throwing an error, or it could be returning unsorted output. Either way is OK.

(The C and C++ answer is to leave the behaviour in the face of invalid input undefined. Undefined means that anything goes, including formatting your hard disk.)


If you're willing to accept unsorted output from your sort function, why are you sorting? What are you sorting?

If you supply an invalid comparison function to your sort function, which would you prefer to happen - that it crash, or that it give you unsorted output? (If you actually wanted sorted output, you should have provided a valid comparison function.)

No question, that it crash. Silent buggy behavior is a bitch to notice.

Ideally I'd like a compile-time error — but that's beyond the current level of technology, so the next best thing is a crash. Having code that does unintended things is the worst-case scenario IMO. (Obviously it's the programmer's fault for providing buggy code, but that's a fault that every programmer shares. Personally, I appreciate any help in guarding against it.)

> beyond the current level of technology

I’m sure there are quite a few languages that will catch that at compile time.


It is possible for a compiler to catch that particular mistake at compile time. It's also possible for the user to provide a proof to a given compiler that a comparison function is well-formed. But I think that the parent was referring to a check that would verify arbitrary comparison functions in Java for correctness, without a proof, which is afaik provably impossible.

> but that's beyond the current level of technology

Is it? In Rust and other languages you would use trait bounds to restrict the input types to be comparable with each other, which would make this a compile time error.


How does that prevent you from accidentally making a.compare(b) and b.compare(a) inconsistent on some values?

Making sure you can compare the types is already happening in the compiler, and doesn't find the bug.


Crash, so that I actually get a reasonably high chance to notice that there is a bug during testing?

The comparison being invalid is definitely a bug; if I didn't want sorted output, I wouldn't be sorting.


Crash so when wil421 sort is implemented in the next version I’m not scratching my head months or years later.

Personally, I would very much prefer that it crash.

maybe you shouldn't supply invalid comparisons in the first place. but if you do, crashing seems like an appropriate response

Most of the time I want it to crash in development and give me unsorted output in production; debugging an unhandled exception is super easy compared to debugging sometimes incorrectly sorted output. In rare cases I would also want it to crash in production. I definitely don't want it to change between the two scenarios when I update my JVM.

crash so i know what a fucking stupid mistake I made and thus I can fix it :)

You can beat O(n log n). That limit is for sorts that use only a ">" comparison. A distribution sort, where you distribute the keys over buckets, can approach O(n).

The first software patent, for SyncSort, is for a sort that beats O(n log n). The basic idea is to read records for a while, gather some stats about the key distribution, and set up the buckets so each gets a roughly equal fraction of the observed keyspace. If blocks of records show up with very different stats, action has to be taken to adjust the bucketing.
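
Not SyncSort itself, just a toy Python sketch of the sample-then-bucket idea (every name and parameter here is illustrative):

    import random

    def distribution_sort(records, key=lambda r: r, buckets=16, sample_size=1000):
        if len(records) < 2:
            return sorted(records, key=key)
        # Sample the keys to estimate the distribution.
        sample = sorted(key(r) for r in random.sample(records, min(sample_size, len(records))))
        # Bucket boundaries at roughly equal quantiles of the sampled keys.
        bounds = [sample[(i * len(sample)) // buckets] for i in range(1, buckets)]
        bins = [[] for _ in range(buckets)]
        for r in records:
            k = key(r)
            # First boundary that exceeds k decides the bucket (bisect would be faster).
            i = next((i for i, b in enumerate(bounds) if k < b), buckets - 1)
            bins[i].append(r)
        out = []
        for b in bins:
            out.extend(sorted(b, key=key))  # each bucket holds a roughly equal share
        return out

    print(distribution_sort(list(range(20, 0, -1))))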


Shameless plug:

A couple years ago I wrote an article about where this O(n log n) bound comes from if anyone is interested:

"Comparison Sorting Algorithms and Mystery of NlogN complexity"

https://medium.com/@kamyarg/comparison-sorting-algorithms-an...


It's not some obscure fact. At least it was taught in my bachelor's intro to algorithms class, along with Stirling's formula as part of the proof.
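
For reference, a sketch of that lower-bound argument: a comparison sort has to distinguish all n! input orderings, each comparison has two outcomes, so the decision tree needs depth at least log2(n!), and by the Stirling-style bound

    \log_2(n!) = \sum_{i=1}^{n} \log_2 i \ge n \log_2 n - n \log_2 e = \Omega(n \log n)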

Python sort works on generic objects that provide "<", not more specific types. For the generic algorithm, as I'm sure you're aware, O(n log n) is optimal.

And yes, radix sorts are something like O(n log k), where k is the bitwidth of the maximal key. Since k is often a constant this could be thought of as O(n). A Python sort could enumerate the list, check for all (1) integer contents and (2) no overridden comparator method, and then use a radix sort, I guess.
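
For what it's worth, a rough sketch of such a radix sort over byte-sized digits for non-negative integers (illustrative only, not CPython code):

    def radix_sort(nums, key_bytes=8):
        # One stable bucketing pass per byte, least significant first.
        for shift in range(0, key_bytes * 8, 8):
            buckets = [[] for _ in range(256)]
            for n in nums:
                buckets[(n >> shift) & 0xFF].append(n)
            nums = [n for bucket in buckets for n in bucket]
        return nums

    print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))  # [2, 24, 45, 66, 75, 90, 170, 802]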


nit: runtime for radix sort is O(nk), since you sort the full list by each of the digits, sequentially.

I've always thought claiming radix sort on integers to be linear was a bit disingenuous, since k << log(n) only if your list has a lot of duplicate entries.

In fact, if your integers are all unique, then k >= log(n).

(for consistent choice of base for log)


> (for consistent choice of base for log)

It's not clear the choice of log should be consistent. For radix sort the number of iterations is the log of the max element in the base of the number of buckets. For a merge sort the number of comparisons is pretty close to log-base-two of the number of elements to sort. (Not exactly -- to merge n items you need n-1 comparisons, not n.)

Of course, asymptotically they're the same -- lg(n) == log(n) etc -- and we're counting comparisons on one two-fingered hand and divs+mods on the other, so non-asymptotic comparisons don't make much sense without also talking about benchmarks.


You're totally right. joshuamorton made the same correction on another comment where I made the same error. Mea culpa!

Counting sorts are super neat and O(n), if the number of unique items is really small.

Imagine a list of all 0s and 1s that you wanted to sort (like 001110101). Why bother sorting? Just count up how many 0’s and 1’s there are (four 0’s, five 1’s) and generate the sorted list. O(n).

Clearly this doesn’t work when you have real world data to sort, but it is the basis for a radix sort, where you sort a batch of numbers digit-by-digit, giving you O(n*d). Though I think it’s rarely used.
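
A minimal counting-sort sketch of that idea (illustrative only):

    def counting_sort(values, max_value):
        counts = [0] * (max_value + 1)
        for v in values:
            counts[v] += 1
        out = []
        for value, count in enumerate(counts):
            out.extend([value] * count)
        return out

    print(counting_sort([0, 0, 1, 1, 1, 0, 1, 0, 1], max_value=1))  # [0, 0, 0, 0, 1, 1, 1, 1, 1]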


We did actually get to use it at work, to real performance benefit.

We have a leaderboard with a list of users, which is normally a list of users totalling between 2 and 50,000, but in rare circumstances can be several million. The scores users can achieve are in a rather small range, perhaps only 450k valid different scores, and we were doing this sort on average once every 30 seconds.

What we ended up doing was switching to a radix sort when the list of users got to be very long, as it was substantially faster in those cases, making the user experience much better, as the leaderboard was much more up to date for the very large cases.


> The basic idea is to read records for a while, get some stats about the key distribution, and set up the buckets to get a roughly equal fraction of the observed keyspace

Hadoop Terasort does the same kind of thing: it starts off with a partitioner doing a radix sort and then sorts each part with fewer comparisons (the Spark one actually does local sorts and then has the merge-sort part just fetch ranges after the first pass). However, those two work great when the key ranges are almost entirely unique, with no big runs of identical keys and differing values.

The Timsort one however has an advantage when there are huge runs of identical values or almost sorted inputs, like when you have a time-series with the occasional out of order packet. The galloping mode in the algorithm neatly eats those ranges up very fast.

My favourite example to show that sorting is data-dependent is the tricolor sort[1], which illustrates what happens when you have a large number of identical keys.

[1] - https://en.wikipedia.org/wiki/Dutch_national_flag_problem#Th...
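
Roughly, the three-way partition from that page, sketched in Python with the "colors" as 0/1/2:

    def three_way_partition(a, pivot):
        # Single pass: values < pivot go left, > pivot go right, == pivot stay in the middle.
        lo, i, hi = 0, 0, len(a) - 1
        while i <= hi:
            if a[i] < pivot:
                a[lo], a[i] = a[i], a[lo]
                lo += 1
                i += 1
            elif a[i] > pivot:
                a[i], a[hi] = a[hi], a[i]
                hi -= 1
            else:
                i += 1
        return a

    print(three_way_partition([2, 0, 2, 1, 1, 0], pivot=1))  # [0, 0, 1, 1, 2, 2]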


Big O notation is about the theoretical upper bound of an algorithm. Clever data-based tricks like putting items in buckets or gathering statistics are unrelated to this notation as they rely on a certain consistency. Maybe you should call it average run time on typical data.

Radix sort is worst case O(n) (which is what I'm assuming the parent commenter is referring to). It's not "n in common situations but n log n in pathological cases" the way Timsort is.

The reason it "violates" the n log n lower bound of comparison sorts is that it isn't a comparison sort. Sort of like how hash table lookups "violate" the O(log n) average lower bound of binary tree lookups. It has different performance bounds because it's a different problem.


No, radix sort is worst case O(n * k). In many common cases, k ~= log(n). In certain specific cases, k < log(n), and specifically for cases where you have a very large n but a bounded number of values (say, you're sorting 10 billion 4-bit ints), k can be considered a constant. But that is by no means generally true.

In most cases k << n. For 64-bit integers, byte-wise radix sort has k = 8, which is less than log n whenever n is more than 256. So radix sort is typically much faster than an O(n log n) sort if your data supports it. It just isn't as widely used because it is not as general as a comparison-based sort.

That logic isn't right. You can implement an O(n log n) sort with whatever log base/radix you want. It could be 2, it could be 256, it could be 4096. If you use the same base/radix for both n and k, then log n is close to k and probably smaller.

Which version ends up faster is mostly based on factors that big O notation doesn't capture easily. You have to push things to impractical numbers like n=2^1000 or 2^1000000 to properly distinguish between constant factors and log factors, and at that point it doesn't actually help you write better code.


After a byte-wise radix sort you still have to sort within each bucket.

In the worst case, every element falls into a single bucket, at which point your best bet is to do a bit-wise radix sort over the low 8 bits.

This ends up being equivalent to k=log(n).


Jumping into this fascinating thread about sorting to ask what the double less-than chevrons mean? I know what one means, but what do two of them mean?

In this context, it means "much less." It's used that way sometimes in mathematical discussions.

Basically “much less than”.


The n log n bound is for comparison sorts, which Radix Sort is not an instance of.

GP is talking about radix sort, basically. If you know all your keys are integers you can sort in O(n log k) (k is maximal key width in bits).

You have that relationship a bit backwards. You can sort in O(n * k), where k is the maximal key width in bits; for a maximal key value of K, that is O(n log(K)). Your formulation suggests radix sort running in O(n log(log(K))), which isn't quite true.

Sorry, my mistake -- you're totally right. Number of bits is log_2(k), where k is just the maximal key number.

WikiSort should be faster still

Original author: https://github.com/BonzaiThePenguin/WikiSort

Graph and copyage (by me): https://tse.gratis/aArray/#details

Or grail sort: https://github.com/Mrrl/GrailSort/blob/master/README.md

Sorting is a deep rabbit hole


WikiSort previous discussion https://news.ycombinator.com/item?id=7404223

Can't find a direct comparison with TimSort though..


I heard of it from a talk about a bug in the implementation of TimSort in several popular libraries [1]. That bug should be fixed in Java, Android and Python.

If you're using another language, you might want to verify that the bug is fixed/not present in your library

[1] http://www.envisage-project.eu/proving-android-java-and-pyth...


Here are the Python 3 docs for sorting [1], in-place list.sort() [2], and sorted() [3] (which makes a sorted copy of the references). And the Timsort Wikipedia page [4].

[1] https://docs.python.org/3/howto/sorting.html#sort-stability-...

[2] https://docs.python.org/3/library/stdtypes.html#list.sort

[3] https://docs.python.org/3/library/functions.html#sorted

[4] https://en.wikipedia.org/wiki/Timsort
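
A quick illustration of the difference between the two entry points linked above:

    nums = [3, 1, 2]
    copy = sorted(nums)   # returns a new sorted list; nums is unchanged
    nums.sort()           # sorts in place and returns None
    print(copy, nums)     # [1, 2, 3] [1, 2, 3]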


Besides Python and Java, it's also the sorting algorithm used by Chrome, Android, and Swift. At this point I think more than half the world's programmers are using Timsort, whether they realize it or not.

And Rust; the Rust std lib uses a modified timsort/mergesort

FWIW, Rust uses pattern-defeating quicksort for its unstable sort.

[1] https://github.com/orlp/pdqsort [2] https://doc.rust-lang.org/std/vec/struct.Vec.html#method.sor...


Technically Swift uses a "modified" version of Timsort that "performs straight merges instead of adopting timsort's galloping strategy" [0].

[0] https://github.com/apple/swift/pull/19717


Note that Timsort uses O(n) extra space: sometimes this can be undesirable.

Note that this is true for any (edit: stable, nlogn) merge sort.

Interestingly, the Golang standard library has a sort in it that claims to be a stable in-place O(N log N) mergesort.

Not true in general, however IIRC it is the case if you want optimal (i.e. n log n) time complexity.

Not even in that case. Heapsort guarantees worst case n log n, and so does quicksort if you use an O(n) median selecting algorithm.

Stable sorts often do require that (mergesort is usually stable; heapsort and quicksort are inherently not), but even that's not required - there is a completely in-place variant of merge sort that only requires O(log n) space for the stack (like quicksort; heapsort is O(1)). See e.g. https://xinok.wordpress.com/2014/08/17/in-place-merge-sort-d...


Apparently there is a O(n log n) time, O(1) space, stable sort: https://en.wikipedia.org/wiki/Block_sort

you can do merge sort in place

You can’t. Merge sort copies data back and forth between two spaces the size of the data set. That’s O(n) extra space.

I spent some time four summers ago with merge sort. I had a PoC for an in-place algorithm that survived several rounds of poorly selected sample data. That was quite a disappointment.

In looking around I believe I ran across several implementations that required sqrt(n) extra space and one that I think claimed log n but was so complicated I never did figure out why it was supposed to work. At least one of these had higher time complexity, but often enough you need to work with data sets that dominate your memory footprint. Even a second array of pointers might push you over.


Algorithms for stable, in-place merging in linear time, hence merge sorting in O(n log n) time, have been known since 1977 (https://doi.org/10.1137/0206025). This first algorithm was too complicated with too large of a constant factor to be practical, but has since been improved.

It concerns me that the most recent citation in that bibliography says “We achieve our goal using Recursive Partitioning combined with In Place merging to sort a given array”

Stack frames are external storage. I’d have to see the code to see how they manage to do recursion without log(n) external storage.

Traditional merge sort can be written using iteration, which makes the external storage for sort state O(1), but the semi spaces are still there.


The paper states that each merge operation uses a constant number of pointers. There’s no reason you couldn’t do an iterative merge sort with this in-place merge operation, just like you would with the standard merge operation, to sort with a constant number of pointers.

To be fair, tail calls don't require stack frames.

If it's just a matter of keeping up with the bounds of the partitions, I can see that being done in constant space.


This isn’t a tail call situation, though. In order to convert it to iteration you have to use external storage.

How many people do you suppose can read that link?

Google scholar returns a pdf[0] when searching the doi.

[0] https://epubs.siam.org/doi/pdf/10.1137/0206025


i couldn't find a direct reference, but i remembered that sedgewick's c++ algorithms book had an in-place iterative mergesort with no auxiliary space. that probably was a false memory, but it seems that there is such a beast:

http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.22....


I remember this article now. As someone else said, for certain time complexities there are more algorithms available.

I believe when I realized that fixing my bug would turn it essentially into this algorithm, I found something else to do.


I just tried to read this, and not only is it horrifyingly complex, it isn't stable, so it doesn't really fit the bill (even if you allow for strange complexities)

Aha! Thank you. That solves a mystery for me. Skimming it a few minutes ago, I thought it claimed to be stable, and I couldn't figure out why it didn't come to mind when I thought about merge sort.

If it's not stable, then what's the point? It's not a merge sort variant by the most important measure, IMO, and as I said, I was only considering merge sort variants.

Even with all of the additional logic people have created to avoid worst case performance, quicksort is simpler than this algorithm by a huge margin.


Heapsort is a lovely, simple, O(NlogN) sort that sorts in place (so that it requires no extra space). (Explicitly stating something that is implied by an existing response).

Edit: Whoops, apparently heapsort is not "stable" (not sure what that means actually), sorry.


A "stable" sorting algorithm preserves the relative order of elements with "equal value". This doesn't really apply if you are only sorting simple values, as there is no difference between a 7 and another 7, but does if you are sorted more complicated objects by some key.

Example: sorting 2#a, 1#c, 2#b by only the first number.

An algorithm that produces 1#c, 2#b, 2#a is a correct sorting algorithm, but not stable as it changes the order of 2#a and 2#b.
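
The same example in Python, whose built-in sort (Timsort) is stable:

    pairs = [(2, "a"), (1, "c"), (2, "b")]
    print(sorted(pairs, key=lambda p: p[0]))
    # [(1, 'c'), (2, 'a'), (2, 'b')] -- the two 2s keep their original order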


Thanks for the explanation, interesting.

Stable sort means that the order of same elements is preserved. For example, sorting strings by length, strings of the same length would retain the same order between themselves as they had before sorting.

Thanks for the explanation, makes sense. I was tempted to classify this as "irrelevant to me", but then I remembered an application I had recently where I had values (chess games) that I was sorting which I didn't want to reorder if they were adjacent (as far as the sort was concerned) but not identical. Ended up adding an "original order" field as a sort tiebreaker. As it happens a stable sort would have saved me the pain.

Said another way, if I sort my database by user names, the John Smith who has been my customer the longest always comes first in the line of John Smiths.

One of the things you can do with a stable sort is fake complex sorting by iteratively sorting by each criterion (see the sketch below). Though I have rarely seen that be practical anywhere other than tabular data.
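
A small Python sketch of that trick (the fields are made up): sort by the secondary key first, then the primary; because each pass is stable, rows tied on the primary key keep their secondary order.

    rows = [("Smith", 2019), ("Jones", 2021), ("Smith", 2015)]
    rows.sort(key=lambda r: r[1])   # secondary key: year
    rows.sort(key=lambda r: r[0])   # primary key: name
    print(rows)  # [('Jones', 2021), ('Smith', 2015), ('Smith', 2019)]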


I think most Python devs have heard of this, as well as avid followers of HN, of course.

Here is a video I made a while back to better visualize the runs of timsort:

https://www.youtube.com/watch?v=ZxLxf5xqqyE

Might be of interest...


Rust uses timsort as the stable sorting algorithm in its standard library.

https://doc.rust-lang.org/std/primitive.slice.html#method.so...


For an audio representation of TimSort: https://www.youtube.com/watch?v=xoR-1KwQh2k&t=274s

I watched probably ten minutes of that video. Don't know that I learned anything, but it was sort interesting.

Never heard of since it was last discussed on HN: https://news.ycombinator.com/item?id=17436591


Tim Peters is also the IEEE-754 guru for the Python codebase.

Truly a giant of the Python community.

<1/2 wink-ly yr's>


There are cases where pattern defeating sort beats timsort

https://github.com/EmuraDaisuke/SortingAlgorithm.HayateShiki


"external area" tho

As an (unrelated to the Algorithm) Python developer, also named Tim, I've never been able to hear the last of it. Still a great algorithm, though!

I always thought this sort visualizer was cool http://sorting.at/

Isn't this the same idea as the Introsort[1], which was created in 1997 (four years earlier) and is the default sorting algorithm in C++'s standard library (and .NET framework since 4.5)?

[1] https://en.wikipedia.org/wiki/Introsort


Heard about it by reading Java stacktraces one day.

That's why I encode all my data in a format optimized for galloping mode. Just kidding. That'd be sweet though

I also love pdqsort. It has many of the same advantages, and has the advantage of being easier to implement (particularly if you already have a quicksort lying around).

https://github.com/orlp/pdqsort

Basically, timsort is to mergesort as pdqsort is to quicksort.


Another sorting algorithm, taking advantage of SIMD instructions and being cache-aware: ChipSort: https://github.com/nlw0/ChipSort.jl

I don't understand how it is so special. Every single real-world sort I have seen is a hybrid sort, very often merge sort + insertion sort (or bubble sort) for small sub-arrays.

Can you enlighten me?


It's an awesome idea, but it seems to cross the line into "exploiting specific properties of the input", even if just a little, which would impact its usefulness in, say, a general purpose library.

Thanks for editing what was originally an inane title "Timsort the sort you haven't heard of" to describe something I (and probably most here) have heard of many times over and over again.

How to be THIS good as a Software Engineer?

Practice, challenge yourself constantly, seek mentorship, humbly request feedback and act on it, and devote your life to the craft, not your wallet or your family. I think. Read Hamming.


Easier said than done

If it was easy, folks wouldn't have to ask how it could be done and everyone would be THAT good. ;)

radixsort is better still

Only for numbers.

Tim the enchanter!

"never heard of"?? I would hope all Python devs at some point Google "What algorithm is Python's built-in sort function?"...

https://docs.python.org/2/howto/sorting.html#sort-stability-...

https://stackoverflow.com/questions/10948920/what-algorithm-...


Great, yet another no-true-Scotsman test for being a proper developer. There is so much you should have read, googled, and written to be a "true" dev these days. My theory is: if you are often learning stuff and producing good working code, be happy. Not every developer needs to know the underlying sort algorithms.

The person never said that you had to google it to be a "proper developer", just that they hoped that every python developer would have done so.

I hate to disappoint that hope, but I have never searched for Python's sorting algorithm.

What about the hash strategy used in dicts? I'm holding on to hope.

Understanding where a system breaks is a pretty critical part of "producing good working code," at least for definitions of "good" beyond "closes the bite-sized, cog-in-a-machine story assigned to me in this sprint." I'd hope that any developer making decisions of note in a nontrivial system was at least dimly aware of their language's sort complexity.

How often are you sorting something where N is great enough for it to matter how the sorting works? I spend most of my time choosing data structures such that sorting anything other than handful of elements is unnecessary.

His point is a great many people are familiar with this algorithm by way of working with Python.

You can certainly be happy not knowing the tools of your trade very intimately, but that's definitely not something to encourage.


I took the tone to be more incredulous with the "??" and "I would hope".

I, for one, cannot wait for quantum computing to become the norm, like current computing is. Then the only algorithm everyone will use will be randomsort. Got a list to sort? Allocate one qubit for each element, apply randomsort, and boom, done in under a picosecond, regardless of list size. This is the ultimate algorithm for parallelization; all others will take longer since they depend on sequential input.

That’s not how it works. Grover's algorithm takes O(sqrt(n)) using a quantum computer.

Values in a quantum computer are superpositions, but when measured they will return only one result. They don’t have an infinite amount of time and/or space.


We are today with quantum computing where we were in the 19th century when Ada was creating the first computer program. Sure, Grover's algorithm is good for the current state of quantum computing, but when it becomes the norm, just like silicon-based computing is today, it will be an entirely different state of affairs. By that time Grover's algorithm will have taken its place in history but will not be used in practice. From the wiki: "Perform the following 'Grover iteration' r(N) times." Iteration? That means one stage of the computation is waiting for a previous one to be completed. That, to me, does not sound like a true parallelization algorithm, hence randomsort still wins.

I suggest you read up on how superposition works in quantum computers–they're not just computers with an infinite number of cores :/

No, they are not. Also, 128 billion is a big enough number that if you told a 19th-century scientist working with punch-card machines that in the future a computer with 128 billion punch cards would exist, he would have replied, just like you, that there are not enough trees on Earth to make 128 billion cards for a single computer, let alone for billions of them such that they would be more common than horses were in the 19th century. And yet here we are, with computers that have 128 GB of RAM, like the one I am using to reply to you. So how about you let me dream big instead, eh?

If you are going to come up with purely conjectural advantages with no basis in known science, why bother attributing them to one thing (like quantum computing) and not any other thing?

Like why are you not arguing that C++24 will make all algorithms O(1)? There is as much reason to believe that as there is to believe that quantum computing will do so.

Or maybe graphene memristors will form timelike loops and allow computers that give you results before you ask the question? Maybe! Again, there is no more reason to think that won't happen than some kind of purely conjectural magic from quantum computation.

Maybe the development of green energy such as solar panels will lead to the development of nano-scale self-assemblers, allowing us to turn the entire moon into ultra-efficient computronium powered by sinking the moon's residual heat of formation into deep space, making the asymptotic complexity of most algorithms on many problems largely irrelevant. Maybe! Or maybe the insight might actually come from a school kid who trips over a rock and gets a vision after hitting their head. Better start putting pebbles out in front of schools, because you never know! :)

Isn't getting a sqrt() speedup for everything and an exponential speed up for a few things good enough for you?

If we really understand the idea of quantum computing so poorly that that we've massively underestimated it, isn't it even more likely that our misunderstanding has made us massively overestimate it?



