
> AlphaDev uncovered new sorting algorithms that led to improvements in the LLVM libc++ sorting library that were up to 70% faster for shorter sequences and about 1.7% faster for sequences exceeding 250,000 elements.

As someone who knows a thing or two about sorting... bullshit. No new algorithms were uncovered, and the work here did not lead to the claimed improvements.

They found a sequence of assembly that saves... one MOV. That's it. And it's not even novel, it's simply an unrolled insertion sort on three elements. That their patch for libc++ is 70% faster for small inputs is only due to the library not having an efficient implementation with a *branchless* sorting network beforehand. Those are not novel either, they already exist, made by humans.
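
For reference, a branchless sorting network on three elements is just three compare-exchanges. A rough sketch (mine, not the actual libc++ code) of what I mean:

    #include <algorithm>
    #include <cstdio>

    // Rough sketch of a branchless 3-element sorting network (not the libc++
    // implementation): three compare-exchanges, each of which a compiler can
    // lower to conditional moves / min+max instead of a branch.
    template <typename T>
    void sort3(T& a, T& b, T& c) {
        auto cswap = [](T& x, T& y) {   // compare-exchange: (x, y) -> (min, max)
            T lo = std::min(x, y);
            T hi = std::max(x, y);
            x = lo;
            y = hi;
        };
        cswap(a, c);
        cswap(a, b);
        cswap(b, c);
    }

    int main() {
        int a = 3, b = 1, c = 2;
        sort3(a, b, c);
        std::printf("%d %d %d\n", a, b, c);  // prints: 1 2 3
    }

The point is simply that there are no data-dependent branches to mispredict, which is where the speedup on small inputs comes from, not from any new algorithm.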

> By open sourcing our new sorting algorithms in the main C++ library, millions of developers and companies around the world now use it on AI applications across industries from cloud computing and online shopping to supply chain management. This is the first change to this part of the sorting library in over a decade and the first time an algorithm designed through reinforcement learning has been added to this library. We see this as an important stepping stone for using AI to optimise the world’s code, one algorithm at a time.

I'm happy for the researchers that the reinforcement learning approach worked and that it gave good code. But the paper and surrounding press release are self-aggrandizing in both their results and their impact. That this is the first change to 'this part' of the sorting routine in a decade is also just completely cherry-picked. For example, I would say that the fact that the libc++ sorting routine was QUADRATIC, which I reported (with an ignored patch) in 2014 (https://bugs.llvm.org/show_bug.cgi?id=20837) and which was finally fixed in late 2021 (https://reviews.llvm.org/D113413), is quite the notable change. If anything it shows that there wasn't a particularly active development schedule on the libc++ sorting routine over the past decade.




Yeah, this type of grifting is pretty much pro forma for researchers. I used to stress about it, but I've realized it just sort of goes with the territory.

It's also worth noting that the paper is blindingly obvious and everyone started doing this a long time ago but didn't want to tip their cards.

And that's the real contribution here - Google is tipping their cards. We now have a rough baseline to compare our results against.


Well, I feel grifted too: being a developer, yet not up on the state of the art in algorithms (at that level), I was inclined to buy their BS.


It is likely meant to help secure further research funds.


I feel like your take is overly cynical. The fact that humans can do the same thing by hand is not really the point. The contribution lies in the fact that their method derived this improvement *automatically*, which is where the impact lies. No one cares all that much if a human can make a sorting routine 2% faster, but if a program can do it, it suddenly becomes interesting (since it suggests that a similar approach can be applied to many other routines).


I am not cynical about the research itself, I am critical of claims such as "new sorting algorithm uncovered", "up to 70% faster", or "first change in a decade". The research is good. The achieved results are massively inflated.

What they achieved: automatically generated good code.

What they claim: automatically generated code that is revolutionary and an improvement on the state of the art.

And as another commenter noted, superoptimizers are also already a thing: https://en.wikipedia.org/wiki/Superoptimization

There's also automated search being done for faster sorting networks, which recently produced better-than-state-of-the-art sorting networks: https://github.com/bertdobbelaere/SorterHunter
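
To give a flavour of how simple this kind of search can be, here's a rough toy sketch (nothing like SorterHunter's actual evolutionary search, just to show how small the verification side is): randomly propose compare-exchange sequences and check them with the zero-one principle.

    #include <cstdint>
    #include <cstdio>
    #include <random>
    #include <utility>
    #include <vector>

    // Toy random search for sorting networks, verified with the zero-one
    // principle: a comparator network on n wires sorts every input iff it
    // sorts all 2^n inputs consisting of 0s and 1s.
    using Net = std::vector<std::pair<int, int>>;

    bool sorts_all(const Net& net, int n) {
        for (std::uint32_t bits = 0; bits < (1u << n); ++bits) {
            std::vector<int> v(n);
            for (int i = 0; i < n; ++i) v[i] = (bits >> i) & 1;
            for (auto [a, b] : net)
                if (v[a] > v[b]) std::swap(v[a], v[b]);
            for (int i = 0; i + 1 < n; ++i)
                if (v[i] > v[i + 1]) return false;
        }
        return true;
    }

    int main() {
        const int n = 4, len = 5;  // 5 compare-exchanges is the known optimum for 4 inputs
        std::mt19937 rng(42);
        std::uniform_int_distribution<int> wire(0, n - 1);
        for (long attempt = 0; attempt < 1'000'000; ++attempt) {
            Net cand;
            while ((int)cand.size() < len) {
                int a = wire(rng), b = wire(rng);
                if (a < b) cand.emplace_back(a, b);  // comparator puts min on the lower wire
            }
            if (sorts_all(cand, n)) {
                for (auto [a, b] : cand) std::printf("(%d,%d) ", a, b);
                std::printf("\n");
                return 0;
            }
        }
        std::puts("no sorting network found within the attempt budget");
    }

The hard part (and what tools like SorterHunter actually work on) is making the search clever enough to find short networks for larger n, not the verification.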


While I agree that the claims are hyperbolic, I think you are approaching this paper from the point of view of someone who knows a lot about sorting. Because of this, it's normal that the claims of these guys, who probably don't know much about it, are grating for you.

But, at its core, this is really a RL paper. The objective is to see how far a generic approach can work while understanding as little as possible about the actual domain. After AlphaGo exceeded expectations, the question becomes: "What else can RL do, and can it do anything actually useful?", and this paper seems to suggest that it can optimize code pretty well! I'm really not sure they are self-aggrandizing in terms of impact. The impact of an approach like this could potentially be very large (although I'm not saying that it actually is, I don't know enough).


Sorting is not a niche topic. Anybody who majored in CS (which is a ton of people these days) will read the abstract and most of the paper thinking "Wow, I can't believe they discovered something better than O(N log N)" because that's usually what people mean when they say "better sorting algorithm". What they discovered here is effectively a new compiler optimization. They should present it as such instead of calling it a new sorting algorithm.

But ya, discovering a new compiler optimization automatically is kinda cool.


I see your point. I just went over the abstract again and I totally agree.


I hope people majoring in CS will not think that, as they learned that n log n is the theoretically best complexity for a sorting algorithm. They will rather think that they found an algorithm with better constants in front of n log n.


true that


70% better than O(N log N) is still O(N log N).


It's 70% better than O(1). The algorithms it found are for sort3, sort4, and sort5, which were poorly optimized in LLVM's libc++.


It may have also laundered those from open-source code that wanted to optimize sort3, sort4, and sort5.


Tbh, people who majored in CS are supposed to know that it was proven long ago that better than O(N log N) sorting (in the general case) is impossible.
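
(For completeness, the one-paragraph version of that proof: a comparison sort must distinguish all n! possible orderings of n distinct keys, so its decision tree needs at least n! leaves and hence height at least log2(n!) >= (n/2) * log2(n/2) = Omega(n log n) comparisons in the worst case. It only applies to comparison sorts, which is why radix/counting sort can do better for restricted key types.)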


> because that's usually what people mean when they say "better sorting algorithm"

Is it really? I've heard of a few "better sorting algorithms" and it's never meant that in my experience.


When people say "sorting algorithm" they mean something like bubble sort or merge sort.


Anyone who does any basic algorithmic CS stuff would have been exposed to sorting algorithms, their variations, sorting networks and so on.

There are already superoptimizers that use genetic algorithms to find the most optimal code sequence for small, easily verifiable tasks. That is also a form of reinforcement learning, in a way.


When one has a big fuck off hammer, everything becomes a nail. Seems to apply to ML too.


I somewhat agree: I'd be far more impressed by something that could find optimal, or even just better, sorting (or selection) networks for sizes higher than 17 (the last time I looked at the SOTA).


Please check my edit, made right as you commented :)


Oh very, very cool thanks a bunch.

Edit: compiling the hunter code Right Away and hopefully in some weeks I'll have better networks. Selection networks are even harder to find optimizers for; hopefully one can hack this new thing to get some.


How automatically generated was the code that wrote the code?


I don't read it as cynical. It's fair game to call bullshit on bullshit. If an approach exists and is known, the "insert your favorite AI here" does not discover anything.


A thing that is also not novel. People have done search for optimisations for at least the past decade.


Sure, but the whole point is to reduce this kind of search to RL, which is a very general framework. Their paper shows that such a generic approach can solve a very specific problem, and solve it well. But, their paper is about improving RL, not about improving sorting.
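
To make that framing concrete, the general shape (just a sketch of the idea, not AlphaDev's actual formulation or reward) is an episode where the agent emits one instruction at a time and is only rewarded at the end for correctness and latency:

    #include <cstdio>
    #include <string>
    #include <vector>

    // Sketch of "program construction as an RL episode" (the general idea only,
    // not AlphaDev's actual setup): the state is the partial program, an action
    // appends one instruction, and the reward arrives at the end of the episode.
    struct Instruction { std::string text; };

    struct State {
        std::vector<Instruction> program;  // instructions emitted so far
        bool done = false;
    };

    struct Env {
        // Placeholder hooks; a real system would assemble, test and benchmark.
        static double correctness(const State&) { return 0.0; }
        static double latency(const State&) { return 0.0; }

        State step(State s, const Instruction& a) const {
            s.program.push_back(a);
            s.done = (a.text == "ret" || s.program.size() >= 32);
            return s;
        }
        double reward(const State& s) const {
            return s.done ? correctness(s) - 0.01 * latency(s) : 0.0;
        }
    };

    int main() {
        Env env;
        State s;
        while (!s.done) s = env.step(s, Instruction{"ret"});  // trivial stand-in policy
        std::printf("episode reward: %f\n", env.reward(s));   // a learning agent would update on this
    }

Once a problem is in that shape, the same generic machinery applies whether the "program" is a sorting kernel or anything else, which is the point being made about generality.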


Sometimes one has to wonder how RL can be both so generic and still qualify for publication in Nature again and again and again ;)


Automatic tuning and optimisation of code is not new.


What's surprising is that anyone would've expected an AI to come up with a brand-new algorithm with better complexity than pre-existing human-made solutions. How could it possibly come up with something better when it doesn't even understand how the original authors of qsort/mergesort/etc. came up with their own?

Sure, it's great PR for the company, but.. the results just aren't there.


How could AlphaZero possibly play better chess than humans when it doesn’t even understand the history of chess theory?

RL doesn’t stop at human levels


Because the entire history of chess theory is really a set of heuristics to optimize a tree search.


So is computer science.


You make it sound so simple. Why don't we let an AI try to come up with all of CS on its own? I doubt it would/could.


Even if AlphaZero does play better chess, there's absolutely zero it can do in terms of explaining why it played that way. AlphaZero is zero in terms of explainability. Humans have to explain to themselves and to others what they do, this is key in understanding what's happening, in communicating what's happening, in human decision-making, in deciding between what works and what doesn't and how well or how bad it works.

Returning to the original DeepMind press release: it's misinforming the public about the alleged progress. In fact, no fundamental progress was made; DeepMind did not come up with an entirely new sorting algorithm, and the improvement was marginal at best.

I maintain my opinion that Alphadev does not understand any of the existing sorting algorithms at all.

Even if AI comes up with a marginal improvement to something, it's incapable of explaining what it has done. Humans (unless they're a politician or a dictator) always have to explain their decisions, how they got there, they have to argue their decisions and their thought-process.


It cannot explain because (1) it is not necessary to become good and (2) it wasn't explicitly trained to explain.

But it's reasonable to imagine a later model trained to explain things. The issue is that some positions might not be explainable, as they may require too much branching and too many edge cases, so the explanation would not be understandable by a human.


It's unreasonable to give up on explanations and deem something "not understandable" when we've been doing this thing for 3000+ years called mathematics, where it's exactly explainability that we seek and the removal of doubt. The only other entities that we know of who can't communicate or explain what they're doing are animals.


Can you explain your tastes? Why you prefer an apple to an orange, for instance? Not really.

Can you explain how you had the intuition for a certain idea? No: you can explain why it works, but not how the intuition came about.


This isn't a question of taste. The topic can't be trivialized to a choice between apples and oranges. I actually reject your entire last message.


My point is that most of our actions are intuitive and cannot be explained. Maybe this is similar to System 1 vs System 2.


It's fine if you want to refer to Kahneman's classification [1] of instinctual and thorough thinking. Explainability is a separate topic. Also, when the amounts of energy and compute used are as high as they are, the results (the return on investment) really aren't that high. Hopefully there are better days ahead.

[1] https://en.wikipedia.org/wiki/Thinking,_Fast_and_Slow


Every DeepMind press release is like this.


You know people at Google, tell Demis about this.


This is DeepMind's modus operandi. Every press release is just utterly hyperbolic nonsense that doesn't withstand the slightest scrutiny. AlphaGo, AlphaFold, AlphaDev... they've done literally nothing to improve the human condition, and may have actually made things worse. Go players have DECREASED their Elo after playing AlphaGo (or just quit the game altogether). I would be embarrassed to be associated with DeepMind.


This is unbelievably wrong. Deepmind has probably the best academic group in RL. The difference between Deepmind and OpenAI is that Deepmind favors academic endeavours and novelty much more, while completely ignoring any commercialization or products, whereas OpenAI is the stark opposite in that they almost entirely focus on products first, and typically their academic endeavours are just slight modifications of other works scaled up.

Don't get me wrong, Sutskever (et al.) has done incredibly good work previously, but when it comes to the products, they're much more engineering and marketing polish than scientific endeavours.

Botvinick's work on meta-RL, for example, is an interesting direction that Deepmind has explored, one that few other companies, interested only in engineering, would venture towards.


That's the thing with Deepmind though. They almost never actually end up advancing things because A) they don't release weights and B) they don't usually develop their ideas into useful tools themselves, forcing others to redo all their work.

So yeah, it's essentially a PoC PR stunt factory. Just look at AlphaZero. They made a huge deal about a suspiciously set-up match against Stockfish, supposedly revolutionising computer chess. But the problem is the computer chess community had to redo all of the work, including all the training, to build Leela Chess Zero. Due to the lack of Google-sized datacentres, the training took years to catch up to the weights in AlphaZero. Same thing with AlphaGo, same thing with transformers.

Now, in AI, usually getting a proof of concept is the easy part. Developing that into an idea that actually works in real world situations is usually the hardest part. I completely reject your idea that somehow the work by OpenAI is less worthy of recognition. I think that's just nonsense.

And surely Google created Deepmind to actually make them product ideas, not to create new competitors, which is what has happened.


> Now, in AI, usually getting a proof of concept is the easy part. Developing that into an idea that actually works in real world situations is usually the hardest part.

I disagree. Of course there is a lot of engineering involved and it's also very important but it's much easier to rebuild things based on published research than develop novel ideas.


I think this is a bit far in the other direction. Deepmind's stuff is often deeply impressive, they just have a tendency to exaggerate on top of that.


+1

There must be a "demonstrate (1) DeepMind #Win per <interval>" requirement somewhere that gets the once-over from the marketing dept. to meet some MBOs.


BTW, Elo is not an abbreviation; it is a person's name: https://en.wikipedia.org/wiki/Arpad_Elo


Around the time of the AlphaGo challenge and afterwards...

1) you could see increased activity in go clubs and online go servers

2) the analysis of the games published by Deepmind has resulted in interesting "discoveries" (or rediscoveries) and changes to what is considered joseki.

3) many people started analyzing their kifus using AI, to find fluctuations in estimated win rate across moves.

So I disagree entirely


It's a bit surprising how poorly DeepMind has lived up to their hype. But they're an OK lab, maybe a bit overly vain.


It's probably very fun to be at DeepMind, I just don't think I'd want to be a part of the cringey hype machine.


I bet it really sucks, tbh. They did all this over-promising and now the only way they can deliver is by grifting like this. That sounds really stressful to me.


ya bet it sucks making $500-800k/year comp getting access to the best hardware and google datasets


I've seen people who get promoted above their band; they are not happy campers.


Oh god, promoted as well, I for one am glad they are bearing the burden for me.


Why not. It’s not like they have deadlines


They still have to publish something in major journals and have presence in major conferences.


a.k.a. - they still care enough to have some semblance of shame. There's the rub.


At least they've produced tangible value unlike black holes of money like the Human Brain project which has delivered close to nothing in multiple decades despite billions of dollars in investment.


I can’t believe this


> Go players have DECREASED their ELO after playing AlphaGo (or just quit the game altogether)

Can you explain this for someone unfamiliar with the game?


I am familiar with the game and I cannot explain it. Go is not typically rated with the Elo system, and the quality of top-level human play has increased since 2016.


I also play some, and yeah that's incorrect. Also, there was recently a bunch of hype about an adversarial strategy that could beat AIs w/o deep readout (i.e. only using the 'policy network' nonlinear ML stuff and not enough actual calculation). Here's a vid of an amateur doing it.

https://www.youtube.com/watch?v=H4DvCj4ySKM

EDIT:

Also, if you want to get into it, Michael Redmond's Go TV on YouTube has an amazing beginner playlist; watch some of that, then maybe blacktoplay.com, and if you likey, play :)


That is correct, but on game servers, ranks (e.g. 1 kyu, 1 dan) are computed via Elo-like means.

https://forums.online-go.com/t/how-does-the-rating-system-wo...


Lee Sedol retired in 2019 following his 2016 defeat by AlphaGo in a 5-game match. At the start of the match most people were confident an AI could never defeat a top human player at Go. By the end of the match, watching (arguably) world champ Sedol suffer lost game after lost game, the story had changed dramatically. Sedol fans were championing his single win against the unstoppable AI.

We (Hacker News) discussed Lee Sedol's retirement here: [1]

To active Go players at the time, AlphaGo and AlphaZero really were as shocking as the debut of ChatGPT was recently.

[1]: https://news.ycombinator.com/item?id=21649495


This is correct history, but not the point TaupeRanger was trying to make (I believe).

I think their assertion is that the release of AlphaGo has actually made human Go players worse at the game, contrasted with chess where most agree that the introduction of Superhuman chess engines has elevated the (human) state of play.

But I don't think there is actually much evidence for that. I'm sure the introduction of AlphaGo did take the wind out of the sails of some players who thought of themselves as superior to our best computers, but for everyone else it seems to have elevated the overall level of play just the same as the chess engines have done.


I don't think that AlphaGo has made players worse. My point is that there's no evidence that anything USEFUL or IMPORTANT has come from a system that has gotten so much hype (and cost ungodly amounts of money). If players aren't getting better (there's no evidence they are) or are quitting the game after playing, it's simply a net negative, along with DeepMind's other ventures.


Sorry, but this is just incorrect. Go players have gotten stronger over time overall [0][1], and AI discovered many new ideas that all top pros have incorporated into their game-play (idk how to give a source for this, it's just very well known in the Go community that the style of play changed drastically in response to AlphaGo, and absolutely everyone trains with AI these days).

[0]: "The sudden overall increase in agreement in 2016 also reinforces the belief that the introduction of powerful AI opponents has boosted the skills of professional players." https://ai.facebook.com/blog/open-sourcing-new-elf-opengo-bo...

[1]: https://www.goratings.org/en/


It's not incorrect. The fact that Go players have gotten stronger over 300 years of recorded data does not in any way show that AlphaGo has made players better. The fact that players are suddenly memorizing AI moves in 2016 and beyond also does not mean they're getting better. This system does not measure how good the players are. It measures how much they copy AI moves (which is rather convenient, since the article is written by AI researchers). The phrase you quoted is so hilariously worded that I initially thought it might be satire. Indeed it does "reinforce the belief" that AI has boosted the skills of players - apparently the researchers themselves are not immune to this "reinforcement of belief"!


> My point is that there's no evidence that anything USEFUL or IMPORTANT has come from a system that has gotten so much hype

Geez. We are talking about pebbles on a wooden plank. They are not even colourful!

Go is a super cool game, but it is just that: a game. We are not talking about curing cancer, or solving world hunger, or reversing climate change here. So by the very formulation, a Go-playing AI can be cool, or interesting, or promising. But could it really be useful/important with all-caps? It sounds like you have too high expectations here.


IT TAKES TIME to solve HUGE CHALLENGES.


The story with Lee Sedol gets even sadder when you look at his rankings chart: https://www.goratings.org/en/players/5.html He immediately lost his heart for the game even though he officially kept playing another three years.


I thought I knew the story. But I'd never seen this graph. Thank you for breaking my heart all over again, jart.


Maybe they just lost, so their Elo went down? Or do losses against AI not count?


Either way, in Elo it depends on the rating of your opponent, and so a loss against an AI with an accurate (very high) rating is not going to lose you much unless you're one of the best players in the world (heck, in chess, Magnus Carlsen's Elo would still go down by practically nothing from losing against Stockfish).
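
To put rough numbers on that (ratings made up for illustration, since engines aren't actually on the FIDE list): the expected score is E = 1 / (1 + 10^((R_opp - R_you)/400)), and a game changes your rating by K * (S - E). With Carlsen at ~2850 and an engine at ~3500, E = 1 / (1 + 10^1.625) ≈ 0.023, so losing (S = 0) with K = 10 costs about 10 * 0.023 ≈ 0.2 rating points.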


Massive improvements in protein folding do nothing to improve the human condition? What?


Name one improvement. Just one thing that has ACTUALLY helped real life humans and been a net positive since AlphaFold's inception 5 years ago.


I am a neuroscientist, not affiliated with Deepmind. I can't speak for the other AlphaThings, but AlphaFold dramatically changed the way the biomedicine field deals with protein structures, shortening the gap between hypothesis and experiments by months if not years. You have no idea of what you're talking about.


Name one real human being that has benefited from this "shortened gap", aside from the CVs and H-Index of the researchers themselves?


We're talking about biomedical science here. Things move slowly because the domain is exceptionally complex and human lives are in the balance.

AlphaFold catapulted protein structure prediction forward, and it's hard to overstate how important understanding protein structure is in modern drug development

As an example of how this will be used to help actual people, here's a paper that uses AlphaFold to identify the parts of cancer-associated proteins that interact with each other.

https://onlinelibrary.wiley.com/doi/full/10.1002/pro.4479

The obvious next step is to develop drugs that disrupt these interactions and thereby disrupt cancer. But, it's going to take years, maybe decades before any drug resulting from this research is in actual patients.

There are dozens of other papers like this.


And yet, even after 5 years there's no sign of any real, meaningful drugs, even just Phase I trials. In 5 - 10 years (which will be 10 to 15 years after AlphaFold was released) I am willing to bet real money that there will be zero drugs discovered by AlphaFold that meet the following criteria:

1) The drug couldn't have been discovered without AlphaFold.

2) It has been proven to reduce all-cause mortality (the thing real patients actually care about) in a randomized controlled clinical trial BETTER than the prior standard of care (or significantly more cheaply, or with significantly reduced side effects).


Send them your resume or ideas about how to do it faster and better; it could help!


There is no way to do it faster or better with these techniques. It is a waste of money - that's the entire point. My advice would be: stop wasting time, money, and human brainpower. Go off and try entirely new approaches to AI that might actually work!


This is such a bizarre take. AlphaFold is faster and better - but it still takes years to develop anything in the life sciences.

It's like pointing to special relativity in 1905 and saying it'll never be useful for anything.


Exactly!


I think it's fair to say that it (where "it" is defined as "DeepMind's contribution to the protein folding problem") hasn't yet given us massive improvements to the human condition.

It might, and in fact I think it probably will. But it hasn't yet.


When the biggest criticism of DeepMind is that it hasn't literally saved the world yet, I think that is pretty telling about how impressive it really is.


That isn't the criticism at all. The criticism is that it hasn't done ANYTHING, and has probably been a net negative since human brainpower and energy costs are being spent on (so far) useless technology for 5 years. It's not that it hasn't saved the world, it's that it's worse than useless.


To be fair, I have read that, while impressive, it still has little practical application?


I thought the same thing: it smacks of desperation at the moment, and any tiny win is exaggerated.

It's not hard to see why: with the emergence (ha) of OpenAI, Midjourney, and all of this generative modelling, what has DeepMind done? I imagine the execs at Google are asking them some very probing questions about their mediocre performance over the last 5 years.


Deepmind has actually done quite an enormous amount, but it's been in academia, not in the commercial product sphere. Just because something is not on a little web page available to average Joes does not mean there isn't value in it. For example, Deepmind's work towards estimating quantum properties of materials via density functional theory may not be the best toy for your grandma to play around with, but it certainly does move academia way further ahead of where it once was.


I run atomictessellator.com and have been working on many different implementations of density functional theory for the last 10 years, as well as working closely for multiple years with professors at Stanford University and Oxford University on using advanced, non-static geometry data structures for density functional theory. This is a subject I know a LOT about, so I'm glad you brought it up.

DeepMind's work on density functional theory was complete rubbish, and everyone in computational chemistry knows it. They simply modelled static geometry and overfit their data. We wanted this methodology to work, since computing DFT is expensive, and we did multiple months of rigorous work, but the reality of the situation is that a bunch of machine learning engineers with a glancing amount of chemistry knowledge made approximations that were way too naive, and announced it as a huge success in their typical fashion.

What they then count on is people not having enough knowledge of DFT / quantum property prediction to question their work, and so making claims like "it certainly does move academia way further ahead", which is total rubbish. In what way? Why aren't these models being used in ab initio simulators now? The answer to that is simple: they are not revolutionary; in fact, they are not even useful.


Love that you know enough to call him out haha


Right! Finally, thanks!


Yes, people often mention the number of scientific citations that mention AlphaFold as "proof" of its value. Unfortunately, padding researcher CVs is not a net positive for humanity, and so "moving academia further ahead" (by what metric?) is not necessarily a desirable or worthwhile goal if your definition of "ahead" is sufficiently warped as such. Perhaps, one day, the first real human being will be helped by medicine that couldn't have been found/created without AlphaFold. Unless a scientific endeavor is actually useful to humanity, who cares if it "moves ahead"?


How so? Legitimately interested.


They solved an open challenge problem, Protein Structure Prediction, with AlphaFold, which has been nothing short of revolutionary in the structural biology and biochemistry fields. I do scientific research in these fields and the capabilities AlphaFold provides are used now everywhere.


Yes, many research papers have been written, and many CVs have added lines which include the word "AlphaFold". But has the human condition been improved one iota from the "discovery"? Has anything real actually happened? Not at all. Only "maybes" and "possibilities" after more than 5 years of work. "Revolutionary" at padding researcher CVs indeed.


Man, with all respect, why the "hate" for the good guys working at DeepMind? Everybody loves and respects Demis Hassabis; he is truly a genius. He really wants the best for the world/humanity, and that takes a ton of time, so let's wait and see.


Curious what this research has actually improved in a practical sense? I’m asking for a friend…


"may have actually made things worse. Go players..."

Go players are using AI to get better at Go.


That's like wearing a baseball glove on your dominant hand and taking it off to throw the ball. It's easier, but at some point you need to relearn how to play to make it to the next level.


That claim is often made, and never substantiated.


It's made here in response to a claim that Go players are made worse by practicing with AlphaGo, which is also unsubstantiated.


Wrong. The claim was never made that players got worse, only that their ratings dropped, which is empirically true. After playing AlphaGo, for example, Ke Jie dropped in the rankings and was quickly taken out of 1st place overall. The overall point though, is that AlphaGo produced nothing of value for humans, since there's also no evidence that players have improved since AlphaGo's creation. Factoring in the immense cost and human brainpower wasted on creating a superhuman perfect-information-game-playing program, and it's easily a net negative for humanity.


"there's also no evidence that players have improved since AlphaGo's creation"

Read this, for example. The author is a Korean pro.

"The upside is that we sometimes see a player who was somewhat past his prime suddenly climb back to the top, having trained with AI more intensely. There are a growing number of young and new pros who demonstrate surprising strength. This change gives hope to all pros who dream to become number one, and also makes competitions more interesting to fans as well." [0]

There are serious downsides too.

Also [1]

[0] https://hajinlee.medium.com/impact-of-go-ai-on-the-professio...

[1] https://www.newscientist.com/article/2364137-humans-have-imp...


> Factoring in the immense cost and human brainpower wasted on creating a superhuman perfect-information-game-playing program, and it's easily a net negative for humanity.

Hear me out - what if we learned something about creating AI by creating a new AI?


Assuming a total lack of evidence on either side, I think your assertion is the counter-intuitive one and therefore has the greater burden of proof. Why would Go be any different than Chess?

A rising tide floats all ships – if the best in the world becomes better, others can look at the best and learn from it. What difference does it make if the best player is an AI or a human? The better moves and strategies are still better moves and strategies.



Not AlphaGo, but there are newer neural networks tailored not to crush players, but to teach and explain their playing style, such as Lizzie with Leela Zero.


Just ask any professional Go player.


Anecdotes are not data. Look at the game statistics. There is zero evidence that players are playing at a higher level since the inception of AlphaGo.


Yes, I believe what they're doing already exists in the literature as "supercompilation", though good to see its application under any name.


A decade ago Google had people with breadth and context who could have adjusted the framing of the result of this work (and maybe even re-targeted it to something useful). Today, however, Google is a mix of hyper-narrow expert ICs and leaders who lack domain expertise. Paper in Nature? Sure, let's take it!


Shouldn't libc++ be kinda good? Since it's the standard and all? Why isn't/wasn't it?


The LLVM implementation of libc++ is very young compared to other implementations (it was started 5 years ago or so), so there are still a lot of things missing and a lot of things that can be improved.


libc++ was open sourced May 11th 2010. 13 years ago. https://blog.llvm.org/2010/05/new-libc-c-standard-library.ht...


Can anyone refute orlp's claim that it is an unrolled insertion sort?


The claim "faster sorting algorithm" is wrong. They have to show the time-complexity. They have to prove that their algorithm is faster than linear sorting algorithms. Otherwise, they have to accept that their claim is wrong.


I agree on the sorting front. Removing one cmov is not likely to improve much.


It's not even a cmov. Look at Figure 3 in the paper: https://www.nature.com/articles/s41586-023-06004-9

They eliminated a register-register mov.


Lol I was about to say that would be incredibly crazy if they found a new sorting algorithm. My time complexity in USACO bout to go crazy.


I get your sentiment but note that discovering a new algorithm doesn't have to imply a better time complexity. Bubble Sort and Insertion Sort have the same time complexity but are different algorithms.


What you're saying is a certain perspective which seeks to look at it from first principles.

To the brutalist, the algorithm change is faster, and that's all that matters. A human didn't previously come up with the optimisation. You might as well say a computer running a sorting algorithm is bullshit vs a person doing it, because the difference is just that a bloated chip does it instead, and that's it.



