But it hasn't conquered it! I kept searching the article for some recent breakthrough that I'd missed, but it's not there. Yes, solvers like Pio have been around for years and limit holdem has been essentially solved for a while, but nobody plays limit holdem anyway.
The two most popular games (no-limit Texas holdem and pot-limit Omaha) are still unsolved.
Have to say, Pluribus beating top humans for 10,000 hands isn't the same thing as being superhuman. It's just too small a sample to make that claim.
Further, thousands of the hands that Pluribus played against the human pros are available online in an easy-to-parse format [0]. I've analyzed them. Pluribus has multiple obvious deficiencies in its play that I can describe in detail.
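For a rough sense of why 10,000 hands is small, here's a back-of-envelope check in Python. It assumes a per-hand standard deviation of about 8 bb, a commonly quoted figure for 6-max no-limit; the exact number for this lineup is my assumption:

    import math

    sd_per_hand = 8.0  # assumed std dev in bb/hand (typical 6-max figure)
    n = 10_000
    se = sd_per_hand / math.sqrt(n)  # standard error of the bb/hand estimate
    print(f"std error: {se * 100:.1f} bb/100 hands")  # 8.0 bb/100

A strong win rate is on the order of 5 bb/100, so a raw 10,000-hand sample can't separate a big winner from break-even.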
It seems like it's very difficult to set up any kind of proper, repeatable, controlled experiment involving something as random as poker. Personally, I would be much more convinced if Pluribus played against real humans online and was highly profitable over a period of several months. That violates the terms of service / rules of many online poker sites, but it seems like the most definitive way to justify terms like "solved" or "superhuman".
Normally 10,000 hands would be too small a sample size, but we used variance-reduction techniques to reduce the luck factor. Think things like all-in EV but much more powerful. It's described in the paper.
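If you want a feel for the general idea, here is a toy control-variate sketch in Python. This is not the AIVAT estimator from the paper, just the basic principle: subtract out a luck term whose true mean is known to be zero:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000

    # Toy model: per-hand winnings = true edge + card luck + play noise.
    # "luck" stands in for something computable from the cards alone
    # (like all-in EV), whose true mean is known to be zero.
    edge = 0.05                   # true edge in bb per hand
    luck = rng.normal(0, 10, n)   # high-variance card luck
    noise = rng.normal(0, 1, n)   # residual play-dependent variance
    winnings = edge + luck + noise

    naive = winnings.mean()

    # Control variate: subtract the luck term, scaled to minimize variance.
    beta = np.cov(winnings, luck)[0, 1] / luck.var()
    adjusted = (winnings - beta * luck).mean()

    print(f"naive:    {naive:+.3f} bb/hand")
    print(f"adjusted: {adjusted:+.3f} bb/hand")  # much tighter around 0.05

The adjusted estimator has the same expectation but a fraction of the variance, which is how a 10,000-hand match can produce a statistically meaningful result.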
> Finally, we tested Libratus against top humans. In January 2017, Libratus played against a team of four top HUNL specialist professionals in a 120,000 hand Brains vs. AI challenge match over 20 days. The participants were Jason Les, Dong Kim, Daniel McAulay, and Jimmy Chou. A prize pool of $200,000 was allocated to the four humans in aggregate. Each human was guaranteed $20,000 of that pool. The remaining $120,000 was divided among them based on how much better the human did against Libratus than the worst-performing of the four humans. Libratus decisively defeated the humans by a margin of 147 mbb/hand, with 99.98% statistical significance and a p-value of 0.0002 (if the hands are treated as independent and identically distributed), see Fig. 3 (57). It also beat each of the humans individually.
> The remaining $120,000 was divided among them based on how much better the human did against Libratus than the worst-performing of the four humans.
Surely the correct strategy here is for the human players to collude to give as much money as possible to a single player and then split the money afterwards, no?
Also, the fact that the players can only gain money without losing anything likely changes their play somewhat. By default I'd assume (and have generally observed) that most players on a freeroll (or better than a freeroll, really) tend to undervalue their position and gamble more than is usually wise.
I'd definitely be interested in seeing a "real" game where the humans are betting their own money.
The four humans were splitting $120,000 between them, and each one's share depended on how much better he did than the other humans. The pool itself was fixed, so there was no incentive to collude.
Top pro poker players understand the value of money. They weren't treating it as a freeroll, and anyone who has seen the hand histories can confirm that.
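To make the no-collusion point concrete, here's a toy version of the payout in Python, assuming a proportional rule; the paper only says shares depended on performance relative to the worst performer, so the exact formula is my assumption:

    # Hypothetical proportional split of the $120k performance pool.
    def split(results, pool=120_000):
        worst = min(results)
        edges = [r - worst for r in results]  # margin over the worst performer
        total = sum(edges)
        if total == 0:
            return [pool / len(results)] * len(results)
        return [pool * e / total for e in edges]

    honest = split([-50, -80, -120, -147])   # plausible mbb/hand results
    collude = split([10, -200, -200, -200])  # dump everything to one player
    print(sum(honest), sum(collude))         # 120000 both times

The pool is fixed-sum among the humans, so colluding to pump one player's result doesn't create a single extra dollar to split afterwards.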
Do you think human players could use the results of this paper to learn how to be better poker players? I'm wondering if it could be an alpha go type situation where players learned different strategies.
The journalist who contacted me told me he did so because the software keeps coming up when he talks to pro players. While it's certainly not the one that advanced the science the most (talk to Noam Brown if you want that), nor the fastest (talk to Oskari Tammelin about that), it's still very popular and was the first to get a big following. It changed the game and became part of online poker culture. I am quite proud of that, and I think it deserves to be mentioned a lot in an article about how computers changed poker.
I know that PioSolver is not a "poker AI" per se, but the article seems to say it can tell you what to do based on the table situation. Has anyone tried pitting pro players against PioSolver?
PioSolver requires putting in the hand range of the opponent, so the quality of PioSolver's solution is largely down to how accurate that hand-range guess is. And if a pro knows he is playing against PioSolver configured with a certain hand range, he can just change his strategy to adapt. In theory, though, if PioSolver knows the correct hand range, it shouldn't be possible to do any better than tie against it given enough hands.
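You can see both effects in a toy zero-sum game. Here's a sketch with rock-paper-scissors standing in for poker: the equilibrium mix never loses on average no matter what the opponent does, while a maximally exploitative strategy built on a wrong assumed "range" can lose:

    import numpy as np

    # Rock-paper-scissors payoffs for the row player.
    A = np.array([[ 0, -1,  1],
                  [ 1,  0, -1],
                  [-1,  1,  0]])

    def ev(p, q):
        return p @ A @ q  # expected payoff of mix p against mix q

    eq = np.array([1/3, 1/3, 1/3])       # equilibrium mix
    assumed = np.array([0.6, 0.2, 0.2])  # the "range" we assume for the villain
    exploit = np.zeros(3)
    exploit[np.argmax(A @ assumed)] = 1.0  # best response to the assumption

    actual = np.array([0.2, 0.2, 0.6])   # what the villain actually plays
    print(ev(eq, actual))       #  0.0 -- equilibrium at worst ties on average
    print(ev(exploit, actual))  # -0.4 -- max exploit vs a wrong range loses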
There is still a lively academic community and major progress! Check out CMU's no-limit results [1]. (I realize articles like this have to pick some angles to make it interesting, but it was weird to see only dated research mentioned.)
But if you are rooting against the machines, don't worry: it is almost certainly impossible to calculate a full equilibrium policy for no limit multiplayer, so we will instead be debating over the virtues of various types of imperfection for a long time. And even if an Oracle gave us convenient access to equilibrium strategy, it would still not be the optimum at a table full of imperfect players. Your poker game is safe for a while!
It doesn't even matter if you can calculate a multiplayer equilibrium. It's not "the solution" in the same way it is in heads-up: unlike in HU, you can still lose if you employ the equilibrium in multiplayer.
That's not true in practice for poker. Pluribus showed that if you run CFR in multiplayer poker you get a solution that works great in practice. Multiple equilibria are certainly a theoretical issue for many games, but poker conveniently isn't one of them.
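For anyone curious what sits at the core of CFR, the update is regret matching. Here's a minimal self-play sketch on rock-paper-scissors; Pluribus runs a far more elaborate Monte Carlo CFR over the full game tree, but the regret update is the same idea:

    import numpy as np

    A = np.array([[ 0., -1.,  1.],
                  [ 1.,  0., -1.],
                  [-1.,  1.,  0.]])  # row player's payoffs

    def strategy(regrets):
        # Play each action in proportion to its positive accumulated regret.
        pos = np.maximum(regrets, 0.0)
        return pos / pos.sum() if pos.sum() > 0 else np.full(3, 1/3)

    r1 = np.array([1.0, 0.0, 0.0])  # seed asymmetric regrets so play moves
    r2 = np.zeros(3)
    avg1 = np.zeros(3)
    for _ in range(100_000):
        s1, s2 = strategy(r1), strategy(r2)
        avg1 += s1
        u1 = A @ s2       # EV of each pure action for player 1
        u2 = -(A.T @ s1)  # same for player 2 (zero-sum)
        r1 += u1 - s1 @ u1  # regret of each action vs the mix actually played
        r2 += u2 - s2 @ u2

    print(avg1 / avg1.sum())  # ~[0.333, 0.333, 0.333] -- the equilibrium

Note that it's the average strategy across iterations, not the final iterate, that converges to equilibrium.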
It's not about multiple equilibria but about (often unintended) collusion.
Examples of that affecting poker games are very well known. One frequently occurring example was discussed in the online community 15-20 years ago: BTN raises in a limit holdem game, SB calls too much, and that hurts both the SB and the BTN while giving equity to the BB.
I don't think you're correct in saying it doesn't affect poker, as people were able to notice and analyze this before solvers existed. It's true, though, that no-limit holdem as played today (two blinds, no ante, deep stacks) is likely not strongly affected by the phenomenon. I don't agree that the Pluribus experiment shows much when it comes to practical play: not enough variety of skill levels, not enough hands, and not enough time for the metagame (people adjusting to how others play) to develop. I do agree pure equilibrium play is most likely not terrible in cash-game NLHE, but that definitely doesn't hold for poker in general.
No-limit holdem has been essentially solved. Pluribus & co. notwithstanding, you just haven't heard of it because the people who have solved it are busy printing money in online poker (yes, I know the sites try to detect bots, and no, they can't detect them all). With stakes this high, academic progress lags the 'actual' state of the art by years.
IIRC it's "solved" for heads-up but not really for multiway pots (3+ players to the flop). I believe in a recent Bart Hanson YouTube video he points out that multiway is not solved.
> Machines have raised the stakes once again. A superhuman poker-playing bot called Pluribus has beaten top human professionals at six-player no-limit Texas hold’em poker, the most popular variant of the game. It is the first time that an artificial-intelligence (AI) program has beaten elite human players at a game with more than two players
It's not solved for multiway in the sense that the optimal move in each situation isn't known, but there are AIs like Pluribus that have superhuman performance.
Those constraints don't seem that strong to me. The algorithm could've just been retrained with different stack sizes if it ended up making a big enough difference.
Hasn't Facebook's Pluribus come pretty close to "solving" no-limit holdem? I don't know a lot about poker so I can't really assess their claim's validity.
The two most popular games (no-limit Texas holdem and pot-limit Omaha) are still unsolved.