As an auction theorist, I'm biased towards putting things in auctions; as an operator who's been through the venture investment process several times, I'm biased in wanting a more efficient capital formation process with less adverse selection (better outcomes for both sides). So I'm doubly biased, but I appreciate why this hasn't been done and how it could be done.
Classical auctions aren't a good fit because they can't handle non-price factors. However, a combinatorial clock auction could handle non-price factors while also improving price discovery. But as Milgrom says (paraphrased), the best auction is the one people use, so the more realistic outcome is what OP describes: running your own (implicit) auction.
Well I do have you to thank for teaching me about combinatorial auctions :).
That said, you can think of the auction in two parts - the first gets the relevant people from the starting line to the actual bidding. The second part is where you actually make decisions. The latter part in this dynamic usually has few enough participants that a founder can run the variables in his/her head.
What would be interesting to see is whether or not more funds would get to the bidding stage with a robust combinatorial auction at the beginning of the whole thing.
Your mention of Futamura projections was a nice reminder of how very mathy/theoretical foundations underpin Nice Things in high-level languages, like Hindley–Milner-inspired type systems and modern optimizing compilers targeting SSA form. Value lattices in CUE [1], another programmable configuration language, also fall into this bucket.
Maelstrom [1], a workbench for learning distributed systems from the creator of Jepsen, includes a simple (model-checked) implementation of Raft and an excellent tutorial on implementing it.
Raft is a simple algorithm, but as others have noted, the original paper includes many correctness details often glossed over in toy implementations. Furthermore, the fallibility of real-world hardware (handling memory/disk corruption and gray failures), the requirements of real-world systems with tight latency SLAs, and the need for things like flexible quorums and dynamic cluster membership make implementing it for production a long and daunting task. The commit histories of etcd and hashicorp/raft (likely the two most battle-tested open source Raft implementations, and ones that still surface correctness bugs on the regular) tell you all you need to know.
The TigerBeetle team talks in detail about the real-world aspects of distributed systems on imperfect hardware/non-abstracted system models, and why they chose Viewstamped Replication, which predates Paxos but looks more like Raft.
> [Viewstamped replication] predates Paxos but looks more like Raft.
Heidi Howard and Richard Mortier’s paper[1] on the topic of Paxos vs Raft has (multi-decree) Paxos and Raft written out in a way that makes it clear that they are very, very close. I’m very far from knowing what consequences (if any) this has for the implementation concerns you state, but the paper is lovely and I wanted to plug it. (There was also a presentation[2], but IMO the text works better when you want to refer back and forth.)
The Viewstamped Replication paper was surprisingly readable - I'd never looked at consensus algorithms before in my life, and I found I could mostly follow it after a couple of reads.
This thread [1] covering the 2020 Prize in Economic Sciences, awarded to Paul Milgrom and Bob Wilson for their improvements to auction theory and inventions of new auction formats, is a good read.
Vickrey auctions are one of many entries in the auction format bestiary, and like all classical formats, they predate both computers and modern auction theory. What's really cool these days (and I'm admittedly biased since it's my space) sits at the intersection of computer science and economics: economic mechanisms and auction formats that use everything from SAT solving and combinatorial optimization to machine learning and function approximation to drive better real-world economic outcomes.
Joshua Bloch did a great post [1] on how nearly all standard library binary searches and mergesorts were broken (as of 2006) due to this exact issue. The punchline is that the bugs started cropping up once people needed (and were able) to sort arrays with more than roughly 2^30 elements, at which point the midpoint computation low + high overflows a signed 32-bit integer.
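A minimal sketch of the failure mode, simulating Java's signed 32-bit arithmetic in Python (the indices are hypothetical but representative of a search deep into a billion-plus element array):

```python
def int32(x):
    """Wrap x the way Java's signed 32-bit int arithmetic would."""
    x &= 0xFFFFFFFF
    return x - (1 << 32) if x >= (1 << 31) else x

# Midpoint computation deep into a search over a ~2^31-element array
low, high = 1 << 30, (1 << 31) - 2

# Broken: low + high exceeds Integer.MAX_VALUE, wraps negative, and
# Java would then throw ArrayIndexOutOfBoundsException on a[mid]
buggy_mid = int32(low + high) // 2

# Fixed: low + (high - low) / 2 never exceeds high, so it can't overflow
fixed_mid = low + (high - low) // 2

print(buggy_mid, fixed_mid)
```

The fixed form is the one Bloch recommends for languages with fixed-width integers; Python's arbitrary-precision ints are immune, which is why the overflow has to be simulated here.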
Imandra [1] has an excellent literate programming walkthrough that hits on the same issues and plugs the holes one by one using formal methods, ultimately proving merge sort correct.
> Congrats on the launch! What are the main benefits of this approach compared to creating additional combo (multi-leg) products on existing exchanges?
Thanks! There are two main differences. For one, combos are (as the name suggests) predefined. That works reasonably well for products like futures and options, where an 80/20 approach covers most needs: combos for structural relationships like different expiries in the crude and Eurodollar complexes, plus packs and bundles designed as standalone financial instruments/hedges. An implied liquidity mechanism can then knock out some basic structural price arbs between combos, resulting in an approximation of a combinatorial auction.
This approach falls apart when the combinations are very general, as they are in the markets for equities, credit, and many of the other assets that trade on screens.
The CLOB/predefined bundle approach also doesn't address substitutability and non-price factors, and dealing with those is key to unlocking Pareto efficiencies.
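For the futures-style case, the implied liquidity idea can be sketched like this (symbols and prices are made up; a real implication engine also handles quantities, multiple generations of implication, and priority rules):

```python
# Hypothetical best bid/ask for two crude oil expiries (outright books)
best = {"CLZ": {"bid": 70.10, "ask": 70.12},
        "CLF": {"bid": 69.80, "ask": 69.83}}

# First-generation implied market for the CLZ/CLF calendar spread.
# Buying the spread means buying CLZ and selling CLF, so the implied
# spread bid is derived from the CLZ bid and the CLF ask, and the
# implied spread ask from the CLZ ask and the CLF bid.
implied = {
    "bid": round(best["CLZ"]["bid"] - best["CLF"]["ask"], 2),
    "ask": round(best["CLZ"]["ask"] - best["CLF"]["bid"], 2),
}
print(implied)
```

Any resting spread order priced through this implied market is a structural arb the mechanism knocks out; doing this for arbitrary, user-defined combinations across thousands of symbols is where the approach stops scaling.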
> There are already a lot of mechanisms in traditional markets that deal with revealing or concealing true demand (e.g. block trades, icebergs, etc)
The problem with block trading venues and other approaches, e.g., conditionals, boils down to incentives. Initiators of block trades are usually going in the same direction, so opportunities for direct interaction/coincidence of wants are rare. And market makers don't want to take large deltas unless they can hedge and/or know the counterparty. The net effect is not much size getting done. Conditionals are a similar story to blocks. They don't have the opportunity cost that a firm block resting on a venue does, but there's information leakage, and the surface area for interaction is still small. Market makers aren't incentivized to provide liquidity, and directional traders behave strategically out of concern over information leakage.
> what's to stop exchanges from (1) creating more common bundles that people want to trade
I'd say that the market has already done this in the form of ETFs and index products, and an entire ecosystem of ETF market making has emerged around it.
> (2) matching them with price-time priority so everyone gets a fair price? Wouldn't the auction model just create wider or locked/crossed markets?
I'm not sure that I follow this part entirely. The uniform-price combinatorial auction that we're running results in everyone getting the same price on a symbol-by-symbol basis. And we view time priority as a bad thing (the arms-race dynamic of time priority has been known to practitioners since markets first started going electronic, but Budish et al. were the first to write about it in detail). Periodic auctions have better fairness and post-trade markouts, both theoretically and in practice, as some of the European venues where batch auctions have made limited inroads have demonstrated.
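As a toy illustration of the uniform-price idea for a single symbol (made-up orders; the real mechanism clears combinatorially across symbols):

```python
def clearing_price(bids, asks):
    """Uniform-price call auction for one symbol: every fill prints at
    the single price that maximizes matched volume. Orders are
    (limit price, quantity) tuples."""
    candidates = sorted({p for p, _ in bids + asks})

    def volume(px):
        demand = sum(q for p, q in bids if p >= px)  # buyers willing at px
        supply = sum(q for p, q in asks if p <= px)  # sellers willing at px
        return min(demand, supply)

    return max(candidates, key=volume)

bids = [(101.0, 10), (100.0, 5)]
asks = [(99.0, 8), (100.0, 4)]
print(clearing_price(bids, asks))  # 100.0 clears 12 shares at one price
```

Because everyone trades at one price per symbol per auction, being first to the queue confers no advantage within a batch, which is what removes the speed-race incentive.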
Agree with the combinations being more ephemeral outside of futures and options markets. I do wonder how much liquidity will exist for these combos, since even some of the popular structural ones don't have a ton of volume.
Regarding the last point, let's say hypothetically you create a market for "+100 FB shares, -500 SNAP shares". If everyone is competing on price to quote that combination, that creates the most competitive market. However, if there are many expressive bids with various conditions (e.g. minimum quantities, conditional on execution of another leg, etc), they may not get "implied" into creating a reasonable market, creating exponentially more arbitrage opportunities if they become locked/crossed. This adds a lot more complexity in calculating implied markets and matching them in a sensible way. With price-time priority, I agree that there are downsides as you mentioned, but it makes it easier to ensure the tightest spreads.
> let's say hypothetically you create a market for "+100 FB shares, -500 SNAP shares". If everyone is competing on price to quote that combination, that creates the most competitive market
Some combinatorial auctions make this tradeoff, packaging goods either to deal with computational limitations or to concentrate bidding on a few packages. It can work if there's near-total consensus on what the economically relevant packages are, but it doesn't work otherwise. US equities is an "otherwise" case given the huge diversity of needs. The chance of someone wanting the opposite side of even a pairs trade at any given point in time is vanishingly small. It's far more likely that the person doing the pair would interact with two or more counterparties who are independently interested in (or, for the right price, willing to) sell FB and buy SNAP (perhaps conditional on hedging). The mechanism design game is more about giving every party the tools they need to communicate their value function to the auctioneer and creating the incentives to bid (close to) truthfully.
> However, if there are many expressive bids with various conditions (e.g. minimum quantities, conditional on execution of another leg, etc), they may not get "implied" into creating a reasonable market, creating exponentially more arbitrage opportunities if they become locked/crossed.
And this is why combinatorial auctions are a global optimization (as opposed to implied markets, which are effectively an iterative, greedy approximation) and, in our case, one that seeks to find uniform clearing prices. There are formats with price discrimination, and others in which mechanical arbitrage within an auction is possible, but ours is not one of them. There isn't a separate price for {A}, {B}, and {A, B}: within each auction there's a uniform price p_a and p_b, and p_{a,b} = p_a + p_b (and so on for any arbitrary linear combination). Theoretical point: linear prices don't always exist (in practice, they do for interdependent-value goods that aren't strongly sub- or superadditive, e.g., capital market goods), and they're not inherently desirable. We chose linear pricing largely because it's a natural fit for how capital markets work now, and perceived fairness/simplicity is itself a valid mechanism design consideration.
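A tiny sketch of what linear, per-symbol pricing implies (the clearing prices here are hypothetical, not from any real auction):

```python
# Hypothetical uniform clearing prices from a single auction
p = {"FB": 200.0, "SNAP": 10.0}

def package_price(legs):
    """Under linear pricing, a package's price is the signed sum of the
    uniform per-symbol prices of its legs: p_{a,b} = p_a + p_b."""
    return sum(qty * p[sym] for sym, qty in legs)

# +100 FB / -500 SNAP costs the same as one package or leg by leg,
# so there's no mechanical arbitrage between packages within an auction
combo = package_price([("FB", 100), ("SNAP", -500)])
split = package_price([("FB", 100)]) + package_price([("SNAP", -500)])
print(combo, split)  # 15000.0 15000.0
```

This is exactly the no-intra-auction-arbitrage property: any way of decomposing a package into legs nets out to the same price.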
> Are you expecting to only support institutional investors or retail order flow as well?
We don't segment the market at all or exclude subscribers (beyond requiring that they're FINRA registered BDs), but the way that PFOF works means that we likely won't see retail order flow from the brokers that wholesale it. We do view Smart Markets as a win/win/win/win for retail customers, brokers, market makers, and regulators alike (cleaner routing, better transparency and price formation, better allocative outcomes, lower technology costs).
> How does the expressive bidding interact with NBBO held orders?
Thanks! We found that the "UX" aspect of fitting squarely into existing workflows (disrupting without disrupting) is key, as is how we message the product. Both our focus on solving speed and Expressive Bids as code are "UX" decisions aimed at slotting us into existing market structure and mind space.
A surprising (to us) takeaway was that making a product in this space sound "vanilla"/undifferentiated is a good thing. Once folks aren't concerned about an initial integration being a lift, they're happy to onboard us as "just another trading venue" (but with a great story about unique liquidity and match quality). Many then get excited about adopting the lowest-hanging fruit incrementally for their specific use cases. And after peeling away a few layers of the onion, they get excited about the future state in which others do the same, and what initially seems like incremental change becomes a market structure transformation.
We studied the history of adoption in other markets where it's gone well (FCC spectrum, display advertising, procurement) and poorly (OptiMark, POSIT4—great attempts, ahead of their time, killed by complexity and subtle mismatches between the mechanism and market participant needs).