The death and life of prediction markets at Google (asteriskmag.com)
268 points by mfro 3 days ago | 172 comments





> Google pioneered many now standard tech practices: on-site cafés, A/B tests, and “dogfooding,” or first releasing new products internally where they can be improved before launching to the public.

Famously, Microsoft and others pioneered dogfooding decades before the events described in this article and approximately a decade (at least) before Google came into existence.

And I’m 99% certain company cafes existed at least a half century before Google invented the concept.


Re dogfooding. I worked at Oracle back in the 90s, first in the team building the company's HR systems (specifically their payroll product), and then in the team building the company's CASE (computer-aided software engineering) products.

The CASE products were intended to automate away coding, and Oracle's HR products, with their gazillions of lines of code, and with the developers located in the same building and with a common senior management team, were the best possible opportunity for dogfooding.

However while I was in the HR product team we never tried in earnest to use CASE (except for the ER modelling tools). There was no real enthusiasm from the CASE team to support us and the tools as they were then fell far short of what was required.

Later, when I joined the CASE team, I learned that their narrative was that the company's ERP products (like HR) were so complex that they were not realistic targets for CASE (* cough cough bullshit * - and perhaps the sort of attitude that doomed the CASE products in the long term).

My learning was that dogfooding is an awesome strategy, but sometimes much harder to embed in a development team than one might think.


> they were not realistic targets for CASE

that's a very interesting tidbit.

It looks to me like the management structure and incentives in this department are too conservative, because failure is seen as bad and is probably punished (somehow - it might not be overt).

Therefore, leadership is incentivized to target realistic use cases, which means simple use cases. This basically "guarantees" success as described by the objective.

This is the same as revenue forecasts being overly conservative, and the market sees through the lies.


Yes. Both teams followed a course that was less ambitious and politically safer for themselves, to the detriment of the company overall.

Ironically the _lack_ of dogfooding GCP products at google is often quoted as one of the reasons AWS beat GCP to defining the Cloud market. Amazon builds AWS on AWS as much as possible, Google has only somewhat recently pushed for this

What I understood is that AWS is more than dogfooding. It is something Amazon first built for themselves, to give more independence to individual teams. And as they noticed it worked well, they realized that they could turn it into a product.

From what I understand as an outsider, Google is much more monolithic. Having a platform where each team can do their own thing independently is not really their culture, so if they build one, it is only for their customers, because they don't work like this internally. Whereas for Amazon, an AWS customer is not that different from one of their own teams.


That’s mostly a marketing myth on the AWS side. As recently as three or four years ago there were _new_ initiatives being built in the legacy “corp” fabric; and even today Amazon has internal tooling that makes use of Native AWS quite different than it is for external customers; particularly around authn/authz.

And that doesn’t even mention the comic “Moving to AWS” platform that technically consumed AWS resources, but was a wholly different developer experience to native.


Now building on AWS inside is heavily emphasized, but just a few years ago most services were built with internal systems that are very different. Some solutions (multi account/cellular architecture for example) seemed to come from dog fooding heavily, but supporting services (like account SSO for handling many accounts) are still very different from the publicly available equivalents.

As someone who worked at AWS it’s ironic how hard they dog food cellular architecture but when it comes to customers, all the offerings and docs are terrible, with the only information in obscure Re:Invent talks or blog posts.

I now work for a large customer and you would be shocked at the household names that basically put all their infrastructure in a single Account and Region. Or they have multi region but it’s basically an afterthought and wouldn’t serve any purpose in a disaster.


Catfooding

I can see why they do it, though. There are a bunch of foundational Google infra technologies that are great for building an IaaS on top of, but which can't themselves be offered as IaaS services for whatever reason.

Let's use Google's Colossus (their datacenter-scale virtual filesystem) as an example. Due to the underlying architecture of Colossus, GCP can turn around and give you:

• GCE shared read-only zonal PDs

• near-instantaneous snapshots for GCE and BigTable

• async and guaranteed-durable logging (for GCE and otherwise) and Queues (as Pub/Sub and otherwise)

• zero-migration autoclassed GCS Objects, and no per-operation slowdown on GCS Buckets as bucket size increases

• BigQuery being entirely serverless (vs e.g. Redshift needing to operate on a provisioned-storage model)

But Google can't just sell you "Colossus as a service" — because Colossus doesn't have a "multitenant with usage-cost-based backpressure to disincentivize misuse" architecture; and you can't add that without destroying the per-operation computational-complexity guarantees that make Colossus what it is. Colossus only works in a basically-trusted environment. (A non-trust-requiring version of Colossus would look like Apple's FoundationDB.)

(And yeah, you could in theory have a "little Colossus" unique to your deployment... but that'd be rather useless, since the datacenter scale of Colossus is rather what makes many of its QoS guarantees possible. Though I suppose it could make sense if you could fund entire GCP datacenters for your own use, ala AWS GovCloud.)


I think Gmail was great initially because of dogfooding. Right now, the incentives are different, and it's more about releasing new stuff. And we can see how that worked with the Google Chat saga.

Lots of other Google products suffer from similar issues because of an apparent lack of dogfooding. I bought a Pixel phone not so long ago and I had to install all updates, one by one, to bring it to the latest Android version. It took several days.


Probably more importantly, doesn't the Amazon store system use AWS? Google has nothing comparable to use for that purpose.

There is search, Adsense, gmail, google docs and Gemini. Do they at least train Gemini on GPUs on GCP?

maps is another big one.

one of the craziest comments i've read on HN. google does a lot of internet things these days, idk if you've been out of the loop for a while

I didn't express it well.

Google's consumer-facing systems all tend to be very focused. Things like search, maps, gmail etc. are not the same kind of system as Amazon's store.

While these systems do presumably give Google something to exercise their cloud systems on, the sense I have (as a longtime user of both GCP and AWS) is that it doesn't give them a realistic sense of what other companies, that don't just sell advertising and consumer data via focused products, do. Amazon's store is more representative of typical businesses in that sense.

Basically, it seems to me that Google Cloud has continually learned lessons the hard way about what customers need, rather than getting that information from its own internal usage.



I had a job about twenty years ago where at some point a sort of freelance tea lady started coming round. Basically had a van full of bacon rolls and a tea urn, and would drive up to office buildings and offer to come in and sell tea and rolls to the employees. The boss agreed, because it made his employees happier and cost him nothing. She made a killing. We got bacon rolls at our desks. It was entrepreneurship at its finest. Silicon Valley could never.

A position that still exists at most football clubs, even the ones worth billions

they have the money to spend, and aren't a political football like with the NHS

There are over 100 professional clubs in England and most of them don't have money to spend, but they all have a tea lady.

Definitely. I remember a class trip to Intel's Jones Farm campus back in the early 90s and they definitely had company cafes. Everything was free.

When I worked at Intel, the only free items in the cafe were coffee and fountain drinks. But maybe they were more generous before the dot-com bust.

Only half? I'd be willing to say that there should be some 19th century examples and we can argue if the food arrangements in the Valley of the Kings count.

I worked at a few tech companies before 2000 that had a company cafe but they all required payment. It was cheaper and closer but not free, and not nearly the same level of quality. Charlie's et al. were all free, and I remember even seeing a TV news spot about the free food at Google specifically, in the months before I started there. I think I'd be okay crediting Google with the free lunch (and the nod towards TANSTAAFL that I suspect it was).

But dogfooding, yeah that had been around for a while. Originally from Alpo, iirc. The first tech company to adopt the term as well as the practice was Microsoft in 1988.

A/B testing existed before Google, but Google's 2000s-era A/B testing and user research were unparalleled until Facebook also started putting serious capital into it. Nowadays it's considered table stakes, but it was revolutionized at the beginning of the millennium. Maybe not entirely by Google, but substantially so, and driven heavily by their product launch review process.


A/B testing was Amazon Weblab.

Google pioneered information retrieval and ranking innovations, not behavior optimization.


(Author here) Yeah, good feedback, I think I overstated this in the article.

It would have been better to say Google popularized these practices rather than pioneered them. Or at least, that these practices became much more widespread among tech companies after Google's IPO in 2004 than beforehand.

I think it's also safe to say that Google's culture was strikingly different from other tech companies of its era, as has been well documented in a few books.


> It would have been better to say Google popularized these practices rather than pioneered them

This also seems incorrect. It was common to have company-provided cafeterias before Google. IBM and Motorola had cafeterias. I don't know when AMD installed their cafeterias, but if it was post-Google, it would've been inspired by IBM and Moto's cafeterias and not Google's. In Austin, the Moto cafeteria was known for having very good food, and IBM's was moderately subsidized and pretty good until the 2010s, which doesn't line up with Google being influential at all. And Centaur had great, free food. This is an old idea that predates tech companies, one that a lot of tech companies picked up and that Google also happened to pick up.

As a term, dogfooding spread through Microsoft after Paul Maritz wrote an email titled "Eating our own Dogfood" in 1988. If the term was popularized by anyone, it was probably Joel Spolsky, who took the practice from Microsoft and blogged about it when he was the most widely read programming blogger. But there are a lot of examples of people doing this before Maritz's email (they just called it something else) and before tech companies even existed; this is another practice that predates tech companies that tech companies picked up.

I don't know about the history of A/B testing in tech, but Capital One was doing A/B tests at scale before they would've been influenced by Google and that's another idea that was used outside of tech.


IBM had cafeterias, but they were not free, and they served standard "cafeteria food" that you might find at a hospital or school of the era. When I was at IBM in the early 2000's, the vast majority of people either brought lunch from home or went out (despite there being nothing within walking distance -- you had to drive 5 or 10 minutes to the nearest options).

As far as I know, Google was one of the first to offer food that was tasty enough, healthy enough, and cheap enough (free!) that nearly everyone ate at the company cafes on a daily basis.


Which campus? That doesn't match my experience in Austin at all, where most people ate the cafeteria even though alternate options were available with a very short drive, and the food was pretty good. Maybe not as good as Google's food at the time, but probably as good as the food at Google the last time I visited. And the food was decently subsidized (I'd eat breakfast there for $2). In Austin, Moto and Centaur were known for having really good food, but IBM's food wasn't bad in the early 2000s. On my team, I think one or two people packed their lunch and everyone else would eat at the cafeteria except on special occasions.

I've heard from people who stayed at IBM that the food declined to cafeteria food quality over the next ten years, which led to the cafeteria basically being abandoned because people ate out so much. But that's actually counter to the narrative in the post — IBM had decent food before Google, and then some time after Google's IPO, the food declined to become standard cafeteria food.


https://en.wikipedia.org/wiki/Charlie_Ayers won a cookoff and made the Googleplex well known for food. Employees brought their kids for dinner. People from other companies angled for invites to lunch. I once realized I was in line behind Vint Cerf (Vice President of Inventing the Internet). Ayers moved on, but I think Building 43 mid-campus still hosts the enormous "Charlie's Café."

For a while, another building was notorious for serving sushi but only admitted their Android developers, because Andy Rubin was paying for that himself.


My experience (not at IBM) is that there was no free food; the quality varied a bit and different groups tended to cluster. At a company I worked at for about 13 years, the engineers and product managers tended to favor the pizzeria, which was run by a local pizza shop.

I've never had free food routinely except at a customer briefing center or some other lunchtime work function. Rarely went out unless it was a short walk. (The brief time I worked in downtown Boston with no cafeteria is pretty much the only time I went out for lunch routinely.)

Per another comment, my sense is that brown bag lunches used to be more common and most people stopped doing that.


> doesn't match my experience in Austin...where most people ate the cafeteria

Some hardcore eaters in Austin.


Kodak had multiple corporate cafeterias with nice food cooked by in house staff. It declined in quality when they switched to a food service company.

It's all about the money. When I worked in the offshore drilling business, we had some better caterers but we were paying a premium for them.

> it was common to have company-provided cafeterias before Google. IBM and Motorola had cafeterias.

I'm not sure that was popular, though (as in something the majority of the population believed in). Grandma in Poducksville almost certainly had no idea. She would have known about Google doing the same, though, as it was blasted all over the news constantly for a while.


Others are taken. Let's give it a credit for popularizing the A/B test ;-) https://www.google.com/search?q=Google%27s+41+shades+of+blue

You linked to the search results of a very Google specific A/B test, this doesn't explain at all why you want to give Google credit for popularizing A/B tests...

So will you submit a correction to the editor?

Edit: I don't want to be harsh on you, but the fundamental problem of credibility, especially in online writing, is that it takes one mistake to lose an amount that takes hundreds of correct decisions in a row to regain…


Me personally, I don't mind people leaving obvious mistakes and lies in an article; it makes it easier to know I shouldn't take what they write as truth. It's a reminder that they couldn't get the most obvious stuff right, so I probably shouldn't believe them in areas that I know nothing about.

I personally like when the article itself is as correct as possible and then there are footnotes or something listing the corrections that have been made. I like to learn about misconceptions, I find them interesting.

I don't see how this relatively minor issue subtracts from the overall credibility of the article. In fact, the way it is presented in the article with footnotes adds to credibility, in my opinion.

The author is active in his HN discussion (user ddp26)

Food cafeterias at the office go back to Henry Ford. Creating a distinction based on how fancy it is, is just the modern day “I invented it!”.

I did work experience at the Tidbinbilla radio telescope facility in...1999 or 2000.

It had an on-site cafeteria, which, aside from the facility being "out in the middle of nowhere", made us all wonder "why?". Why didn't everyone just bring their lunch?

Even at the time, we just explained it as "just a thing that Americans did" and wrote it off as being due to the presence and involvement of NASA.

So it's good to hear that Google invented it sometime later...

/Never listen to tech people commenting on history or economics, lol


I worked out of the SGI campus before it got sold to Google, and I remember the on-site cafe there was amazing. I don't know how much Google changed about it, I've never been.

As a vendor, I (well, my company) had to pay for meals at SGI, I have no idea if the employees got free meals.


I used to hang out with Jeff Dean and he told me that Google and SGI both occupied the googleplex at the same time and the SGI employees looked sad because they had to pay for their meals.

I believe Charlie's (the main onsite cafe) has been renovated a few times although the basic layout was constant throughout my tenure (2007-2019) and in fact if you looked behind the curtains (literally), there was basically the equivalent of an archeological trash heap with generations of Google and SGI documents.

Over time Charlie's got worse and worse; the food quality dropped significantly and became quite monotonous (true for the other cafes as well), and Noname (eventually named Yoshka's) did too. In fact, every great cafe I remember attending was eventually replaced with a worse version of itself.


Didn't you hear, every silicon valley company personally invented everything they ever did. Chamath invented Data Science at Facebook!

When I worked at Electronic Arts in 2003 we had a lovely restaurant on the premises.

I was hoping the article would reflect on the problems with predictions markets, but it's just a dry history.

Crucially, "predictions markets" do not and cannot exist in any real sense. A pure predictions market would be completely isolated, causality-wise, from the event it is trying to predict. But the two are not and cannot be isolated, except for some degenerate cases like trying to guess the output of a true random number generator (and even then I'm not so sure sufficiently motivated people wouldn't try to game the system anyway). This is why we have problems with our current predictions markets, e.g. the stock market (insider trading, etc.) and sports betting (match-fixing, etc.).

Every prediction with a stake is an incentive to alter the outcome of an event. Once the weight of the stake outweighs the resources being used to ensure the impartiality of the outcome, the wheels fully come off the cart and the prediction stops being about the underlying event and starts self-referentially predicting the impact of the prediction itself. The snake eats its own tail and the market becomes useless. You cannot scale up a predictions market without this eventually coming to pass. See also the famous example of how a predictions market for when public figures will die is just an assassination market with extra steps.


(Author here.) I agree with this critique in theory, but not in practice. The stakes don't need to be high to encourage strong participation. Corporate prediction markets have already scaled quite far, and those who have studied them don't find evidence of manipulation.

Can't let perfect be the enemy of the good!

If you want a fuller critique of prediction markets in the corporate setting, see the Dec 2021 article linked near the end [1].

[1] https://forum.effectivealtruism.org/posts/dQhjwHA7LhfE8YpYF/...


From your own article:

> A senior executive saw Prophit give a very low probability that the company would complete the hire of a new senior executive on time (filling the position had been a quarterly objective for the past six quarters). “The betting on this goal was extremely harsh. I am shocked and outraged by the lack of brown-nosing at this company,” the executive said to laughter in a company-wide meeting. But the market was the nudge the execs needed. They subsequently “made some hard decisions” to complete the hire on time.

Indeed. The whole point of the prediction markets espoused is to alter the decisions being made. That means the prediction itself can have an impact on the outcome intentionally or otherwise.


Yes, this example does illustrate this point. As I acknowledge later in the article:

> This turns out to be a general lesson from running a corporate prediction market. Forecasting internal progress, and acting on that information, requires solving complex operational problems and understanding the moral mazes that managers face. Forecasting competitors’ progress has almost none of these problems.

Forecasts on competitors (or, say, regulators) avoid this problem... unless employees are manipulating the outside world too!


Yes it has fewer problems, but not 0. The social network between competitors can be quite tight because the communities involved are so small. So the predictions can be used as social taunts or challenges. Similarly, I can conspire with my friends working for said competitors to game the system to win the prize if the prize is valuable enough.

Basically, betting markets have all the problems and risks of traditional public markets (insider trading) without any of the regulation or ability to enforce the law.


> Indeed. The whole point of the prediction markets espoused is to alter the decisions being made. That means the prediction itself can have an impact on the outcome intentionally or otherwise.

Which is great when the impact of the prediction market is "people making hard decisions" and not so great when they're studiously slowing down the process because they've got a bet on something not happening on time...


(I ran a large forecasting project for a while)

I think one criticism that the linked post misses but the OP article touches on is that most forecasts (whether from prediction markets or superforecasting-style efforts) often predict the wrong things.

> We asked questions of the type “Will Google integrate LLMs into Gmail by Spring 2023?” and “How many parameters will the next LaMDA model have?” Yet what executives would have wanted to know was “Will Microsoft integrate LLMs into Outlook by Spring 2023?” and “How many parameters will the next GPT model have?”

It's really hard to build a prediction market for these things and I'm not sure "forecasting" is the right way of thinking about them.

(Following some links led to https://www.lesswrong.com/posts/uGkRcHqatmPkvpGLq/contra-pap... which has some interesting points too)


> The stakes don't need to be high to encourage strong participation.

OK, but strong participation isn't necessarily a positive for the accuracy of a market. The problem is that a prediction market with lots of participants can be just an outlet for partisans to put forward their opinions. There's a tax on wrong opinions, but if someone is only spending a few bucks, the market won't be a powerful force for changing those opinions.

What you want for a prediction market is for the major participants to be actively researching the problems - expending money and effort to have a well-founded reason for their positions. Markets for random real-world events have the problem that many events don't occur often enough to weed out arbitrary biases, and there may not be any easy or cost-effective way to attain a well-founded opinion on the subject.


> This is why we have problems with our current predictions markets, e.g. the stock market (insider trading, etc.)

A unit of stock represents a legal claim on a company's assets even in the absence of a market for that stock.

In many situations, we do want people to influence the value of their stock. Any company that grants stock is doing so because they expect employees to work harder. There are many cases in which employees with stock grants might make short-sighted decisions for quick profits, but on the whole stock grants make employees into better workers.

> A senior executive saw Prophit give a very low probability that the company would complete the hire of a new senior executive on time (filling the position had been a quarterly objective for the past six quarters). “The betting on this goal was extremely harsh. I am shocked and outraged by the lack of brown-nosing at this company,” the executive said to laughter in a company-wide meeting. But the market was the nudge the execs needed. They subsequently “made some hard decisions” to complete the hire on time.

In this case, the predictions market would've been useless had the executive team not used it in their decision-making.

A pure predictions market would be completely isolated from the event they're trying to predict. But I would say that's not an ideal predictions market.


The anecdote of the Google executive is a lovely example of why the market failed to work as a predictions market. If you had a stake on Google executives continuing to drag their feet on filling that position based on their past performance, and then they saw that prediction and were so incensed that they stopped dragging their feet, then you lost your stake!! The existence of the market changed the outcome; that's the whole point of the objection here! If you believe the purpose of such a market is to affect the real world in such a way, then sure, that's a reasonable stance... but that's not a market for making predictions, that's a distributed market for bidding on contracts.

Your objection assumes that there is an inherent value to accurate predictions, independent of being able to use those predictions to make decisions.

The problem is that there is a misalignment of incentives. In this scenario, the execs wanted the hire done, so seeing the long odds on that happening motivated them to do the hire, and it got done and they were happy. But from the point of view of the bettors, they just lost their bet! The next time they make a bet, they will remember this and avoid betting on markets like this (ones where execs can read the market and change course) and so those markets will have fewer informed bettors.

The execs will then end up with lightly-traded, inaccurate markets on events they control, but probably still reasonably accurate predictions on events that they can't. That is maybe still useful, but it means you will have to think hard about the nature of each market before offering contracts on it, which may not really be easier than just doing the forecasting some other way.


It's like how there's an inherent value to adding numbers together on a calculator. It "just works". Once you have it, you can rely on it to do the math part while you do more complicated things.

A prediction market that is a dance between observer and market and agent doesn't "just work". You can't rely on its predictions because they are conditional on your choices, it's a complex feedback loop.


Yes, that's what a "prediction market" is.

How much money was in this prediction market? I can’t imagine it was that much. The problem here may have been with a market that’s too small to motivate thinking through the issue beyond a simple base rate, not that its large size made influential betters act to change the outcome in order to profit from the market.

> that's a distributed market for bidding on contracts.

Could $10,000 equivalent of the yearly prediction market awards have actually moved a Google executive-level job search if they were spent directly on that process? That's less than a month's salary for a sourcer, recruiter, and interviewer.


I think internal corporate prediction markets bake in the likelihood of this happening.

I think evidence would make this a stronger argument. Most of the people I've known with modest stock grants ignored them, or at least that's what they said -- kind of "maybe it will be worth something, but probably not". For really large grants, I'm sure it's different, but that is a far smaller number of employees.

stock grants don't solve the principal-agent problem at a sufficiently large company. if I have an absolutely incredible year as an IC, I might increase my employer's revenue by a few basis points.

I have plenty of internal metrics to demonstrate my contribution, but nothing I do measurably affects the stock price.


Prediction markets (like the trump vs harris bet on Robinhood) can also be used as a hedge.

E.g. if before the election you think that a certain candidate winning would cause the markets to react in a certain direction, you could "bet" on the other candidate so that if your portfolio value goes down, you earn the proceeds from the bet to recoup some of your portfolio losses. Or if the "good for stock market" candidate wins, you lose the money you bet but the gains in your portfolio balance it out.

In that case, you're not really betting on who you think will win. You're just betting as a hedge just in case that person wins.
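
As a rough worked example (all numbers hypothetical, not tied to any real market), the hedge caps the downside at the cost of the bet regardless of which way the event goes:

    # Hypothetical numbers: you expect a $10,000 portfolio drop if candidate X wins,
    # and the market sells "X wins" contracts at $0.40 per $1 of payout.
    portfolio_hit = 10_000        # expected portfolio loss if X wins
    price = 0.40                  # cost of a contract paying $1 if X wins
    contracts = portfolio_hit     # buy enough payout to cover the expected loss
    cost = contracts * price      # $4,000 paid up front either way

    if_x_wins = -portfolio_hit + contracts * 1 - cost   # -10,000 + 10,000 - 4,000
    if_x_loses = 0 + 0 - cost                            # portfolio fine, bet lost
    print(if_x_wins, if_x_loses)  # -4000.0 -4000.0: downside capped at the hedge cost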


> You're just betting as a hedge just in case that person wins.

But this itself is a form of market distortion. It calls to question what, precisely, people think the market is supposed to be measuring, both in theory and in practice.


> It calls to question what, precisely, people think the market is supposed to be measuring, both in theory and in practice.

I'm starting to think that the answer is what mhh__ wrote: who cares? Markets aren't there to measure anything. Markets are there to make money for participants. Any measurement that can be attributed to the markets under some conditions is, at best, an incidental side effect.


Every transaction on a market affects that market.

Calling some of those effects "distortions" is a tricky business at best.


In a perfect market, the market maker who sells you that option offsets it with correlated assets in the other direction, eg by buying or selling stock that is sensitive to the election.

Large trading firms exist on finding and exploiting small arbitrages between various correlated assets. If you assume a perfect market with infinitely many participants and infinite liquidity, then this “works” - there is no distortion at scale.


Who cares? Should we abolish wheat futures?

The fact that futures markets are so heavily regulated, precisely because of the market failures described above, should aid in understanding why markets do not "predict", they "determine". Have you ever wondered why trading onion futures has been banned since the 50s? https://en.wikipedia.org/wiki/Onion_Futures_Act

That's so strange. What makes other commodities amenable to having futures, but not onions? Or are they going to ban each thing in turn the first time someone corners the market and causes trouble?

Apparently it's onions and box office returns? What weird corner cases. Why not strawberries too?

Can't the onions futures market be regulated the same way as all the others?

If anything, this makes me think all the rules are arbitrary.


It has more to do with the nature of the product – can it be reasonably stored in bulk for a length of time without eroding in quality. This goes for anything physically settled. Look at the agricultural products traded at the CME and you'll see there aren't any markets for perishable products like strawberries.

But is there a law forbidding strawberry futures? The wiki page mentions only onions and box office returns. How can other perishable futures "ban themselves" while onions need to be banned?

They're not banning themselves. There's no market for them. A market will arise based on the market size, product characteristics, etc. Without anyone willing to make markets and trade strawberries, there's no futures market for them. All that to say, there's no need for a law banning something if there's no willing market for it. There was an onions futures market and that's why the law is specific to onions.

What about orange juice futures?

We saw Trading Places.


Lol frozen concentrate

> What makes other commodities amenable to having futures, but not onions?

Nothing (at least for other perishable foodstuff); law often doesn't even in theory have a broad universal theory behind it, but instead responds narrowly to observed or perceived immediate problems.


> The fact that futures markets are so heavily regulated, precisely because of the market failures described above, should aid in understanding why markets do not "predict", they "determine". Have you ever wondered why trading onion futures has been banned since the 50s?

You're saying the answer to the above question is "because there was an immediate problem with onion futures in the 50s". I don't think that's what they meant. That would be unrelated to "the fact that futures markets are so heavily regulated".

I guess if everyone has a different opinion, and every reply comes from a different person, there's no "discussion" as I understand it.


Exactly. Similarly, companies impacted by weather have access to trade "weather futures".

If it's an extremely dry year, you profit from the weather futures instead of your crops (and vice versa). Buying weather futures isn't necessarily a prediction of what you think the weather will be.


How much you're willing to pay for those futures is the prediction.

But, it is a function of what you believe the future will be (and your risk tolerance).

If you have a higher risk tolerance, you will buy fewer futures. If you believe the next year will be dryer than normal, you will buy more futures than normal. If you believe your crop is likely to be better/more reliable than normal, you will buy fewer futures.


> If you believe the next year will be dryer than normal, you will buy more futures than normal.

The point is that you, the farmer, don't need to take a view on whether the next year will be drier than normal. You just buy $X worth of rainfall futures.

The same way you shouldn't buy more flood insurance if you think the next year will be exceptionally wet. You can't really predict that, after all. You should buy flood insurance roughly up to the value of restoring your house after a flood, and you should hope the insurance market is healthy enough that the cheapest provider of that insurance offers you a price that reflects the expected value of the insurance plus a small markup.


> The point is that you, the farmer, don't need to take a view on whether the next year will be drier than normal. You just buy $X worth of rainfall futures.

And I'll reiterate, this is a function of your risk-aversion/efficiency. One would expect, for example, climate change to increase the price of weather futures as extreme/problematic weather events become more likely. It's often difficult to see the impact of these changes on the scale of a single farmer, but in aggregate lots of farmers do make a market.

> You should buy flood insurance roughly up to the value of restoring your house after a flood, and you should hope the insurance market is healthy enough that the cheapest provider of that insurance offers you a price that reflects the expected value of the insurance plus a small markup.

And the insurance companies have a small army of actuaries who make sure that the prices they provide take into account conditions like the relevant risk factors of where your home is. This is instead of a betting market style concept, where you could instead imagine every individual actuary as a potential insurer.


> The point is that you, the farmer, don't need to take a view on whether the next year will be drier than normal. You just buy $X worth of rainfall futures.

Sure, but if I, a non-farmer market player that couldn't give two fucks what the market is even about, can predict that the next year will be dryer than normal, and to what degree, better than anyone, I can make money buying up however many of these futures I can afford. It works even better if I can actually make the weather more dry somehow.

This, I believe, is called "providing liquidity to the market", but curiously, if I tried that with flood insurance, I'd just be guilty of insurance fraud.


> You just buy $X worth of rainfall futures.

The cost of that varies though. If you have to pay $95 to get a $100 payout that’s a very different calculus from $50 for $100.
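
To make that concrete with hypothetical numbers, the contract price sets both the break-even probability and what it costs to hedge a given loss:

    # Hypothetical contracts paying $100 if the event (say, a dry year) happens.
    loss_to_cover = 10_000                    # the loss you want to hedge
    for price in (50, 95):                    # dollars per $100 of payout
        breakeven = price / 100               # belief at which the contract is fair value
        cost = (loss_to_cover / 100) * price  # number of contracts times price
        print(f"price ${price}: break-even probability {breakeven:.0%}, "
              f"hedging ${loss_to_cover:,} costs ${cost:,.0f}")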


>In that case, you're not really betting on who you think will win. You're just betting as a hedge just in case that person wins.

Some people did exactly this back in 2016, and just ended up feeling bad, because they were profiting off a "bad" (in their eyes) event.


With this last election, there were also some big differences across markets, which presents the opportunity for not only hedging, but constructing a win-win set of bets.

Not really. Transaction costs / vig (commonly 5%) and counterparty risk eat up theoretical arbitrage profits.

The main arbitrage opportunity was in finding ways to place bets that were illegal in the bettor's jurisdiction.


I read that some markets had >10% differences, and from what I can tell, polymarket transaction costs are 2% of profit. That said, I'm not sure that all of the listed markets had open financial access, so it stands that there were real reasons the differences were sustained and people didn't do exactly what I said.

[flagged]


This is an idiot's take.

It's a nuanced take, re-parse.

It is literally an "idiot" put, betting on idiocy.


This is the Hacker News community. Let's be constructive and civil. Your comment would have been more interesting and relevant if you had explained why this trading strategy is a bad idea instead of just labeling it as "idiotic".

I'll bite, I guess:

* In deference to Boglehead philosophy, hedging bets is a fool's errand because the idea is that you lose your money the more you touch it. Make a plan, invest, and then hold hold hold, staying the course come hell or high water.

* If you truly want to reduce or eliminate risk, the best way is to simply cash out. A $1 bill will always be a $1 bill with absolute certainty.


As a boglehead... that's just not how it works in the real world. Some people would treat the loss of $X worse than the gain of $X is good. Thus, they don't have linear value for money (no one really does... if you lose 90% of your bank account, you still have dinner tonight; if you lose all of it, you might not).

Some people want to come out neutral or lose a guaranteed small amount, rather than the chance to lose or gain the same amount. I'd pay $5 to avoid having to flip a $10k +/- coin. Thus, if I knew I would lose $10k if X got elected, I could place a $10k bet for Y to win.


> Some people would treat the loss of $X worse than the gain of $X is good

Most people. "Loss aversion refers to a cognitive bias in which the same situation is perceived as worse if it is framed as a loss, rather than a gain" [1].

> I'd pay $5 to avoid having to flip a $10k +/- coin

Risk aversion. Seemingly related, but in fact quite rational.

[1] https://en.wikipedia.org/wiki/Loss_aversion


There are bad hedges and good hedges.

It is commonplace to take various financial positions that limit downside. It is one of the primary uses of options and futures.


>I'll bite, I guess:

> In deference to Boglehead philosophy, hedging bets is a fool's errand

The flagged comment didn't say that hedging was idiotic, but implied that the specific hedge mentioned was idiotic, yet didn't give any reason for calling it such.

>$1 bill will always be a $1 bill with absolute certainty

This is money illusion.


the world can be idiotic far longer than...

My experience from predicting on Metaculus, and following this space for a few years, is that it's hard to operationalize the thing you want to predict precisely. Regularly, things go completely sideways in ways that the author did not foresee, and the market ends up hinging on some technicality, rather than the spirit of the question it tried to predict.

Some examples:

    - A question about asset prices specifies FTX as resolution source, but then FTX stopped existing.
    - There was a question about whether submarine cables in the Red Sea would be destroyed by a hostile act before a certain date. The cables got damaged, but it seemed to be a (suspicious) accident, with very limited independent media coverage.
    - There is a question about whether YouTube would be blocked in a certain country before 2025. It got throttled, to the point where it is unusable in practice, but not technically blocked.
    - There is a question trying to forecast LLM progress. How to quantify that? It chose "What is the state of the art score on the Penn Treebank at a certain date?", which was a standard benchmark at the time. But as LLMs evolved, new benchmarks got developed, and although current LLMs probably score much better on the Penn Treebank than a few years ago, nobody reports Penn Treebank score any more. The question ended up being about the popularity of the benchmark rather than LLM progress.

You make a very good argument against prediction markets in which the stake has actual material value, but what about markets where the only stake is your reputation? My understanding of the Google platforms is that there were prize rewards for top predictors but largely the rewards were social, which, to me, implies they were not taken particularly seriously. An ideal platform might be one where there is no material incentive but users are instead interested in genuine hypothesizing about future events. Obviously there must be a genuine stake involved, but as we see with Manifold this can be done with 'play money'.

Would be very interested to see further discussion about this.


The two problems I identify with this are 1) it underestimates the distorting effect of social rewards (see also karma farmers and serial confabulators on Reddit) and 2) it overlooks how verifiable reputation can be turned into profit by e.g. account-selling (once again, see Reddit).

Very true. At Google it was treated much like Memegen, where top bettors -- like top memers -- were rewarded with clout & notoriety, but it basically stopped there. A lot of the wagers actually were on "inside baseball" topics (will x/y/z be deprecated by a/b/c date, will perf metrics align with targets, etc). There are a few handfuls of folks who are really into it, but the vast majority of use is casual.

> the two are not and cannot be isolated, except for some degenerate cases like trying to guess the output of a true random number generator

You’re describing endogeneity. It’s unlikely prediction markets affect natural disaster odds.


> It’s unlikely prediction markets affect natural disaster odds.

They don't need to. Predictions markets for natural disasters exist, we call them insurance companies, and the concept of moral hazard when it comes to insurance is a well-studied topic: https://en.wikipedia.org/wiki/Moral_hazard


One, insurance happens at insurance companies and in markets. Two, there are prediction markets for natural disasters. Three, moral hazard doesn’t mean insurance makes natural disasters more likely.

The distinction matters both because the precise wording of specific contracts will cause distinctions that were not adequately considered by certain participants, and also because of who enforces the contract at a human level. Who determines what does and does not qualify as a natural disaster? If that entity has a stake in one outcome, or is influenced by an entity with a stake in one outcome, that changes the outcome. Both of these are seen in insurance claims, where homeowners find out that rising rainwater does not actually count as a "flood", and where agents of the insurance company are financially incentivized not to pay out at all if they can help it. These factors do not go away just because you take away the insurance company, you've simply diffused them.

There are some ways to mitigate this. You can make the prize pool big enough that people will be motivated, but small enough that people won't try to sway the outcome. Also, prediction markets can be very useful for making predictions about competitors, because it would be rather difficult to influence what happens at a competing company.

What do "any real sense" and "scale" mean here? I was watching the Betfair odds during the US election, and that market alone had about $500 million of bets pass through it. Is that not real? Is half a billion dollars not sufficiently scaled up?

They mean that once they reach a certain scale they cease to be predictors and start affecting outcomes, not that it is impossible to allow people to bet on things indefinitely.

A "good" prediction market is when it functions as an accountability market. The insider trading problem isn't an issue if the outcome is for a greater good. The problem is that money is the easiest way to ensure anonymous predictors are serious about their predictions and getting richer is not a greater good. If you were to require politicians to participate in a reputational prediction market, the aligned incentives might make it a positive thing.

You could let ordinary people piggyback and participate too, and then use the results to filter through internet/media noise, but that starts to smell too social-score-ish.


I think there are cases where the outcome of the event can be bound to the prediction. Also we should be looking for cases where:

> Every prediction with a stake is an incentive to alter the outcome of an event

...is a feature not a bug.

For instance, imagine if PR's on core infrastructure (like xz-utils, for instance) were first reviewed by a set of trusted maintainers, and--supposing they pass that gate--were later put into a sort of limbo where people can bet on whether they'll be merged. The maintainers then make a policy around betting that the PR will be merged. They're in charge of whether it actually gets merged or not, so most of the time this is a pretty good bet and they just recycle that money by winning and then re-betting it on the next PR.

Of course third parties can also bet in the same way--these would be stakeholders which are not maintainers, but who wish to "sweeten the pot" and encourage people to spend enough time with the pending code that they might find a reason to "bet against the house". No prior coordination between the maintainers and the stakeholders is necessary.

Suppose I notice some malicious code in one of these commits which is in the betting phase. (Maybe I do my own testing on pre-release versions as a stakeholder, or maybe I'm a bug bounty hunter.) I can bet that the PR won't be merged. The monetary value of that bet makes it clear that even though I'm a stranger, I'm not a spammer. That "buys" me the attention of the maintainers, and if I'm right about the code being malicious, the maintainers will decide not to merge the commit: I'll have successfully altered the course of events (bad commit doesn't get merged) such that I get paid for having been right.


How does a prediction market for, say, hurricanes or earthquakes affect the outcome of those events?

An event like a pandemic could be not classified as a pandemic by an authority in order to avoid paying out compensation on some insurance policies. It is not an earthquake, but it is a similar example, I believe.

Prediction markets work via resolution criteria, and resolution is a human action. So the moment you operationalize any question, prediction markets can affect the outcome. To give an extreme example, when employees of the FHA bet on "Florida hurricane with 100+ casualties in 2024", they don't need to invent a weather control device to make that happen.

You could as easily say that journalism is an attempt to influence events. And it is, in a way! I think most journalists would be happy to point to situations where their reporting was read by influential people and maybe even changed outcomes. Pulitzer prizes are awarded for that sort of thing.

The question is whether this is improper influence? Is it too influential, for bad reasons?

I think a more reasonable critique of prediction markets is that it's guesswork laundering. We are given a number, but we don't know why that number was chosen. How can we tell whether it's justified?

When markets move, there is a whole industry of people coming up with explanations of why it might have happened - more guesswork!

A lucky guess can be helpful if it can be verified, but knowledge should be about more than making guesses. Sharing evidence is important.


Lack of isolation isn't necessarily an insurmountable problem, as the market can still forecast, even if it is sometimes a self-fulfilling prophecy. Similarly, this distortion does not necessarily overshadow the utility - e.g. the stock market may have insider trading, but it is still an incredibly useful financial tool.

The key thing to remember is that these markets can have utility beyond the zero-sum betting aspect. For example, the stock market isn't just gamblers betting against each other; it is also a tool for auctioning corporate ownership and raising corporate funds.


I've spent some time looking at group decision making for specific situations and there are clearly some examples where they work very well--and I think you hit on one of the factors. The individual guesses don't affect the outcome and aren't being used to hedge anything. I think there are some other factors as well but I'm not sure I've ever seen a particularly persuasive framework for when prediction markets can work and when they don't.

Great comment, I was fairly horrified to open Robinhood and be faced with a prediction market for the US election. I somehow missed completely that this has been legalized: https://amp.cnn.com/cnn/2024/10/02/business/appeals-court-al...

A market can still be well calibrated even if it has a strong causal influence on the topics it’s predicting.

the same dynamic is true of insurance, credit default swaps, and lots of other financial products. this causes trouble from time to time but can be managed if the products serve a useful purpose.

> See also the famous example of how a predictions market for when public figures will die is just an assassination market with extra steps.

Why not just write it so that non-natural causes of death don't pay out? More generally, make it so you can't wager on illegal events / outcomes, or ones where a crime materially affected the outcome. Anyway, if someone bets big on a public figure being assassinated, and then that public figure gets assassinated, it would seem like a good place to start investigating would be to look at the people who made a lot of money from that bet.


"why not just" because it destroys the utility.

There were 2-3 assassination attempts on Trump this year.

It would be valuable to incentivize people to share knowledge of assassination vulnerabilities, to guide security efforts. Banning that defeats the purpose.


In other words, money is speech.

Google’s success wasn’t driven solely by its whimsical culture; the rapid growth of the market played a significant role as well!

> Some questions aimed to predict its competitors' next moves, such as “Will Apple launch a computer based on Intel's Power PC chip?”

Is there a point spread on journalistic accuracy? How do I take the under?


You should make your criticism explicit; just making a joke like this isn't particularly helpful.

To make explicit what I'm assuming is your point, this statement from the article can't be correct because A. POWER is by IBM, not Intel and B. Apple had already launched such a computer back in 1994, before Google existed.

Maybe x86 was meant instead?


the market price gives you a number between 0 and 1, which should move higher as the event becomes more likely, so it's pretty useful.

however interpreting it as a probability, or an average of agent beliefs, or anything like that, seems tricky. i assume these internal markets are not deep and liquid enough that you can just throw up your hands and say "EMH". it works if you assume risk-neutral traders who will just trade up to their correct price, but as I understand it, it breaks down with realistic traders who may have limited capital and are usually somewhat risk-averse.

i wonder how these prediction markets dealt with that. was there any postprocessing of market prices to get final probabilities? based on interviews with traders or observed trading behavior, did the traders behave in such a way that the market price could be interpreted as "pretty much" just a probability?


(Author here.) This is generally checked via calibration charts, e.g. bucketing markets at various points in time into 0-5%, 5-10%, etc; then counting how often the underlying events actually happen. The more they match, the more it's reasonable to interpret the market prices as probabilities.

Google published [1] one such calibration chart on its current prediction market in late 2021. Also, the 2009 paper in the article [2] on Google's first prediction market published one too.

[1] https://cloud.google.com/blog/topics/solutions-how-tos/desig... [2] https://static.googleusercontent.com/media/services.google.c...
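
For illustration, a minimal sketch of that bucketing check in Python (hypothetical data; each market is reduced to a price at some snapshot time and a 0/1 outcome):

    # Bucket markets by price and compare each bucket's average price to the
    # observed frequency of the event, as in the calibration charts described above.
    from collections import defaultdict

    def calibration_table(markets, n_buckets=20):
        """markets: list of (price, outcome) pairs, price in [0, 1], outcome 0 or 1."""
        buckets = defaultdict(list)
        for price, outcome in markets:
            idx = min(int(price * n_buckets), n_buckets - 1)   # 0-5%, 5-10%, ...
            buckets[idx].append((price, outcome))
        rows = []
        for idx in sorted(buckets):
            group = buckets[idx]
            avg_price = sum(p for p, _ in group) / len(group)
            hit_rate = sum(o for _, o in group) / len(group)
            rows.append((idx / n_buckets, (idx + 1) / n_buckets, len(group), avg_price, hit_rate))
        return rows

    # Hypothetical sample: a well-calibrated market has hit_rate close to avg_price.
    sample = [(0.07, 0), (0.08, 0), (0.12, 0), (0.55, 1), (0.62, 1), (0.91, 1)]
    for low, high, n, avg_price, hit_rate in calibration_table(sample):
        print(f"{low:.2f}-{high:.2f}  n={n}  avg_price={avg_price:.2f}  hit_rate={hit_rate:.2f}")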


thanks for the response! those calibration charts don't look too bad. that's reassuring.

Even if you assume perfectly spherical rational traders with equal fixed budgets, prediction market prices aren't guaranteed to converge to the average belief: https://quantian.substack.com/p/market-prices-are-not-probab...

You cannot really do postprocessing to the market price to get the average belief back out, because the bounds aren't very tight: a market price of 50% could correspond to an average belief anywhere between 29% and 71%.
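
A rough simulation of one simple case makes the gap visible (risk-neutral traders with equal fixed budgets and beliefs drawn from an arbitrary skewed distribution; this is an assumption-laden sketch, not the linked post's derivation or its exact bounds):

    # Each trader puts their whole budget on whichever side of a binary contract
    # looks underpriced to them; bisect for the price where YES and NO share demand
    # balance, then compare that clearing price to the average belief.
    import random

    random.seed(0)
    beliefs = [random.betavariate(2, 5) for _ in range(10_000)]   # skewed beliefs
    budget = 1.0

    def excess_yes_demand(q):
        yes_shares = sum(budget / q for p in beliefs if p > q)
        no_shares = sum(budget / (1 - q) for p in beliefs if p < q)
        return yes_shares - no_shares

    lo, hi = 1e-6, 1 - 1e-6
    for _ in range(50):                   # bisection for the clearing price
        mid = (lo + hi) / 2
        if excess_yes_demand(mid) > 0:
            lo = mid                      # too cheap: excess demand for YES
        else:
            hi = mid

    print(f"average belief: {sum(beliefs) / len(beliefs):.3f}")
    print(f"clearing price: {(lo + hi) / 2:.3f}")   # noticeably higher than the average belief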


Great writeup. I think prediction markets will break through when they figure out how to capitalize not only on collective judgment, but super-forecasting.

As currently formulated, prediction market outputs are just a fancy opinion poll, where participants have some incentive for accuracy. To rise above the simple wisdom of the crowds, you would want to identify the subset of market participants that are constantly beating the market (because they have a more accurate mental model of the world). I think this necessitates both 1) long term tracking of bets and 2) likely withholding individual positions from the market to prevent follower effects.
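
For what the first part might look like, here is a minimal sketch (hypothetical data and thresholds, not any platform's actual method): track each participant's resolved forecasts and rank them by Brier score.

    # Rank participants by mean Brier score over resolved forecasts (lower is better),
    # keeping only those with enough of a track record to be meaningful.
    from collections import defaultdict

    def rank_forecasters(records, min_resolved=20):
        """records: list of (trader_id, forecast_probability, outcome) tuples."""
        per_trader = defaultdict(list)
        for trader, prob, outcome in records:
            per_trader[trader].append((prob - outcome) ** 2)   # Brier score per forecast
        ranked = [
            (trader, sum(scores) / len(scores), len(scores))
            for trader, scores in per_trader.items()
            if len(scores) >= min_resolved
        ]
        return sorted(ranked, key=lambda row: row[1])          # best first

    # e.g. records = [("alice", 0.9, 1), ("bob", 0.4, 1), ...]
    # for trader, brier, n in rank_forecasters(records):
    #     print(trader, round(brier, 3), n)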

Similar to the title article, this raises the question of who the ultimate customer of a prediction market is. Individuals can be incentivized to bet by winnings, but who else is the customer for the aggregated data?

I wonder about the extent to which current prediction markets have internal outputs and derivative statistics, and what they might do with it.

If polymarket or similar companies put Trump vs Harris at 55-45, do they have internal statistics that put the race at 80-20 among their most accurate bettors? Was this data for sale?


(Author here).

> To rise above the simple wisdom of the crowds, you would want to identify the subset of market participants that are constantly beating the market (because they have a more accurate mental model of the world).

Identifying, and then working with, the top traders at Google (including one card-carrying superforecaster) was a great joy.

And yes, they're sitting on some great data, on what the employee crowd tends to get right and wrong, who individually is good at forecasting what. Though one complication is that being a great trader is not the same as being a great forecaster.


Great piece, if you write more would love to read your thoughts on what makes a great trader vs. what makes a great forecaster.

I second that!

To the author: if you have a personal blog, can you post that here?


That's very kind of you to ask.

My personal blog is defunct (for now!). But some of my recent writings can be found on the research page [1] of my startup, FutureSearch. We're building an AI that can forecast accurately.

We've written some pieces on topics like the problems with using crowds to forecast, and contesting recent papers' claims of good forecasts coming from simple LLMs.

[1] https://futuresearch.ai/reports


Thanks, this sounds so interesting and timely.

There are some interesting jupyter to blog tools like quarto.org. Or, my Svelte based blogging tool: svekyll.com (I use it to blog about AI/ML because Svelte is the best visualization front end tool).

It's a great time for you to start blogging again!


Markets are what you described. Participants that regularly beat the market are rewarded with more money (and confidence), which lets them bet larger size and have more impact on the market.

Uninformed bets should wash out as noise, and informed bettors should reverse uninformed moves so long as they are profitable.


The distinction that I am drawing is the ability to use the betting market to generate forecasts with greater accuracy than the market itself.

The stock market analogy would be the predictions you could make as an individual if you knew the internal limits and assessments of the best trading firms, and not just current market prices.

If I could pay the NYSE for real-time trading info on the buy/sell limits from Warren Buffett and other whales, I would.


Many sports gambling companies do this, weighing the bets of "sharps" (people who are more accurate than the average) heavier than other bets. A good example of this was Mayweather vs McGregor where a lot of sharps were betting on Mayweather whereas the public was betting more on McGregor. Even with about 80% of people betting on McGregor, the house still had Mayweather as a favorite.
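
A toy illustration of that weighting idea (the weights and dollar figures are invented; real books would set them from each bettor's track record):

    # Blend "sharp" and "public" money with different weights when forming
    # an internal probability estimate. All numbers are made up.
    sharp_yes, sharp_total = 70_000, 100_000     # 70% of sharp money on YES
    public_yes, public_total = 20_000, 100_000   # 20% of public money on YES

    SHARP_WEIGHT = 4.0   # assumed: the house trusts sharp money 4x as much

    raw = (sharp_yes + public_yes) / (sharp_total + public_total)
    weighted = (SHARP_WEIGHT * sharp_yes + public_yes) / \
               (SHARP_WEIGHT * sharp_total + public_total)

    print(f"raw money split:         {raw:.0%} YES")
    print(f"sharp-weighted estimate: {weighted:.0%} YES")

Even with most of the money on one side, a heavily weighted minority of sharp money can pull the house's internal estimate the other way.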

That's a different phenomenon than what I was discussing, but a really interesting one. Nate Silver discussed the topic with Tyler Cowen and said many platforms ban sharps, but then look to other platforms that allow them in order to benchmark their own odds.

I think the analogous situation to what I was proposing would be if a platform had open betting and organic odds, but sold the sharp betting data to 3rd parties.


The problem is that if a prediction market exposed the fact that 80% of their most accurate bettors are betting for YES, then wouldn't that skew the rest of the market? If I'm on there and I think the chances are about 50/50, but I see that the super-bettors are mostly saying YES, I'm probably going to vote YES too.

I flagged that as implication #2. They would have to keep the market participants in the dark about it, and find a confidential use for the private forecast.

If Polymarket had an internal 80-20 model based on super-forecasters, they wouldn't disclose that, but they might have paying customers for it outside the prediction market itself. For example, stock traders might pay for the private model.

If I could pay the NYSE for real-time trading info on the buy/sell limits from Warren Buffett and other whales, I would.


There is a conflict of interest between the information prerogative, which requires facilitating the law of large numbers and favours stability, and the profitability of gambling, which favours encouraging larger bets and volatility.

My point of curiosity is if these prerogatives can (or have been) reconciled.

To that end, I'm not sure that these factors are mutually exclusive and would like to hear more of your thoughts.

If I understand you correctly, I would think that a market could have both volume and depth.

With respect to volatility-stability, what do you see as the drivers there? Is it that gamblers would need a significant upside to drive betting? Is this solved by the size of the bet? I suppose there is an internal conflict: if the line of a bet is 49-51% on a binary outcome, the risk of ruin is high and the upside is low. You would need to aggregate outcomes over many distinct events to mitigate that (see the toy sketch below).

I suppose this could hang on ability/inclination of professional forecasters to research and take several positions.
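
The toy sketch referenced above (all parameters invented): a bettor with a 51% true probability on a contract priced at 50 cents repeatedly stakes a fixed fraction of bankroll.

    # Busting is common at large stake sizes despite the positive expected
    # value; small stakes spread over many bets rarely bust.
    import random

    def ruin_rate(stake_frac, n_bets=200, trials=2_000, p_true=0.51, price=0.50):
        ruined = 0
        for _ in range(trials):
            bankroll = 1.0
            for _ in range(n_bets):
                stake = bankroll * stake_frac
                if random.random() < p_true:
                    bankroll += stake * (1 - price) / price   # YES pays $1
                else:
                    bankroll -= stake
                if bankroll < 0.01:                           # effectively ruined
                    ruined += 1
                    break
        return ruined / trials

    random.seed(1)
    for frac in (0.5, 0.1, 0.02):
        print(f"staking {frac:.0%} of bankroll per bet -> ruin rate {ruin_rate(frac):.1%}")

The tiny edge only becomes reliable when it is spread thinly across many independent bets, which is exactly the "aggregate over many distinct events" point above.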


> With respect to volatility-stability, what do you see as the drivers there?

Given two prediction markets, one which varies wildly and one which is stable, the former will attract more users. Particularly the most profitable ones. Even if the underlying odds are unchanging, the volatile market is more “fit.”


I understand the claim, just not the reasoning for why that is the case. Is this driven by human psychology or economics of return? Why is a volatile market more fit?

> Why is a volatile market more fit?

More people get to feel like winners for longer [1]. And reward uncertainty makes gambling more addictive [2].

For purposes of information discovery, volatility is bad. But for purposes of gambling, volatility is good. Running an information-discovery (or financial) platform is less profitable than running a gambling platform. Ergo, operators will optimise their prediction markets for gamblers.

[1] https://pmc.ncbi.nlm.nih.gov/articles/PMC10562822/

[2] https://pmc.ncbi.nlm.nih.gov/articles/PMC3845016/


Thanks for clarifying. That line of thinking doesn't mean that a discovery platform can't exist. Even if one grants that gambling is more profitable than prediction, markets segment, saturate, and specialize all the time.

> Even if one grants that gambling is more profitable than prediction, markets segment, saturate, and specialize all the time

It's difficult to see the niche for the academic market.

More profit to the gambling platforms means more money for R&D, customer service, user retention and marketing. That means more liquidity. Gamblers mean dumb money, which in turn attracts the smart money: if you're commissioning private polling to place more informed bets [1], you want to place a big bet against dumb money.

[1] https://www.bloomberg.com/opinion/articles/2024-11-07/predic...


Google, can’t live with them, can’t do much online without them

Google hosted multiple internal gambling platforms for employees, and HR, compliance, etc. were okay with that?

(Author here) It's complicated :-). I will say: it's hard to define "gambling" in the corporate setting.

Is it gambling to do an extra project in hope of getting a "Spot Bonus" (~$250-$1000) in recognition? That was a very standardized process at Google.

Is it gambling to file a patent application, which if approved, would lead to a $1000 bonus and a trophy you'd often see on the desks of Google Brain researchers?

Is it gambling to decline auto-sale of RSUs, and have your compensation determined more by movements in GOOG than in your cash salary?


1. No, Google isn’t taking actual money bets from their employees.

2. No, Google isn’t taking actual money bets from their employees.

3. What employees do with their vested RSUs isn’t at all the same as hosting an internal gambling platform. Once they are vested, employees own the stock, they can do whatever they want. One could sell them all and use the money to bet on roulette, which I think is obviously gambling, but that’s also obviously not Google hosting a gambling platform internally?

I’m just surprised that Google hosting a platform where employees gambled was… allowed? This isn’t a moral judgement, I’m surprised that Google was operating a gambling platform internally and legal etc. thought that was a good idea.


I think you are making a lot of assumptions about how the program was run. For example, are you assuming that the employees were betting their own money or exposed to any downside at all?

I assume they were given free credit for the system, and had the chance to turn it into cash bonus if they won.

It is closer to a company giving out a prize for the winner of a free fantasy sports league.


I'm not even sure there was any way to turn credits won into anything fungible. Everyone was given a small amount of credit to start, and it was perfectly allowable to go negative... Afaict, it was almost more of a "long bets" type sentiment platform where employees could "vote" (with their bets) on one side of a wager or another -- where many of the wagers dealt with inside baseball topics.

Outside of this Google case, that's what prediction markets are: they're for gambling.

The Google case turns out to not be real money, but it’s weird that the article never said that, and it’s weird that the author responded with three points of whataboutism instead of just saying “it wasn’t real money”.

If someone wrote an article that Google set up a roulette wheel in the microkitchen and employees “bet” on it, yes I’d assume they were running an actual casino and find that weird too!


Semi-related: I had a dream about a law firm that specializes in suing casinos that prey on gambling addicts (in a way that violates the law), and has to make sure that their constant interaction with the gaming world doesn’t re-trigger their clients into old patterns.

Then it occurred to me that there are a lot of major lawsuits that depend on litigation finance, someone to front the cost of the lawsuit in return for a share of the winnings. So you could have a situation where gambling addicts are forced to resort to “their old ways” to get it off the ground!

(And leading to a paradox where, if they can bet on this case in a controlled way … maybe they really weren’t addicted the whole time? Like in the paradox about the lawyer who sues over getting a bad legal education.)


They don't use real money. It's no different than having a poker night with Monopoly money and the winner gets a prize.

Not quite. From the article:

> As Prophit had done, I got approval to pay out valuable prizes to complement the play-money leaderboards.

Some traders won things like iPads and other rewards that even highly paid tech employees considered valuable.


I think the point is no money is at risk. I.e. you can't lose money, you can only gain rewards/gifts/cash/etc. That's not strictly gambling.

Yeah, that's right. Though one could fantasize about a corporate prediction market where people are betting part of their yearly bonuses...

Well, that explains it. I didn’t see it mentioned anywhere in the article that this wasn’t real money, but maybe I missed it. I don’t think there’s any legal risk then.

Yeah, I don't think that was mentioned anywhere -- I assumed it was real money. That should really be clarified!

It's less of a gambling platform than a way for people to crowdsource information in an unconventional way.

> as of August 2024, the team continues to refine its approach to make Gleangen a useful source of information for Google senior management.

And it is apparently now officially a way to get information to Google senior management.


Polymarket is the most successful prediction market imo

Metaculus and Manifold Markets are both "purer" alternatives


