
Algorithmic Pricing and Competition: The Small-Town Gas Station Example - frgtpsswrdlame
http://conversableeconomist.blogspot.com/2017/05/algorithmic-pricing-and-competition.html
======
vmarsy
Very interesting article

> Finally, and most critically, there is complete price transparency because
> everybody can see the prices everyone else charges just by looking at those
> big signs. If we take away any one of those facts, the whole thing will
> generally fall apart on its own. For example, if firms could somehow
> secretly discount and steal market share from their rivals, they have a
> significant incentive to do that and so on. ...

This is why many big retailers use consumers as watchdogs, typically with
offers like "we guarantee you the lowest price: if you find it cheaper
elsewhere, we'll give you back the difference times 2!" Consumers think the
retailer has their back, when in fact the guarantee deters other retailers
from lowering their prices, since the guaranteeing retailer keeps its market
share anyway. Consumers should not tolerate that bullshit: they should realize
they're being used to rat out the cheaper retailer and just go with the
lowest-priced retailer in the first place... but there's a good incentive to
collect the difference * 2 instead.
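The incentive structure behind the guarantee can be sketched with made-up
numbers (all prices and the 2x refund rule below are hypothetical, just to
show why undercutting buys the rival no market share):

```python
# Hypothetical numbers: Retailer A sells at 100 with a "refund 2x the
# difference" guarantee. Retailer B considers undercutting to 90.
price_a, price_b = 100, 90

# Without the guarantee, B's lower price wins price-sensitive shoppers.
# With the guarantee, an informed shopper buys from A at 100 and claims
# 2 * (100 - 90) = 20 back, paying an effective 80, cheaper than B's 90.
refund = 2 * (price_a - price_b)
effective_price_at_a = price_a - refund

print(effective_price_at_a)            # 80: informed shoppers stay with A
print(effective_price_at_a < price_b)  # True: B gains nothing by cutting to 90
```

So the deeper B cuts, the cheaper A effectively becomes for any shopper who
invokes the guarantee, which is exactly why B has little reason to cut at all.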

~~~
daurnimator
That only works on goods with a high margin. Often, when a company does that
with a product, a competitor will notice and start selling the product at
cost, causing the guaranteeing company to lose money on every matched deal. Of
course, some then take that hit as a loss leader, but that only works if the
products are cheap. That's why you'll often see "we guarantee you the lowest
price if you find it cheaper elsewhere", followed by "excluding Apple
products, excluding online stores", etc. I think most consumers notice this
and end up disregarding the "price guarantee". However, some don't, and I have
a hunch those might be the most profitable customers anyway...

------
Unkechaug
> My guess is that at some point there will be an antitrust case featuring the
> "algorithm defense," which basically says: "Hey, I just set up the smart
> learning algorithm and let it run. How could I know that it would interact
> with other smart learning algorithms in a way that led to collusion?" And
> the antitrust authorities (or other law enforcement) will need to argue that
> when a guy named Bob sets up and signs off on an algorithm, Bob needs to be
> personally responsible for what that algorithm does.

How do you guys think this is going to be handled? It seems very complicated,
because how are we even going to know which algorithm is at fault, or even IF
any can be conclusively determined to be at fault?

Furthermore, is he talking about Bob the implementer or Bob the approver? In
many organizations the people actually writing the code are not the ones who
determined the specs and asked for it. Bob the implementer may have been given
a limited spec that he followed, unaware that an outside influence could
manipulate the results. At that point, who holds the responsibility?

Maybe it's just me, but recently I have noticed increased discussion of ethics
in tech. There seems to be growing awareness that all the focus on "how" to do
something has eclipsed the fundamental question of "whether" we should do it.
This isn't to say we should shy away from learning algorithms and complex
systems, but maybe we should put more effort into discussing the outcomes, and
how we agree to handle them, before charging full steam ahead into uncharted
territory.

~~~
dogruck
What about a more socially sensitive domain, such as college admissions,
hiring, or setting pay? What if you put such an algorithm on one of those
tasks -- to avoid human bias -- and later observe that the algo, say, never
hires <pick your group>?

~~~
maksimum
Instead of Institutional Review Boards approving experiments we could have
Algorithm Validation Boards. Hold out a portion of your dataset for
validation, and have the algorithm designers examine the outcomes _in
communication_ with ethicists, lawyers, and business strategists.

I don't think the two sides can properly "validate" algorithms without
communicating. The algorithm designer can maximize AUC, but what if 95% of one
<group>'s examples carry label A? The model may then simply always predict A
for that <group>. How bad is ALWAYS missing the 5% with label B? If you can
put a price on it, the developer can build that cost into the algorithm. But
if the price is difficult to estimate accurately, or if non-monetary qualities
are desirable, it may be hard to build them into the classifier ahead of time.
On the other hand, if perfectly satisfying <hard to quantify criterion>
reduces AUC significantly, the algorithm designers need to communicate that...
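The failure mode described above is easy to reproduce on toy data (the 95/5
split and the degenerate always-predict-A model below are invented for
illustration):

```python
# Toy illustration: a model that always predicts the majority label for a
# group can look accurate on an aggregate metric while missing every
# minority-label case in that group.
labels = ["A"] * 95 + ["B"] * 5   # 95% of <group>'s examples are label A
preds  = ["A"] * 100              # degenerate model: always predict A

accuracy = sum(p == y for p, y in zip(preds, labels)) / len(labels)
recall_b = sum(p == y for p, y in zip(preds, labels) if y == "B") / 5

print(accuracy)  # 0.95: looks great as a single aggregate number
print(recall_b)  # 0.0:  yet every label-B case in the group is missed
```

This is why per-group, per-label metrics (not just one aggregate score) are
the thing the designers and the ethicists would need to look at together.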

------
steventhedev
I enjoy the small-town gas station analogy. But I think it falls short in
describing the problem of price segmentation, where a company offers different
prices to different customers. Some countries have gone quite far in
regulating sales in response to this, because firms would set prices extremely
high and then offer a perpetual "sale" (furniture and cosmetics are especially
bad about this).

What worries me is when marketing signals are used to raise prices for me
above what they should be. Airlines do this when you are searching for
flights, based on browser, OS, and other signals.

However, if we ban price segmentation altogether, then we risk preventing
businesses from offering sales, which serve some business and social utility
(to borrow the term from the speech). But it's worth spending time thinking
about what is ok and what isn't. Chances are that some businesses are already
doing what isn't ok.

------
jogundas
I find it interesting that conditions unfavorable to the buyers can arise in
various ways without explicit communications between the sellers. For example,
this happens if the sellers update prices faster than the buyers can compare
them: [https://phys.org/news/2012-06-high-gas-prices-self-organized...](https://phys.org/news/2012-06-high-gas-prices-self-organized-cartel.html)

------
nwrk
Personalized pricing, and how it works in retail in the era of face
recognition, mobiles, web tracking, loyalty cards, data mining, etc., is
described in great detail in the book below.

Recommended reading for anyone:

The Aisles Have Eyes: How Retailers Track Your Shopping, Strip Your Privacy

[0] [https://www.amazon.com/Aisles-Have-Eyes-Retailers-Shopping/d...](https://www.amazon.com/Aisles-Have-Eyes-Retailers-Shopping/dp/0300212194)

------
codeflo
The gas station scenario has a lot in common with the iterated prisoner's
dilemma; "defecting" here means lowering the price. In the iterated game, the
rational strategy for the owners is to cooperate (and retaliate if someone
defects). In that case, the effects of price collusion can appear without any
actual collusion.

In reality, human beings rarely implement a game-theoretically optimal
strategy, but algorithms might.
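A minimal sketch of this, with an invented payoff matrix: two stations play an
iterated pricing game where "cooperate" means keeping the price high and
"defect" means cutting it. Tit-for-tat copies the opponent's previous move,
and two tit-for-tat players lock into permanent cooperation, i.e. permanently
high prices:

```python
# Invented per-round payoffs (my profit, their profit) for an iterated
# pricing game framed as a prisoner's dilemma.
PAYOFF = {
    ("high", "high"): (3, 3),  # both keep prices up
    ("high", "low"):  (0, 5),  # I get undercut
    ("low",  "high"): (5, 0),  # I undercut them
    ("low",  "low"):  (1, 1),  # price war
}

def tit_for_tat(opponent_history):
    # Cooperate first, then copy the opponent's previous move.
    return opponent_history[-1] if opponent_history else "high"

def play(rounds=10):
    hist_a, hist_b = [], []
    profit_a = profit_b = 0
    for _ in range(rounds):
        a = tit_for_tat(hist_b)
        b = tit_for_tat(hist_a)
        pa, pb = PAYOFF[(a, b)]
        profit_a += pa
        profit_b += pb
        hist_a.append(a)
        hist_b.append(b)
    return profit_a, profit_b

print(play())  # (30, 30): neither side ever cuts the price
```

Neither player ever "agrees" to anything, yet both end up at the cooperative
(high-price) outcome every round.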

~~~
colechristensen
With algorithms you can "actually" collude without any humans conspiring with
one another.

If pricing is public, humans (or algorithms) can reverse-engineer competitors
from the public pricing data; the communication channel for the collusion _is_
the pricing data. Multiple parties independently playing the cooperative
strategy of the prisoner's dilemma, each secretly reverse-engineering the
others' pricing algorithms, is exactly the same as colluding in a smoke-filled
room.
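A sketch of how the price signal alone can serve as that channel (the response
rule and the cap are made up): each bot observes only the rival's publicly
posted price, matches any undercut, and otherwise nudges its own price one
tick upward. No message ever passes between them, yet both ratchet up to the
cap:

```python
CAP = 20  # hypothetical ceiling on what the market will bear

def respond(my_price, rival_price):
    # Each bot sees only the rival's public price.
    if rival_price < my_price:
        return rival_price             # punish undercutting by matching
    return min(my_price + 1, CAP)      # otherwise nudge the price upward

a, b = 10, 10
for _ in range(30):                    # repeated play, public prices only
    a, b = respond(a, b), respond(b, a)

print(a, b)  # 20 20: both bots sit at the cap without ever "talking"
```

The matching rule makes undercutting pointless, and the nudging rule drifts
both prices upward, which is the smoke-filled-room outcome with no smoke.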

------
jakozaur
The Economist also has an article on this: [http://www.economist.com/news/finance-and-economics/21721648...](http://www.economist.com/news/finance-and-economics/21721648-trustbusters-might-have-fight-algorithms-algorithms-price-bots-can-collude)

