

Simple tests can overstate the impact of search-engine advertising - friism
http://www.economist.com/news/finance-and-economics/21581715-simple-tests-can-overstate-impact-search-engine-advertising-ad-scientists

======
mjn
The main paper they're basing the headline on is this one (kudos to them for
actually linking it):
[http://conference.nber.org/confer/2013/EoDs13/Tadelis.pdf](http://conference.nber.org/confer/2013/EoDs13/Tadelis.pdf)

That paper concludes that a substantial proportion of conversions attributed
to search-engine ads are actually sales to loyal users who would convert
through another channel even if you took the ads away, so the conversion rates
themselves may be significant overestimates if what you really want to measure
is the net impact of advertising.

~~~
decryptthis_NSA
The most obvious cases are branded searches like "eBay shoes", though those
probably cost less than generic ones like "shoes". Still, it's a cost. Expect
Google, Yahoo and even Bing to fight this tooth and nail, data or no data.

------
guybrushT
I can't quite see the point this is trying to make. In particular, I can't
understand what the article is trying to say at the very end--"But it also
shows how the online world is getting closer to solving the conundrum he
posed. Far from being an industry where cause and effect remain murky, online
advertising may yet become one area where the dismal science can predict how
to get costs down and profits up." Can someone please clarify what these lines
mean?

What I took away from this is that paid links (what the author calls internet
advertising or search-engine advertising) aren't necessarily more effective at
bringing in sales leads than “organic” search links. Maybe I am missing
something, but is this "finding" a surprise?

~~~
rm999
Yeah, the article is kind of sloppily organized and laid out. Here's the main
message I got from the article:

In online advertising you want to measure your success through the rate of
causal events, i.e. I advertised to someone and he converted (e.g. bought a
product) because he saw the ad. Many advertisers use a correlative proxy
instead, looking at how much they sell vs how much they advertise (I
advertised more and sold more, so my advertising is effective). The problem is
there is a confounding factor, called activity bias, that is unrelated to the
effectiveness of an ad but causes people to both see more ads and buy more of
your product. The more actively people are surfing the internet at any given
time, the more ads they will see, but also the more shopping they would have
done anyway. In other words, a lot of the increase in sales is not caused BY
the ads; it happens for the same reason more ads are being shown.

The solution is a control group of people who are never shown an ad. During
higher-activity times both the control group and the exposed group will be
purchasing more products, but if an ad is truly effective the exposed group
should be purchasing even more than the control group. Without the control you
can't measure effectiveness because you have no ground truth to compare
against.
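The activity-bias story above can be sketched as a toy simulation (all numbers
made up, not taken from the paper): a user's browsing activity drives both ad
exposure and baseline shopping, so a naive exposed-vs-unexposed comparison
overstates the ad's causal lift, while a randomized control group recovers it.

```python
import random

random.seed(0)

TRUE_AD_LIFT = 0.02  # assumed causal effect of seeing an ad (hypothetical)

def simulate(n=200_000, randomized=False):
    """Return the estimated lift: purchase rate of ad viewers minus non-viewers."""
    exposed_buys = exposed_n = unexposed_buys = unexposed_n = 0
    for _ in range(n):
        activity = random.random()                # how actively this user surfs
        if randomized:
            sees_ad = random.random() < 0.5       # coin flip, independent of activity
        else:
            sees_ad = random.random() < activity  # more active -> sees more ads
        base_rate = 0.05 * activity               # more active -> shops more anyway
        buys = random.random() < base_rate + (TRUE_AD_LIFT if sees_ad else 0.0)
        if sees_ad:
            exposed_n += 1
            exposed_buys += buys
        else:
            unexposed_n += 1
            unexposed_buys += buys
    return exposed_buys / exposed_n - unexposed_buys / unexposed_n

naive_lift = simulate(randomized=False)  # observational: confounded by activity
rct_lift = simulate(randomized=True)     # randomized control group
print(f"naive estimate of ad lift: {naive_lift:.3f}")
print(f"randomized estimate:       {rct_lift:.3f} (true lift = {TRUE_AD_LIFT})")
```

In the observational run the exposed group is more active on average, so the
naive lift comes out well above the true effect; randomizing exposure breaks
the link between activity and seeing the ad, and the estimate lands near the
true lift.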

------
drsim
Erm... I hate to say this about an Economist article but it seems to be hot
air: completely ignoring conversion attribution. The headline numbers are
aggregate revenue from ad spend.

I'm intrigued about the 'lumpy behaviour' paper they refer to and will read it
later. It'll be interesting to see how the writer interpreted it for the
article.

~~~
rfergie
I would say that one of the main benefits of the approach used by the
economists studying this is that it completely sidesteps the whole issue of
conversion attribution.

------
boomlinde
"Bears can shit in forests"

~~~
kposehn
"So can popes in the Vatican"

