
How I recorded user behaviour on my competitor’s websites - lukestevens
https://dejanseo.com.au/competitor-hack/
======
GedByrne
I’d like to defend this guy. What he is doing is testing the trust mechanism.

If he went to Google and said ‘I think the trust mechanism is broken’ Google
would say: ‘We know, that’s why we are pushing to move everyone to https.’

‘That isn’t enough. The padlock on the https page gives users a false sense of
security.’

‘We don’t agree with that. Where’s your data?’

Google wouldn’t have accepted this. They have pushed full HTTPS hard, and
suggesting that it has a negative consequence is unacceptable to them.

His experiment has proven the problem. How else could it have been
demonstrated?

Ideally this would have been a large scale study done by academics. But this
guy doesn’t have those resources. Nobody is going to fund this research.

The depressing thing here is that everybody is more interested in calling this
guy a jerk than dealing with the issues he has raised.

Trust on the internet is broken. This guy exploited it with ease. Imagine what
is being done by those who want to scam millions.

But yeh, call him a jerk and then you can bury your unease beneath a big pile
of outrage. It’s fine. Fine. He’s a jerk.

~~~
dejanseo
Thank you. I'm not having a good time at the moment. Anyway, the basis of my
test hypothesis is that people are easily fooled by URLs, both through HTTPS
and through brand recognition (e.g. subdomains), so I conducted a survey which
revealed a very real problem:
[https://dejanseo.com.au/trust/](https://dejanseo.com.au/trust/)

Raw data: [https://dejanseo.com.au/wp-content/uploads/2017/04/survey-
te...](https://dejanseo.com.au/wp-content/uploads/2017/04/survey-
texrixayf2f67gzehkadczhl5m.xls)

~~~
_bxg1
I'm willing to give you the benefit of the doubt and assume you were just
unaware of how things are supposed to be done (reporting exploits to the
vendors privately and waiting for the fix before going public), but man, you
did a fantastically dangerous thing even if it was unintentional.

I'd never condone beating up on somebody on the internet, but I dearly hope
you've learned a valuable lesson here. You've put lots of people in danger of
being exploited. It's not about whether or not _you'd_ do anything malicious
with it, it's about all the other people who now can because Google doesn't
have a fix out there yet.

~~~
rapind
This is the misconception I can't stand: holding individuals responsible for a
product's or company's defect. I thoroughly disagree with the idea that it's
_his_ fault people are vulnerable.

So called _responsible disclosure_ is just a marketing spin term. Disclosing
bugs privately is a favour not a responsibility. All this does is reduce the
risk of bad software decisions. It doesn't solve anything.

How about free market instead? If you run a multi-billion dollar company that
can be hurt by issues like this, then it's on you to make it more profitable
to disclose issues privately. If you can't or refuse to do that, then you're
exposing your company and your customers to risk. Enough with the shunning and
the "responsibility" of individuals which expose bugs.

~~~
EGreg
I sympathize with THIS position. It’s the same blame shifting crap when
“identity theft” becomes your fault, even though any cashier or clerk can
“steal your identity”.

What this marketing spin does is give cover to those who design badly secured
systems.

[http://www.youtube.com/watch?v=CS9ptA3Ya9E](http://www.youtube.com/watch?v=CS9ptA3Ya9E)

Also similar is the “jaywalking” idea, pushed by car manufacturers so that the
default right of way would belong to cars!

[http://amp.charlotteobserver.com/opinion/op-
ed/article650322...](http://amp.charlotteobserver.com/opinion/op-
ed/article65032222.html)

------
dejanseo
Hi everyone! I did this. It was just a random cool idea I wanted to try. It
worked a little too well and I quickly moved it to a disposable site to test
whether the page would get penalised by Google. I got busy with other things and
forgot about it. When I bumped into it again I decided to write about it, for
two reasons: 1) To me it's hard to believe that Chrome would allow for this to
happen in the first place and 2) that Google wouldn't penalise a site doing
this. Well, since the story was published Google tracked down my test page
(most likely by using the source code I revealed on my blog) and completely
de-indexed the whole domain.

~~~
TekMol
Copying someone else's site and tricking their users into using your copy is a
copyright violation and fraud. Nothing cool about it.

~~~
stef25
It's a POC with no intention other than seeing if it would be possible, isn't
it?

~~~
rkangel
While that might mean that it's OK ethically (I'm not sure either way), that
doesn't make a difference legally.

If you go and pick the lock of a random house in your city and get caught by
the police, I very much doubt that the defence "I was just doing it to see if
I could" is going to help you.

~~~
treerock
If you didn't steal anything, what would the charge be?

~~~
code_duck
So anyone can just come walk around inside your house without your permission,
and you think it’s legal and no problem as long as they don’t take anything? I
could see that being the perspective in another culture but it certainly isn’t
how the US works.

~~~
stef25
> you think it’s legal and no problem as long as they don’t take anything

Not only that, they can move in!

Here in Belgium a young couple left the country to do volunteering work only
to hear from friends back home that gypsies had squatted their house. Official
reaction of the mayor of Ghent was "I can't do anything about it ... it's
complicated"

Obviously breaking & entering is a crime but if you're "living" there, only
the courts can kick you out after following all the necessary legal steps.

UK has (had) similar squatting laws but afaik those were mainly (ab)used in
the 90s to throw parties in abandoned warehouses.

~~~
tialaramex
The UK has a lot more defences if your _home_ gets squatted. The rationale is
that we're now considering two parties who both want to live somewhere, and so
the legitimate owner/occupier wins. Where squatters move into somewhere empty,
the court has to weigh, on the one hand, the property rights of the owner who
left it empty and, on the other, the squatters' desire to have a home. These
are unequal rights, and the squatters may win under some circumstances.

The antidote is itself desirable for a community: if you don't want squatters
in a building you never live in, let somebody else live there instead. Now if
it comes to it (which it probably won't), any squatters will lose. Lots of
places that somebody owns and might otherwise stay empty have people living in
them for very little rent for this reason. If you've got a good reputation,
don't care where you live, and don't mind potentially having to leave on very
short notice when the real owner wants it back, you can get very, very cheap
rent in crazy buildings because of this. People live in unused lighthouses,
buildings that used to be part of defence systems, big factories, all sorts of
stuff.

~~~
stef25
There are still valid reasons to keep a property empty, though.

Maybe I don't have the money to provide safe electrical / water / heating /
fire safety systems. But I also don't want a tribe of homeless people in
there.

I also know someone who's kept a property empty for 10 years. He lived there
with his wife; after she passed away, he moved out and never had the courage to
clear out her things.

------
Ajedi32
Surprisingly few comments about the actual attack mechanism here. IMO
discussion of whether the author's PoC was ethical is interesting but far less
important than the question about how to handle the actual vulnerability; this
kind of attack could be used for far more damaging things than just recording
user behavior. (Such as phishing.)

IMO "get rid of the browser history API" (as the article author recommends)
isn't the right solution. The history API is important, as it's the only way
to make the back button work as expected in single-page applications, or in
multi-page applications that don't trigger a full page reload when you click a
link. Rather, I'd suggest the following mitigations:

1. Require a user gesture for `History#pushState` and `History#replaceState`

2. Follow Firefox's example and highlight the most important part of the
domain name in the browser UI

3. Don't label HTTPS sites as "Secure", as this can be misleading (Chrome's
planning to do this starting next month
[https://blog.chromium.org/2018/05/evolving-chromes-
security-...](https://blog.chromium.org/2018/05/evolving-chromes-security-
indicators.html) )

4. Give the back button a different icon when it's taking you to a different
domain (maybe "Up" instead of "Back"?)

Any other ideas?
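Mitigation 1 would have to live inside the browser itself, but the policy is simple to model: `pushState` is honoured only within a short window after a user gesture. Here is a sketch of that policy as a wrapper; the names and the one-second window are illustrative assumptions, not anything Chrome or Firefox actually implements:

```javascript
// Gesture-gated History wrapper: pushState succeeds only if a user gesture
// (click, keydown, ...) was recorded within the last `windowMs` milliseconds.
function gestureGatedHistory(history, now = Date.now, windowMs = 1000) {
  let lastGesture = -Infinity;
  return {
    recordGesture() { lastGesture = now(); }, // wire to click/keydown handlers
    pushState(state, title, url) {
      if (now() - lastGesture > windowMs) return false; // refused: no gesture
      history.pushState(state, title, url);
      return true;
    },
  };
}
```

Under this policy the attack's silent history rewrite on page load would be refused, while a single-page app responding to a real click would still work.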

~~~
yashap
Another possibility - if the referring page is a different domain, overriding
back is ineffective (the browser just does a “real” back in these cases).

~~~
Ajedi32
I don't understand. Are you suggesting that if you arrive at a site from
Google, the history API should just not work?

For example, let's say a user arrives at a single-page application from
Google, and clicks a link on that page to get more information. The site adds
a history entry with pushState, but doesn't reload the entire page. Are you
saying that in this case, when the user clicks back they should get sent back
to Google instead of to the site's home page? If so, that seems like rather
unexpected behavior. And if not, isn't the attack still viable?

~~~
yashap
Hmm touche, this wouldn’t fix much

~~~
3pt14159
No; you were right. The browser needs some AI to be smarter but this is all
still possible.

------
lukestevens
This is an interesting yet disturbing case of blackhat SEO and phishing, where
the site owner hijacks the back button and sends visitors to fake sites where
he can observe their behaviour.

FTA:

 _Here’s what I did:

1. User lands on my page (referrer: google)

2. When they hit the “back” button in Chrome, JS sends them to my copy of the
SERP

3. A click on any competitor takes them to my mirror of the competitor’s site
(noindex)

4. Now I generate heatmaps, scrollmaps, and record screen interactions and
typing._
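The steps above hinge on one browser feature: `history.pushState` lets the landing page insert a dummy history entry, so the user's first Back press fires a `popstate` event on the same page instead of leaving it. A minimal sketch of that mechanic, reconstructed from the quoted steps and not the author's actual code (`win` stands in for the browser `window` so the logic can be exercised outside a browser):

```javascript
// Back-button hijack mechanic (illustrative only).
function installBackHijack(win, fakeSerpUrl) {
  // Insert a dummy history entry so pressing Back lands on this page again,
  // firing popstate instead of returning to the real results page.
  win.history.pushState({ hijacked: true }, "", win.location.href);
  win.addEventListener("popstate", () => {
    // The "back" navigation is redirected to a look-alike SERP.
    win.location.replace(fakeSerpUrl);
  });
}
```

From there, every link on the fake SERP can point at attacker-controlled mirrors, which is steps 3 and 4.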

~~~
mrob
Yet another reason to browse with JS disabled by default.

~~~
King-Aaron
That's a reasonable course of action until you need to use the internet for
_pretty much anything_.

~~~
JdeBP
Experience teaches that that is a vastly exaggerated statement. There remains
quite a lot of the World Wide Web that does not require Javascript.

And of course it is pretty much not required at all for using the Internet
outwith the World Wide Web.

~~~
kmbriedis
And then classic React enters the building

------
chatmasta
Somewhat related, Google AMP is also destroying the ability for users to trust
URLs. In fact it’s kind of the inverse problem; the URL bar says google.com
when the user expects to be on another website. I wouldn’t be surprised if
observing the AMP pattern subconsciously made users less suspicious of the
trick in OP.

It’s also a bit rich to see all the outrage here and deranking by google,
since hijacking/proxying to sites in search results is _exactly what AMP
does._

~~~
joshuamorton
As far as I know, site owners essentially have to opt in to AMP by restricting
themselves to a subset of not-exactly-standard web design methods (and may
need to explicitly opt in, I forget).

So I don't see a way to call AMP hijacking, since it's done with the
developer's permission.

~~~
technion
My blog was built when AMP was new and looked cool. It was done with my
permission - but the content was not vetted by Google in any way.

People have told me they asked Google to take down my blog, because they
thought it was hosted there.

------
nkozyra
I don't understand why you would have been expected to report this to Google.
It's not an issue or bug with Google, it's a simple gray hat social
engineering trick.

People linking to fake sites as a dark pattern is nothing novel; you just did
so to capture analytics instead of, say, installing a virus or taking
someone's credentials. That said, you certainly could have done the latter and
gotten views into your competitors' user portals. In my head that's not
fundamentally different from, or more unethical than, what you ended up doing.

I don't necessarily begrudge you for trying it, but I don't think it's for a
noble reason nor do I think it was particularly innovative and the end result
is Google doing something unsurprising.

~~~
UncleMeat
The expectation isn't to report to Google. The expectation is to not do this on
live sites affecting real people.

~~~
nkozyra
The reference is to text in the article, not comments from HN.

From the second paragraph:

> Many are suggesting the right way is to approach Google directly with
> security flaws like this instead of writing about it publicly.

------
nothrabannosir
For context: Firefox greys out anything that is not the "real" domain, which
remains black. So:

google.com.fakesite.io/foobar

becomes:

(grey "google.com.")(black "fakesite.io")(grey "/foobar")

This makes it at least a little more obvious you're not on Google.

Although that's still a tricky one for non technical users to protect against.
Aside from EV, I can't immediately think of anything else a browser could
systematically do to guard against this, to be honest. Blacklists etc, but
that's very unsatisfying.

It's a pretty old problem, to be fair. I remember almost being phished this
way myself back on Myspace, were it not for Firefox's blacklists catching the
form submission.

Domain names being little endian has been one of the most expensive web sec
mistakes in history.

~~~
Invictus0
> Domain names being little endian has been one of the most expensive web sec
> mistakes in history.

Can you clarify what you mean by this?

~~~
SquareWheel
Presumably that authority works from right-to-left. .com, then domain, then
subdomain. It would be easier to gauge trust if it were left-to-right.
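In code terms: the part of a hostname that actually identifies the site is its rightmost labels, so anything an attacker prepends on the left is noise. A deliberately naive sketch that illustrates the point (it assumes a single-label public suffix like `.io` or `.com`; real code should consult the Public Suffix List, which handles multi-label suffixes like `.com.au`):

```javascript
// DNS labels gain authority right-to-left, so the registrable domain is the
// public suffix plus the one label to its left; everything further left is
// an arbitrary subdomain under the site owner's (or attacker's) control.
function registrableDomain(hostname) {
  const labels = hostname.toLowerCase().split(".");
  return labels.slice(-2).join(".");
}
```

So `registrableDomain("google.com.fakesite.io")` yields `fakesite.io`, which is exactly the part Firefox renders in black.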

~~~
Invictus0
Ah I see, thank you.

------
3eto
> Record actual sessions (mouse movement, clicks, typing)

> I gasped when I realised I can actually capture all form submissions and
> send them to my own email.

How many bad actors have been doing the same and for how long? This doesn't
sound like something Google should just brush under the carpet and expect no
one else is doing it. Although I wish the author had reached out to Google
first to see how they would have handled it, I thank him for publishing it.

~~~
dejanseo
You're welcome.

------
Syzygies
The big issue here is: Who does our browser work for?

People worry that self-driving cars will take us to "promoted" coffee, if
we're not specific. More generally, software agents as a rule are loyal to
their creators, not to us. That we put up with this is absurd.

Browsers should be intelligent agents that are entirely loyal to the person
browsing. For example, no site should be able to tell whether we see ads or
not. As one site-by-site option, process the ads exactly as if they were
reaching our senses, but don't actually render them.

Not even having a back button loyal to us? That's obscene. Copyright
infringement is the MacGuffin in this movie; the real story is that we're
wusses for having totally lost this balance of power struggle in our personal
software.

~~~
prepend
OSS is our best bet because the users can be the creators. Firefox is a mixed
bag on this.

What I want is like the equivalent of fiduciary duty [0] but for AI and
software. This is why I don’t like the idea of “free” agents driven by ad
revenue.

Currently I have to manually review and build my own stuff. Not sustainable.

[0]
[https://en.wikipedia.org/wiki/Fiduciary](https://en.wikipedia.org/wiki/Fiduciary)

------
encoderer
Disturbing, fascinating, obvious in hindsight.

Here’s another angle: a “bounce” back to Google too quickly is a negative
ranking signal. Keeping users from going back to Google, while making them
think they in fact did, makes this black-hat SEO as well.

~~~
lallysingh
But Google doesn't see the bounce back. The site bounces back to a copy of
Google's result page.

~~~
bagels
That is what the parent comment points out. He benefits from the fact that
users can't return to google.

------
paulryanrogers
Why do browsers allow changing the back button history before the visitor
arrived at the domain? Seems like a subtle cross origin attack if that is
truly what's happening.

~~~
lathiat
I can imagine you could work around that issue just by redirecting once
through your own site first.

On the surface it sounds like a difficult problem to solve safely. On a
related note, I often have the back button not work because I hit back and
Chrome cached a redirect to some other page, so it immediately redirects again
before I can even spam back again. I need to long-press back to get a longer
history and go back further.

This is a really interesting "attack" to see.

~~~
lathiat
This is interesting knowledge:
[https://smerity.com/articles/2013/where_did_all_the_http_ref...](https://smerity.com/articles/2013/where_did_all_the_http_referrers_go.html)

------
yjftsjthsd-h
I'm surprised he's willing to put his real name to this. I can't immediately
see that it's actually illegal, but it still screams red flag for unethical
behavior.

~~~
adtac
Not just his name, his whole company even!

------
saintPirelli
How does a person get so much flak for hacking - on Hacker News?

~~~
wu-ikkyu
Maybe because we're talking about Google? Seems like whenever Google is called
into question on HN I've noticed a lot of appeals to authority and people
defending them to a fault.

------
hw_penfold
In many ways this is malicious deception. Any instance where a login form is
included in the scraped mirror represents an attack on a user: a phishing
attempt.

If someone did this in the wild, in an uncontrolled situation involving random
strangers, it risks serious misinterpretation, and worse.

~~~
bhelkey
> I had this implemented for a very brief period of time (and for ethical
> reasons took it down almost immediately, realising that this may cause
> trouble)

The author did this in the wild, involving random strangers.

------
markdown
While we're on this topic, I have a related situation and wonder if my case is
common:

I built a brochure site for a mom-and-pop business a decade ago. The domain
expired some time ago, and it was snapped up by someone who repopulated it
with the original content scraped from the Internet Archive. It looks and
behaves exactly like it did when I controlled it, except that a phrase in the
frontpage content now links to some supplement sale site.

Is there a name for this SEO bullshittery? What can someone do who isn't
American and therefore can't file a DMCA takedown?

~~~
brosirmandude
Sounds like that old site got bought by someone building a PBN (Private Blog
Network).

They buy old domains, get the old content from archive.org, and then add a
link in somewhere to their "money" site, or to another site in their tiered
linking structure.

It's a BS tactic that can sometimes still work, but it's a LOT of effort to
really keep up with it. TBH it's much easier to just actually make a site
people want to use and reach out to people who might be interested in sharing
it.

Hosting/managing 100's of sites just to prop up 1-2 money sites is too labor &
time intensive for most of us. That said, there are some people making good
money still using these tactics, as shady as they may be.

------
jackgolding
I've seen Dejan speak and I'd recommend following his work because he does
very interesting black hat things like this in SEO. He has so many out of the
box ideas like this which are brilliant.

------
mixedbit
A long time ago I wrote about this technique:
[http://mixedbit.org/referer.html](http://mixedbit.org/referer.html) Besides
back button navigation, I also had ideas to use a fake malware warning or just
take a victim directly to fake search engine results.

------
digitalboss
Update from site "Google’s team has tracked down my test site, most likely
using the source code I shared and de-indexed the whole domain."

~~~
londons_explore
They don't up- or down-rank individual sites for stuff like this. They've
probably implemented back-hijack detection for the whole web.

~~~
dejanseo
I just added a screenshot from search console:
[https://dejanseo.com.au/competitor-hack/](https://dejanseo.com.au/competitor-
hack/)

There's no manual penalty notice.

------
nsmog767
This is easy to hate on, and certainly ethically dubious....but man do I love
it.

------
caffeinated_me
This seems to have some fairly scary security implications if used
maliciously, but I can't think of a good way to protect against this.

Does anyone know of a browser extension to limit access to the history API?

~~~
htgb
I started using NoScript a while back, just to see what the web is like
without Javascript. My plan was to uninstall it when it got too annoying, but
to my surprise it's actually not bad at all. I'm quite lax in whitelisting
domains I actually trust, but even then it's nice that it doesn't load
Javascript from the umpteen other domains, as is so often the case.

Of course it's a _very blunt_ weapon for blocking abuse like what's described
in this blog post, but for sure it works.

------
jeswin
A couple of years back I was talking to someone who did SEO for a popular
education network. The company was spending millions of dollars every month on
SEO and advertising.

Their modus operandi went like this:

1. Offer money to license or buy a smaller competitor's content

2. If that doesn't work, crawl and clone the site

3. Pump a lot of money into Google Ads, so that the cloned site now appears
as an ad above the legitimate site. Google makes such scams easier now by
making ads look like organic results - a non-technical user would hardly
notice.

4. The legitimate site just dies.

I was asked to build a tool which crawls sites, which I refused. But I learned
how professional SEO works.

~~~
stef25
Sorry, calling BS on this one. By simply copying a site you get flagged as
having duplicate content.

Then there's DMCA. I've seen an e-commerce site's homepage get de-indexed,
killing the business, due to 1 single image being used for which the site
owner didn't have copyright.

SEO undoubtedly has many shady practices, but "professional SEO" is actually
really difficult and involves much more than cloning competitor sites and
somehow getting away with it.

~~~
dorgo
"Duplicate content" is a problem for both the original site and the copycat.
But the copycat doesn't rely on SEO in this example; it just buys traffic on
AdWords. So the duplicate-content penalty would harm only the original site.

~~~
stef25
Google must surely be able to tell the difference between the original site
and copying site because of timestamps. How would an Adwords campaign change
that?

~~~
dorgo
Maybe Google can tell the difference, but when I did SEO some years ago, we
didn't rely on it. Duplicate content was considered a problem regardless of
who published first.

Duplicate content is a problem for organic rankings. In paid search it may be
a problem for the quality score (not sure). But even if it impacts the quality
score, you just have to pay more to achieve the same result.

------
megous
Is this kind of blatant censorship, where Google delists information it
doesn't like, common? It's not like the experiment was ongoing, is it?

------
gcb0
Fun fact: you can do the same thing again, but use the AMP version and call
yourself an AMP provider, just like Google does.

Technically they won't be able to complain, because you can say that providing
AMP content implies they want to be served by you, and you can fiddle as much
as you want (e.g. adding tracking code), just like Google does when it serves
someone else's content as AMP.

------
gus_massa
I just realized that it is not necessary to hijack the back button!

1) Watch out for users coming from Google (or Bing) using the referrer field.

2) Randomly choose 5% of them and redirect them to your shady domain using a
temporary 303 redirect. [If Google notices this, they will hate you.]

3) Host a copy of your competitor's page on the shady domain, with all the
tracking enabled. [This is illegal! You may get a lawyer's C&D, a nastygram,
or worse.]

I guess that when the user finds your site in Google and clicks the link, most
of the time they will not be sure which link they chose, so they will not
notice the change. And if they realize that they went to the wrong site, they
will click the back button, click the search result again, and get the normal
page like the other 95% of the people.

This is probably more credible if the search query in the referrer doesn't
have your site in it, so the user is looking for any generic site that
includes you and your competitors.

As I said before, this is shady and some parts are illegal, so don't do it.
Google may demote your site, and also you can get some legal problems.

------
tralarpa
This is really a good example of why it is so difficult for security experts
to do research and experiments where real users are involved.

What Mr. Petrovic did is illegal in most developed countries: copyright
violation (copying web pages) and monitoring and storing user behavior without
their consent (and, even worse, by phishing). It doesn't matter that he did it
for a "very brief period of time (for ethical reasons)". LOL. If I tried this
kind of stuff where I work, I would have a long unpleasant talk with our
legal/ethics department afterwards. I cannot even do a network scan in the
Internet without first notifying God and a couple of lesser gods.

I am also wondering whether that's good publicity for the author's company.
The author is basically saying: "We are doing things without being fully aware
(or without caring) of the legal consequences. Are you sure you want to be our
customer?".

~~~
dvfjsdhgfv
When you do security work, that's an important part of your job. Sure, in many
scenarios like traditional pentesting you can probably do fine within the
legal boundaries of most jurisdictions, but as soon as you do serious security
research, where you actually test your ideas in practice, you're likely to
cross the line sooner or later. It's the difference between "it should
probably work" and "yes, it worked, I tried it." If you're afraid of the
latter, don't get involved in security, as you'll get burned sooner rather
than later.

~~~
tralarpa
> you're likely to cross the line sooner or later.

That's basically the opposite of what security researchers working for
companies and research institutes are doing. Document everything, get written
consent of involved parties and sometimes even inform the police about a
planned action. Make sure that you (a) don't cross the line or (b) move the
line legally further away.

Of course, there are security experts who don't care about that. But they
usually don't publish their results on a website with their real name.

~~~
dvfjsdhgfv
I'd argue the most interesting and important research is done in this way.
It's not that these security experts "don't care", it's just the very nature
of certain problems that you need to test them against real users (as opposed
to, say, testing an exploit against a system). Consider, for example, honeypot
research: the very nature of such scenarios is that you can't even hint that
users are tracked, let alone ask for their consent.

~~~
tralarpa
The legal aspects of honeypots were discussed a lot when they became popular.
Just two examples:

[https://www.symantec.com/connect/articles/honeypots-are-
they...](https://www.symantec.com/connect/articles/honeypots-are-they-illegal)

[https://www.researchgate.net/profile/William_Yurcik/publicat...](https://www.researchgate.net/profile/William_Yurcik/publication/3955168_Internet_honeypots_protection_or_entrapment/links/0deec53b570e0f0b3e000000.pdf)

And they are still discussed, for example in the light of the new EU laws:

[https://jis-eurasipjournals.springeropen.com/articles/10.118...](https://jis-
eurasipjournals.springeropen.com/articles/10.1186/s13635-017-0057-4)

~~~
dvfjsdhgfv
And they show quite well that you can't have your cake and eat it. For example,
Spitzner suggests displaying a banner... With all due respect, it's
ridiculous. The whole point of this game is to make the attacker believe
they're attacking the real system, not to make sure they "waive their privacy
rights." I don't think anyone serious about really analyzing the behavior of
attackers would ever care about these things. What is more dangerous is if a
honeypot is used to attack another resource and you're sued by the owner, for
example. It's really hard to avoid breaking a couple of eggs, no matter how
you try.

------
nmstoker
What's concerning is that the post author seems not to see the problem with
trying to sit on both sides of the fence at once.

As others have said, the way this was done is likely to be against numerous
laws in most major jurisdictions. If you wish to do this as a PoC, then simply
put a notice on the page that initiates it and use dummy "competitor" content,
so you've got some semblance of user consent/transparency without copyright
infringement. That would work just as well for flagging it as a concern to
others.

Or if being up-front about it is not the side you are on, do this fully
admitting that it's wrong and face any consequences (it doesn't sound like
this was the post author's aim, especially given the follow-up comments).

"For a very brief period of time" doesn't cut the mustard here, just as it
wouldn't with briefly stealing something from a bank or briefly kidnapping
someone (both crimes where one could sometimes argue there may be no permanent
damage, although even that likely isn't true in many cases).

~~~
ddalex
The problem I'm seeing is not that the author did something unethical (there
are plenty of black hats out there with no such concerns), but that page
content can modify the browser chrome's behaviour, and that users trust the
browser chrome a lot more than the content (as they should).

As a workaround, I recommend using separate Firefox containers for big sites,
since the big sites are the main attack surface for a lot of people: e.g.
containers for Google, Facebook, and Microsoft. This attack would be stopped
by using a Google container, as the back button will not work once you step
out of the container to go to the result page.

Sure, it won't help you on a targeted attack, but will help a lot with this
kind of drag-net attack.

~~~
lysium
I don’t think your proposed workaround would help. The user stays on the
malicious domain. Unless your container is clearly marked visually, only the
URL shows the savvy user that something is off.

------
everydaypanos
1. Years ago, when I was learning web development, I bought a TLD and just
copy-pasted Amazon’s login page to check how it works. Amazon somehow found
out about this, and Google punished that TLD after the incident; it just
couldn’t go up in rankings after that.

If I remember correctly, they had even put that TLD on sites that report/list
“phishing” sites, so if you Googled the TLD you would also get the “they are
fraud” results.

2. I think that most pro users just new-tab everything and go from there. It
seems to me that going in and out of search results all in one tab is kind of
slow too.

~~~
pawal
You bought a TLD when learning web development? That seems extreme.

~~~
gammatrigono
This must be sarcasm.

~~~
trowway21
This is certainly sarcasm.

~~~
solarkraft
Yesterday I was learning about processors, so I bought a foundry.

Doesn't everyone do this?

~~~
gammatrigono
Misread "TLD" as "domain." Whoops.

------
glandium
One more reason to kill the referer HTTP header, I guess.
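Site owners don't have to wait for browsers to kill it: the standard Referrer-Policy mechanism already lets a page suppress the header on outgoing requests and navigations, either via an HTTP response header or a meta tag:

```html
<!-- Send no Referer header at all for requests leaving this page -->
<meta name="referrer" content="no-referrer">
```

The equivalent HTTP response header is `Referrer-Policy: no-referrer`; `strict-origin-when-cross-origin` is a less drastic option that strips the path and query but keeps the origin on cross-origin requests.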

~~~
yborg
[https://github.com/meh/smart-referer](https://github.com/meh/smart-referer)

------
Zalastax
> I had this implemented for a very brief period of time (for ethical reasons)
> and then moved to one of my disposable domains where it still runs after
> five years and ranks really well, though for completely different search
> terms.

Am I reading this correctly? He's been doing this since 2013 and still wants
to use the white hat card?

~~~
Zalastax
Instead of downvoting I'd love if you could reply instead...

------
danvoell
Interesting hack. Sorry about the whole Google de-indexing thing. My question
would be: did you really gain any useful insights? From competitors, you can
normally figure out which page is their most viewed and then figure out how
they merchandise it on their homepage, without "hacking" it.

------
sattoshi
Just remove pushState from the History API; replaceState is fine.

pushState is totally unnecessary, since we already had a technology for this:
anchors!

Instead of site.com/my/page, it's site.com/#/my/page.

What is wrong with this? It does literally everything you need and is
supported by most routers out of the box!
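
A minimal sketch of the hash-based routing the comment describes (pure string handling, framework-free; the URL is just an example):

```javascript
// Hash-based routing: the "route" lives after the #, so navigating
// between routes never needs pushState, and every route change is an
// ordinary history entry owned by the browser, not by page scripts.
function hashRoute(url) {
  const i = url.indexOf('#');
  if (i === -1) return '/';              // no hash: treat as the root route
  const hash = url.slice(i + 1);
  return hash.startsWith('/') ? hash : '/' + hash;
}

console.log(hashRoute('https://site.com/#/my/page')); // '/my/page'
```

In a browser you would read location.hash and re-render on the hashchange event, which is exactly what hash-mode routers do out of the box.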

------
tabtab
If Chrome outright disables JavaScript's ability to alter the "Back" path, it
may break some (poorly designed) applications. A compromise is to prompt with
a warning.

~~~
jstarfish
Break them then, with no warning. Letting JavaScript trump browser controls
(or spam confirmation popups) is a problem that never should have been allowed
to live beyond the '90s.

~~~
tabtab
But changing the behavior suddenly can make existing applications outright not
function. That results in angry customers. A prompt is a decent compromise.
Example prompt: "This website has altered the web address of the Back button.
This can be risky. Do you want to use the application's version of the web
address, or the original address? [Altered address, Original address, Cancel
'Back', More-info]"

------
wu-ikkyu
Will Google be pushing out a fix for this vulnerability?

------
aquarin
It seems my habit of opening Google links in new tabs with a right click has
more meaning now. I initially used this to avoid sending referrer information.

~~~
wiether
It doesn't change the referer, but it keeps you from falling into this
particular trap.

Anyway, using a new tab for each new website you visit is the way to go I
think.

~~~
rellui
Maybe it's a good trade-off for this to become default behavior in browsers
(in the background, unseen by users).

~~~
dennisgorelik
If the user does not see that she is operating in a new tab, she can still
click "back" and would still be vulnerable to the fake Google SERP trick.

------
slim
Bonus: you steal some ranking clout from the competitor, since they don't get
that precious click on Google search results.

------
jhoh
That huge fixed navbar on mobile is just horrible. Can't read the article
because of it.

------
moltar
That’s genius

------
bo1024
And people still think javascript is a good idea...

------
pasta
Presenting yourself as someone else is called fraud in my book.

Changing the back button might be clever, but all the rest is just simple.
People don't do this because, in a lot of countries, I think it's illegal.

There is a way you can protect your site a little from this: add canonical
tags to all your pages. When an attacker clones your pages to hijack the back
button, they will have a hard time getting the cloned pages up in the results.
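
For reference, a canonical tag is a single line in each page's head (the URL here is a placeholder for your own page):

```html
<!-- Tells search engines this URL is the authoritative copy, so any
     clone of the page points rankings back at the original -->
<link rel="canonical" href="https://www.example.com/products/widget">
```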

~~~
cm2012
In this case he NoIndexed the clone pages anyway.

------
shruubi
Honestly, it doesn't shock me in the slightest that someone who markets
themselves as an SEO expert would not only do something as unethical as this,
but also brag about it, as though they think they've done something they
should be proud of.

~~~
cm2012
Is this that different from publicizing bugs? He tested it for a small amount
of time, noticed a real security vulnerability (he could collect leads), and
publicized his findings knowing Google would likely punish him for it.

It's mildly unethical at worst, considering he could have happily done
profitable leadgen at scale and would likely never have been caught if he had
kept quiet.

~~~
londons_explore
Except he would have been caught by a few of his users.

If I notice a scummy page impersonating Google, I'm gonna alert Google so they
can do something about it (for example, add the page to the Safe Browsing
list).

~~~
cm2012
Google very, very rarely does anything with individual complaints sent in
through their automated systems. You'd have to get a post to the top of HN.

