
The Verge Hack, Explained - dzgoldman
https://blog.theabacus.io/the-verge-hack-explained-7942f63a3017
======
danielvf
"Exactly how the largest protocol-level hack of a cryptocurrency in recent
memory could precede said cryptocurrency increasing in price and then
announcing a partnership with the most trafficked porn-site on the internet is
a question I’m forced to leave open-ended; my personal pet-theory is that it
has something to do with the fact that the world makes no sense and human
beings are all completely out of their goddamn minds."

~~~
TheDong
Please don't quote portions of the article with no added commentary or
insight.

We're all capable of reading and recognizing well-written portions of the
posted content.

~~~
sean2
This was a particularly interesting bit though and I was hoping some people
might explain/comment more about it.

It seems that getting your crypto-currency hacked gives publicity and boosts
value! See also: the Experian hack.

~~~
SlowRobotAhead
Clearly any plans with pornhub.com would have been made prehack. As to the
value of the cryptocommodity (I refuse to call them currencies)... it’s like
the author said.

------
tromp
This hack results from a basic design error: for difficulty adjustment to work
properly, the window of recent blocks it considers must be relatively immune
to time stamp manipulation. In other words, the allowed timestamp drift must
be much smaller than the adjustment time window. Bitcoin allows a 2 hour
drift, but has an adjustment window of 2 weeks.

In verge's case, both are identical, at 2 hours. This is an open invitation to
difficulty adjustment abuse.

~~~
gtrubetskoy
> In verge's case, both are identical, at 2 hours.

Naive question - is Verge's adjustment window defined in block time (as
opposed to "human" time)? Because

> Bitcoin allows a 2 hour drift, but has an adjustment window of 2 weeks.

The adjustment window is technically 2016 blocks, which is about two weeks in
human time, but can in theory vary greatly if mining capacity fluctuates.

~~~
tromp
I went looking through the source code and found that the adjustment window is
indeed defined in block time as PastBlocksMax = 12 while
nProofOfWorkTargetSpacing = 150 (half a minute for each of 5 different pow
algorithms), giving an expected adjustment window of 12 * 150 = 1800 seconds.

Apparently, the adjustment window, which needs to be way longer than the
allowed timestamp drift, is in fact way shorter...
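
To make the mismatch concrete, here is a toy retarget loop in Python. This is a simplified sketch, not Verge's actual Dark Gravity Wave code; the clamp factor of 3 is an assumed parameter. With a 12-block window (1800 s expected) and a 7200 s allowed drift, a single spoofed timestamp can stretch the apparent window far beyond its real length and slash difficulty every window:

```python
# Toy moving-window difficulty retarget (NOT Verge's real code).
# Parameters follow the comment above: 12-block window, 150 s target
# spacing, 2-hour (7200 s) allowed timestamp drift.

TARGET_SPACING = 150   # expected seconds per block
WINDOW = 12            # PastBlocksMax
MAX_DRIFT = 7200       # allowed timestamp drift in seconds

def retarget(difficulty, timestamps):
    """Scale difficulty by how fast the last WINDOW blocks appeared to arrive."""
    span = timestamps[-1] - timestamps[-WINDOW - 1]  # apparent elapsed time
    expected = WINDOW * TARGET_SPACING               # 1800 s
    # Clamp so one window can't move difficulty more than 3x (assumed).
    ratio = max(min(span / expected, 3.0), 1 / 3.0)
    return difficulty / ratio

honest = [i * TARGET_SPACING for i in range(WINDOW + 1)]
print(retarget(1000.0, honest))     # -> 1000.0, blocks are on schedule

spoofed = honest[:]
spoofed[-1] += MAX_DRIFT            # drift the tip the full 2 hours
print(retarget(1000.0, spoofed))    # -> ~333.3, difficulty cut by the 3x clamp
```

Because the drift (7200 s) dwarfs the window (1800 s), the attacker hits the clamp every single window, and repeating the trick collapses difficulty geometrically.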

------
w0mbat
The gate metaphors in the article are so messed up they distracted me from the
actual story.

    
    
> Gates Only Work When They’re Raised

Well normally gates swing open and shut, maybe you mean some kind of
portcullis, like in a castle? You have to raise those to open them.

    
    
> a gate that’s far too strong to break through and too high to climb
> over — this hack gets past it by finding a way to lower it so close to
> the ground that it can be stepped over

So now gates rise out of the ground? I don't think the author has seen an
actual gate.

~~~
tadah
You're holding it wrong. Think:
[https://en.m.wikipedia.org/wiki/Floodgate](https://en.m.wikipedia.org/wiki/Floodgate)

------
kang
One can always invent blockchain buzzwords like "Dark Gravity Wave" but even
copying code requires understanding of algorithms.

~~~
keyle
I snorted when I first heard the term. An old co-worker used the phrase
'universal differential' for weeks. It turned out to be an IF statement.

------
sterlind
This hack underscores the need for formal modeling systems like TLA+ or Coq.
Essentially, you define _successor state axioms_, where you:

1. start with a seed state,

2. non-deterministically choose the next step (e.g. submit a tx), and

3. ask the system to prove the inverse of an invariant you want to check
(e.g. hash power cannot fall below X% of total network power).

Decentralized protocols have bugs; trust only what you can prove.
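
The exploration loop behind those steps can be sketched in plain Python (a toy explicit-state checker in the spirit of TLA+'s TLC, not the real tools; the states, successor function, and invariant below are stand-ins):

```python
from collections import deque

def check(initial, successors, invariant, max_states=10_000):
    """BFS over reachable states; return a state violating the invariant, or None."""
    seen = {initial}
    queue = deque([initial])
    while queue and len(seen) < max_states:
        state = queue.popleft()
        if not invariant(state):
            return state              # counterexample found
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return None

# Example: a counter that each "transaction" may increment or double;
# claimed invariant: it always stays below 40.
violation = check(
    initial=1,
    successors=lambda n: [n + 1, n * 2] if n < 100 else [],
    invariant=lambda n: n < 40,
)
print(violation)  # some reachable state >= 40, disproving the invariant
```

Real model checkers add symmetry reduction, fairness, and temporal properties on top, but the core idea is exactly this: enumerate reachable states and hunt for an invariant violation.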

------
gepeto42
Not sure who said it first, but cryptocurrency truly is the Internet-wide bug
bounty.

~~~
hudon
What does that mean? How is it different than Google being an Internet-wide
bug bounty?

~~~
mido22
it means you won't get magical internet money when you hack Google

------
bronson
"Reality, however, often has a way of stubbornly refusing to conform to the
axioms of free market economics."

Great article.

------
49bc
> _Indeed, major cryptocurrencies like Bitcoin and Ethereum have maintained
> their security quite well — better, arguably, than any other digital asset
> /payment system in history_

Am I missing something? And here I thought having to hard fork your currency
goes in the “not very secure” bin.

------
SlowRobotAhead
Can someone who touts the “but it’s decentralized!!” aspects of these systems
please explain to me how an algorithm that adjusts mining difficulty is not a
centralized and possibly destructive feature in the control of a handful of
people?

------
rando444
I'm very curious about the price increase.

The only thing that comes to mind as to how something like that would be
possible is if the news trends generated by this were somehow driving
algorithmic trading... either that, or people investing in something they're
just hearing of for the first time. But even these possibilities don't seem
rational.

It just doesn't seem to make much sense... unless the whole thing was being
driven and pushed by some larger unknown actor with a lot of resources ..
(state government, goldman sachs, etc.)

~~~
SlowRobotAhead
But what lunatic reads of the hack - then invests!?

~~~
sangnoir
I'm guessing an algorithmic one, which invests based on "buzz" and/or price
trends leading to positive feedback cycles.

------
nextstep
> In both cases, this hack presents a strong argument for tending towards
> sticking to things proven to work and to be wary of overcomplicating things
> and thereby introducing unnecessary risks when people’s financial assets are
> involved. Which, I suppose, means two points for team Bitcoin.

What? The same bitcoin that is betting everything on an untested and
incredibly complex second-layer protocol called "lightning network" instead of
just scaling with larger blocks?

~~~
osteele
> instead of just scaling with larger blocks

Blocks will need to be ~5TB to match Visa transaction volume. Some lightning
network advocates are concerned about the effect that a size increase
necessary to support even a fraction of this volume would have on mining
centralization.

Other concerns are listed here
[https://en.bitcoin.it/wiki/Block_size_limit_controversy](https://en.bitcoin.it/wiki/Block_size_limit_controversy)
and here
[https://en.bitcoin.it/wiki/Scalability_FAQ#General_Block_Size_Increase_Theory](https://en.bitcoin.it/wiki/Scalability_FAQ#General_Block_Size_Increase_Theory)

~~~
pitaj
> Blocks will need to be ~5TB to match Visa transaction volume.

No, it's on the order of 3 gigabytes. Visa claims they can handle up to 25k
Tx/s.

226 B / Tx. 600 sec (10 min) per block.

(25000 Tx / s)(226 B / Tx)(600 s / block) = 3.39 • 10^9 B / block

Visa normally handled around 2k Tx/s, which would only require a few hundred
megabytes per block.
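
The arithmetic checks out; using the parent's figures (226 bytes per transaction, a 10-minute block):

```python
TX_SIZE = 226      # bytes per transaction (parent comment's figure)
BLOCK_TIME = 600   # seconds per block (10 minutes)

def block_bytes(tx_per_sec):
    """Block size needed to absorb a given transaction rate."""
    return tx_per_sec * TX_SIZE * BLOCK_TIME

print(block_bytes(25_000))  # -> 3_390_000_000 bytes, i.e. ~3.39 GB per block
print(block_bytes(2_000))   # -> 271_200_000 bytes, i.e. ~271 MB per block
```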

~~~
democracy
Re Visa TPS:

*Source: 50,000 transactions per second, as of July 15, 2017

[[https://ripple.com/xrp/](https://ripple.com/xrp/)]

------
keyle
So basically this time warp hack is allowing eval() on the chain history...
What could go wrong?

------
EGreg
_This is because, in decentralized networks that obstinately refuse to grant
any special authority to third parties, accurately enforcing time
synchronization is no simple matter._

That depends on what you’re going for. The concept of time is actually a local
phenomenon, as relativity shows. To compare whether events A and B happened
first, you must first trace some path from A and B to a third party C (maybe C
= A or B if you really want) and then ask C which came first.

We had to deal with this when building our DLT. See this:
[https://intercoin.org/technology.pdf](https://intercoin.org/technology.pdf)

~~~
danbruc
Relativistic effects are irrelevant at the level of precision required for
running a system distributed across the surface of Earth; what you are
describing is just due to a lack of properly synchronized clocks. Place an
atomic clock at each node and there is no need for any C deciding the ordering
of A and B. Alternatively you may also be able to use logical clocks like a
Lamport clock or a vector clock to establish an ordering in a distributed
system in absence of properly synchronized clocks but that depends on the use
case.
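
For reference, the logical-clock alternative mentioned here is tiny to implement. A minimal Lamport-clock sketch (a textbook illustration, not production code): each process keeps a counter, bumps it on every local event or send, and on receive takes the max of its own counter and the message's timestamp, plus one. This yields a partial "happened-before" ordering with no synchronized wall clocks at all:

```python
# Minimal Lamport clock: counters only, no wall-clock time.
class LamportClock:
    def __init__(self):
        self.time = 0

    def tick(self):                  # local event or send
        self.time += 1
        return self.time

    def receive(self, msg_time):     # merge on message receipt
        self.time = max(self.time, msg_time) + 1
        return self.time

a, b = LamportClock(), LamportClock()
t_send = a.tick()          # A has an event and sends its timestamp
b.tick()                   # B has an unrelated local event
t_recv = b.receive(t_send) # B's clock jumps past the send
print(t_send, t_recv)      # -> 1 2; the receive is ordered after the send
```

The ordering is only partial: concurrent events at different processes may get incomparable or equal counters, which is exactly the "depends on the use case" caveat above.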

~~~
EGreg
No. Relativity is actually a special case of this general principle.

For a large and complex network, events A and B which are very far apart
cannot be compared in any canonical way. The larger the network the less they
can be compared.

However, A can come before B _relative to_ some reference point C.

That’s enough to achieve most things. The point is that you _shouldn’t_ have a
global and continuous sense of time in distributed systems. Leslie Lamport has
developed many things over the years, including vector clocks, to address this.

Our PTN is just the latest in a long line of examples.

When you say that you have “no need” for C once you have synchronized clocks,
not only are you ignoring relativity but you are also ignoring the byzantine
generals problem. If Han Solo can benefit from claiming he shot first, why
would I trust his clock? If you’re implementing a video game or cryptocurrency
in a byzantine resilient way, how exactly would you have a continuous global
sense of time? Go ahead the floor is yours.

~~~
danbruc
You are mixing two things up - figuring out the time an event happened and
trusting a timestamp that someone attached to an event. Within a single
inertial reference frame there is a canonical way to order events, namely by
the time coordinate in that reference frame which is the same everywhere. And
for all practical purposes of building a distributed system on Earth, Earth is
an inertial reference frame. Ensuring that nobody cheats when assigning time
stamps to events is an entirely different problem. As said, hand an atomic
clock to everyone and you can know the order of any two events, relativity
plays no role. Making sure that nobody can or will cheat will require some
more effort, but again relativity plays no role.

~~~
EGreg
You didn’t explain how a distributed _byzantine_ system can synchronize a
global time that’s continuous.

I am saying that you won’t be able to — local time is continuous and you
should use that for comparisons.

It’s nowhere near as straightforward as you think to make global time work.
You will find that “making sure nobody can or will cheat” will require exactly
the kind of effort that will make the concept of a global timestamp
meaningless. Also, why do you need a global timestamp in the first place? Upon
further reflection you will probably realize that all your concepts of time
actually require exactly what I said: two paths to the same local reference
point, where a comparison is made.

Your ideas about global time are an illusion based on simplifying assumptions
that are not true.

~~~
danbruc
I only object to your claim that relativity - as in special or general theory of
relativity - has any bearing on the design of distributed systems on Earth. In
particular, you can order events occurring at different places if you have
properly synchronized clocks. The fact that a common solution is to use
logical clocks like Lamport clocks is because it is easier to do so than to
maintain a network of precisely synchronized clocks. But that does not mean it
can not be done with synchronized clocks, see for example Google's Spanner [1]
database.

I never addressed what else is necessary to make this work in the presence of
Byzantine faults. But I am also pretty sure that just sending information
about events from A and B to C and asking C about the ordering is not
sufficient, after all C may be faulty and give a wrong answer, change its mind
after every query, or just not respond at all. Admittedly you did not provide
many details, so I may very well totally misunderstand what you had in mind.

But, just to repeat, I am not really addressing the requirements of a
Byzantine fault tolerant system, I am just saying that the special or general
theory of relativity is not relevant for distributed systems on Earth.

[1]
[https://en.wikipedia.org/wiki/Spanner_(database)](https://en.wikipedia.org/wiki/Spanner_\(database\))

~~~
EGreg
I think you misunderstood my point.

I said that relativity is a special case of this principle that time is a
local phenomenon.

~~~
danbruc
Which is not really true; time is frame dependent, not location dependent. Just
take the twin paradox, when they start and later meet again, they are in the
same place but different amounts of time have passed for each of them. The
closest thing I can think of is that the ordering of two events is not
ambiguous if they are timelike separated while for spacelike separated events
you can find different reference frames which disagree on the order of the
events.

~~~
EGreg
What you said doesn’t disprove what I said :)

~~~
danbruc
Given that we have never actually defined what »time is a local phenomenon« is
supposed to mean, that may or may not be true. But given that relativity puts
time and space on a more equal footing than Newtonian mechanics, where you now
can turn time into space and space into time, I remain pretty doubtful that
something that can be summarized as »time is a local phenomenon« is likely to
be interesting and correct because it seems to point in the direction of not
treating time and space in a unified manner.

~~~
EGreg
Listen, space is a local phenomenon too. It is just about correlations.

A lot of the nonlocal effects in De Broglie Bohm theory become classical once
you consider close-by objects and neighborhoods. They are just super
correlated.

Why? We don’t know. But with so many correlations, you can compare way more
easily between them.

[https://www.wired.com/2016/01/quantum-links-in-time-and-space-may-form-the-universes-foundation](https://www.wired.com/2016/01/quantum-links-in-time-and-space-may-form-the-universes-foundation)

~~~
danbruc
This still does not answer what you mean with »time is a local phenomenon«.
What feature of time is local and in which sense? And I am even more skeptical
about »space is a local phenomenon«, I mean »local« means »close in space« and
therefore it seems you are saying something recursive, i.e. space has some
property within itself which doesn't really make sense, at least to me.

So unless you can define or at least explain in more detail what you mean with
»time (or space) is a local phenomenon«, we won't really get anywhere.

------
sarreph
Is it possible to add (Cryptocurrency) after ‘The Verge’ in the title please?
I thought it referred to the news outlet.

~~~
jameskegel
I was not aware that The Verge was still moving forward after Joshua Topolsky
left, and their show "On The Verge" was cancelled.

------
twog
PoW coins outside Bitcoin will never work. This will continue to happen over &
over again to every smaller PoW chain

~~~
EGreg
PoW is leader based consensus. It’s slow and can’t handle millions of
simultaneous payments. Why do new distributed ledger technologies still prefer
to use it, when there are other, far faster consensus algorithms out there?

~~~
jayd16
Probably because they're not trying to actually solve a problem. They're just
on the hype train.

More seriously, if the name of the game is efficiency then a distributed
ledger isn't the answer.

~~~
blattimwind
> More seriously, if the name of the game is efficiency then a distributed
> ledger isn't the answer.

And if you're only looking for immutability, then time-stamping is already
good enough (and available since the late 90s or so). I suppose you can use a
Merkle tree and then proceed to put "Blockchain" in your marketing materials
without technically lying.

