
Segwit2x Bugs Explained - jwildeboer
https://bitcointechtalk.com/segwit2x-bugs-explained-8e0c286124bc
======
danra
It's impossible to explain the Segwit2x bugs while ignoring the social aspect.

The planned fork was supported by only part of the community, which would not
have been a problem if it weren't for the lack of replay protection, which
would have lost many people money had the fork been activated. The rationale
for not having replay protection was to take over the Bitcoin network as a
whole, with the lack of differentiation from the 'legacy' Bitcoin network
being seen as a feature. In other words, rich people with a lot of mining
power were planning on forcing the network into accepting their own rules
(with most people in the network, running SPV wallets, doing so unknowingly),
and if people lose money in the process, well, that's just too bad.

And that's just what touched me the most personally. There were a lot of other
things in the Segwit2x process to be appalled with, such as the anti-developer
culture, the infamous NYA closed meeting and general lack of transparency, and
more.

Because I was disgusted with that bully approach, I did not invest any serious
time reviewing the Segwit2x code changes, but instead did spend time studying,
reviewing and contributing (my very small bit) to the Bitcoin Core code. I
guess that other developers similarly had no drive to contribute to the
Segwit2x codebase. I also think it's probable that some people found the bug(s)
and did not report them, so as not to help bullies get their way.

In conclusion, the lack of review and testing, leading to the bugs, is not
just a technical issue. Doing serious review and testing of open source code
relies on support from the community, which was minimal, because very few
competent developers in the space who understood what was going on wanted to
help.

~~~
ploxiln
Segwit2x was only "anti-developer" if you only consider the "bitcoin core"
developers. There were many developers quite interested in larger block size
for many years, since before the February 2016 "Bitcoin Roundtable" in Hong
Kong. Segwit2x had over 90% of the hashpower of the bitcoin world voting for
it, at one point a couple months before it was to activate. Hashpower is
heavily tilted towards large mining pools, many of them in China, but it is
still the best measure of consensus and support the network has. That's a
major tenet of Bitcoin in the first place. (Though I wouldn't say "only miners
matter" nor "only users matter", the network really needs both to work.)

I think that the survival and modest success of Bitcoin Cash eroded the
initial overwhelming support for Segwit2X. Though if you mostly read
/r/bitcoin I'd say you have a very warped view of what happened ;)

~~~
PierreRochard
> There were many developers quite interested in larger block size for many
> years

Yet, they weren't around when it came time to make it happen.

> Segwit2x had over 90% of the hashpower of the bitcoin world voting for it

Yet, it had a deeply flawed software development process.

> it is still the best measure of consensus and support the network has

Yet the miners stated that they would only mine the fork for 12 hours. Do you
know why? Because they would have been mining at a (relative) loss. That really
undermines this measure of consensus. Maybe the best measure is the market
price of the coin.

> I think that the survival and modest success of Bitcoin Cash eroded the
> initial overwhelming support for Segwit2X.

Basically, Bitcoin Cash is what Segwit2x should've been: an honest, replay-
protected coin for people who think a money's value comes from its base layer
payment system's transactions per second.

~~~
ploxiln
> Yet, they weren't around when it came time to make it happen.

Bitcoin XT had a mechanism for larger blocks in mid 2015. There were multiple
BIPs and implementations for larger blocks besides that in 2016. It didn't
matter much because of the stickiness of the original/default bitcoin
implementation which is controlled by developers employed by blockstream,
which was always intended to profit off of "sidechains" enabled by segwit and
necessitated by small block sizes. (For anyone less familiar, keep in mind the
first versions of the original implementation were written by Satoshi and then
maintained by Gavin Andresen, one of the developers now on the "outside".)

Segwit2X was a compromise, arrived at through much effort at the Hong Kong
meeting in early 2016 and again at the New York meeting in early 2017. It does
appear that any compromise was futile and hopeless.

~~~
chabes
What developers were included in the NYA? Wasn't it just a bunch of suits?
Like, only businesses and miners?

------
zkomp
"Essentially, even one or two weak reviews in a chain of reviews can break the
entire consensus system with a catastrophic bug. Hopefully, this can be an
object lesson in making sure critical changes are reviewed very thoroughly.
Stay safe and go thank the developers that do the hard work of not just
coding, but reviewing."

Amen to that. Doing a proper review for any software is so hard, so _not_ fun
and often misunderstood and unappreciated (by management).

And then when shtf you also get the blame for your "weak review".

~~~
azernik
Also, _please_, keep your PRs small and fixed in scope. This particular change
snuck in on a large PR, to which it was added only fairly late in the review
process.

~~~
jacquesm
There is a very simple solution to that: large pull requests -> automatic
fail.

On one project I worked on, some guy presented me with his masterpiece: a
ridiculously large refactoring of the entire code base plus a clean-up of old
code that was no longer used. He was really pissed off when I categorically
rejected the request and asked him to break it up into manageable pieces.

Quite sad, because he had obviously put a lot of effort into it, but he did
not keep in mind that it's one thing to write such a change and quite another
to actually review it.

When coding keep in mind the job of the people coming after you: review, QA,
eventual long term maintainers and so on.

~~~
timthelion
How are you supposed to do a real refactor then?

~~~
pacaro
When a refactor cannot be done in an iterative fashion — there are some
changes that are just too fundamental to be done in small pieces — then one
approach is to write a review guide, document what has changed and why,
indicate to reviewers where the pain points are.

One argument against gradual refactors is that they are often left half done
when enthusiasm runs out. I've seen MLOC-sized code bases with multiple layers
of half-done refactors and migrations.

~~~
mannykannot
That might be an argument against gradual refactors, but it is not an argument
for doing large refactors in one pass, as it does not solve the underlying
problem of running out of enthusiasm (or time, for that matter).

~~~
pacaro
Agree for sure. Especially management enthusiasm

------
delta1
For a change of this magnitude, wouldn't you expect some tests that actually
verify these assumptions? Or am I missing something in this PR? [1]

[1]
[https://github.com/btc1/bitcoin/pull/11/files](https://github.com/btc1/bitcoin/pull/11/files)

------
brndnmtthws
The 2x fork never really had any legitimacy. The New York agreement was
supposedly signed by only a small number of people, which is not the way
Bitcoin is governed. The resistance from the community and subsequent fallout
shows the resilience of Bitcoin itself, and how well it resists manipulation
and FUD.

~~~
stale2002
Well the biggest snag in the plan was that the hardcore big blockers had
already moved over to Bitcoin Cash, so we lost our biggest support group early
on.

Now, after Segwit2x has failed and proved unlikely to succeed any time in the
next X years, we Segwit2x moderates have joined the Bitcoin Cash people (and
some have moved to Ethereum).

That community is much stronger and we are able to survive because it was a
hardfork.

~~~
fourstar
Why not go to LTC? (Seems) like a friendlier community.

~~~
stale2002
Litecoin?

Litecoin has zero community of note. It is mostly just speculators, and
people who suck up to the bitcoin Core team.

There frankly isn't even a litecoin vision. Like what do litecoin supporters
even believe? What even IS a litecoin supporter?

I go to a bunch of in-person cryptocurrency meetups and talk to people. And I can't
think of a single time someone has said "oh, I am really excited about LTC" or
similar.

Nobody is going out to coffee shops and saying "hey, do you accept litecoin?".
Nobody is hosting litecoin meetups or tech talks or anything at all.

My biggest problem with litecoin, though, is that it is an unthreatening, PC
coin that has no intention of shaking up the status quo, moving the needle, or
doing anything groundbreaking that might upset the powers that be in the
crypto space.

If you aren't making anyone mad, then you probably aren't doing anything
interesting or important. And I can tell you that nobody is 'mad' at litecoin.

------
milansuk
They should start using valgrind! I just ran a few of the binaries and got a
lot of output like:

"Use of uninitialised value of size 8"

"Conditional jump or move depends on uninitialised value(s)"

"Syscall param writev(vector[...]) points to uninitialised byte(s)"

It also fails to free some memory. Valgrind is not a silver bullet, but it
helps a lot. Bitcoin's market capitalization is around $140 billion, and these
kinds of bugs should not be there.

~~~
pikchurn
Bitcoin developers do use valgrind. I don't know about B2X, but the Bitcoin
Core guys do.

------
gruez
Why wasn't this caught in testnet? Did they not do any tests?

~~~
xorcist
Their testnet didn't fork properly either, though for a different reason, I
believe. It was thought to be because they had more mining power than
anticipated in their testnet, which was discussed on the mailing list as an
attack on testing.

One reason why testing wasn't thorough enough was that it was underspecified.
There was no public discussion on how this was supposed to work. The
specification was whatever the maintainer decided to merge, which changed
several times during the software's lifetime.

Most further discussion or counter proposals, including perhaps the most
controversial one which was replay protection, were met with "that wasn't what
the signees agreed to" and that anything going beyond that was off the table.
But what the signees agreed to was always very unclear, as the agreement only
described that a "fork" was to be activated and that something was supposed to
be 2 MB in this fork. What was supposed to be 2 MB was not specified, let
alone how deployment was supposed to work, nor the protocols for signalling,
fork activation, and block weight. And the latter things are
probably what reviewers would have concerned themselves with had this ever
been a real proposal.

~~~
stale2002
What "2MB was supposed to be" was absolutely obvious to everyone who actually
signed the document.

It was supposed to be SegWit plus a doubling of the base block size via a
hardfork (thus the name Segwit2x).

The only people who were "confused" about what it was were the smaller block
trolls who didn't even sign or support the document in the first place,
therefore their opinion doesn't matter. The only opinions that mattered were
of those who actually supported the NYA and segwit2X.

Some other details, such as replay protection, were obvious sabotage efforts
that were also proposed by the small-blocker trolls. None of the NYA people
were arguing for that, and once again, they were the only opinions that
mattered.

The other details, such as deployment date, signaling method, etc. were not
specified, though, and I agree that this came back to bite everyone in the
ass, as proven by the fact that it failed.

~~~
xorcist
No, far from it, as evidenced by the opinions of the undersigners on where to
take the project from the start. And while labelling dissenting opinions
trolls may make for convenient discourse, it does not help with understanding
the underlying engineering.

Note that the most common reason among the former undersigners for withdrawing
support was the lack of replay protection (which only underscores the lack of
consensus around some fundamental design decisions among the signers).

To anyone familiar with post-segregated-witness Bitcoin, an increase in base
block size could be done in a number of ways. The weights for data structures
could be changed, but so could the weight limit. What would be reasonable, if
backwards compatibility were no concern, would be to use the same multiplier
for all types of signatures, thereby removing the so-called segwit discount,
but this turned out not to be the design chosen.

Like most people interested in Bitcoin, I only followed the NYA at a distance
and have no more information than anyone else in this space, but I do know
some people working with a company that took part in the agreement. I do not
think there was some secret cabal behind the agreement, as some people seem to
believe, but I also think that there were conflicting goals involved.

------
emmelaich
> Only one person seems to have approved the changes (opetruzel)

... and that's opetruzel's (almost) only contribution to anything on GitHub in
his/her two-year history.

Isn't that odd?

~~~
CydeWeys
Wow, good catch. There are plenty of malicious actors in the cryptocurrency
space, so you should definitely take reviews from not-yet-established
contributors with a huge grain of salt. A change of this magnitude should have
required review from several well-known contributors.

We use a better pattern at work: on our project, we require approval from
members within the team for a commit to go in, and if there's one person with
the most experience in a given area, or even just a strong opinion, their
approval is required as well. There's a big anti-pattern I catch myself
tempted by occasionally: sending a commit out for review by more
junior/inexperienced team members, bypassing the more experienced reviewer who
you know will have lots of input you might not necessarily want. That's what
Jeff Garzik did here; by the end of it he didn't actually want a review, just
a rubber stamp, and so any stamp would do. Thus the entire point of doing
reviews was bypassed. It's particularly egregious that he never even bothered
to write tests as requested by another reviewer.

------
jwildeboer
From the article (and my main reason for posting):

“Reviewing and testing consensus changes is really, really hard. [...]
Essentially, even one or two weak reviews in a chain of reviews can break the
entire consensus system with a catastrophic bug.”

~~~
jwildeboer
Amusing. My post makes it to the top, but my comment explaining why I posted
gets downvoted. :)

~~~
Dylan16807
Not surprising! You didn't elaborate on it, you just quoted it. People already
read the article because you posted it. Rereading your favorite sentence isn't
particularly helpful at that point.

------
davedx
Code review is hard to do right.

Use multiple reviewers for critical code (treat it like proofreading an email
you are about to send to millions of customers).

Write automated tests if you can, including for regressions and edge cases,
but also to assert your code works the way you’re saying it will.

Finally, check the code out locally if you’re a reviewer and verify the
changes yourself, don’t just read the diffs.

------
altoz
If you want to hear this article read out loud, this podcast has this article
along with some commentary:

[https://soundcloud.com/noded-bitcoin-podcast/noded-020](https://soundcloud.com/noded-bitcoin-podcast/noded-020)

------
zbentley
Fascinating. I am not super familiar with the verification of consensus
systems like this one, but have done some work testing resilience and
consistency guarantees of distributed data stores. Would something like Jepsen
[1] be useful here? Could it be adapted to verify contracts like those of BTC
with similar modifications to adapting it to verify, say, a new distributed
database?

[1] [https://github.com/jepsen-io/jepsen](https://github.com/jepsen-io/jepsen)

------
ineedasername
And this will be cited for years as a case in point for why financial
institutions should be skeptical of treating cryptocurrencies as a true piece
of payment or currency infrastructure.

At the current level of use, this issue is a side note. At the level of use
envisioned by crypto promoters, it would be a catastrophic, economy-crippling
disaster with impacts far beyond a single institution. As infrastructure,
this sort of thing is must-work, cannot-fail technology on the order of
nuclear reactors.

I'm not saying cryptocurrencies aren't in our future at that scale. I don't
know if they are or not. I'm saying we have a much, much longer way to go, on
the order of a decade, before they have time to mature and then prove that
maturity. Until then, they're a speculator sideshow. Heck, with so much mining
capacity in China, they exist in significant part on the sufferance of the
Chinese government not feeling too threatened by the level of social
disruption they might cause. If that threshold is crossed, significant mining
capacity, and the transaction speed that goes with it, is a few great-firewall
configs away from oblivion. Yes, mining difficulty will adjust to alleviate
the pressure, but only after disaster-level, life-savings-wipeout levels of
financial fallout ensue.

~~~
jstanley
This is a bug in a fork of bitcoin.

It shows how bad things can be if you don't use development practices that are
appropriate for the project, but it doesn't show anything at all about how
_actual_ bitcoin code is developed. This is about a fork (that never ended up
happening).

~~~
Dylan16807
_actual_ bitcoin is based on network consensus. And Segwit2x had 80-95%
consensus.

~~~
jstanley
No it didn't. It had 80-95% consensus _among miners_ , maybe, but that's not
the same.

~~~
Dylan16807
There's a little room for difference, but if something stays at 95% of miner
support then everyone else is almost certainly going to be dragged along for
the ride.

~~~
jstanley
I think the opposite is true: if 95% of the community wants to use something,
miners will be dragged along as mining anything else would be unprofitable.

~~~
biafra
There is no way to determine what 95% of the community wants for the future of
Bitcoin. There is no community voting in Bitcoin. Only miner signalling.
People can stop using Bitcoin after a change but there is no way to get their
opinion in advance.

~~~
exit
> There is no community voting in Bitcoin. Only miner signalling.

no, there is the open market. 2x futures traded well below btc's value.

------
Lerc
There's a lot of money being put into startups for bitcoin based businesses.
Perhaps this may incentivise some of the larger ones to hire a few staff to do
reviews. It would be embarrassing to have their investment go up in a puff of
smoke.

------
m3kw9
When you see one cockroach, there are probably more.

------
timthelion
"The pull request has 221 comments, most of which are arguing over the
definition of 2MB blocks. "

For me, as a young adult, the most important part of growing up has been
learning to not engage in bike shedding. It is hard not to bike shed. When
design questions come up, and I see one thing as being WRONG, it is _really_
hard not to become myopic about it. But this is a great example of how bike
shedding can be harmful.

~~~
zilchers
It obviously depends on who you ask, this is an incredibly politicized issue,
but core developers would argue blocksize is not bike shedding, but rather an
important defense against centralization and censorship. I think the stat I
heard is that at 1 MB, and with some interesting distribution topology, the
current speed of block propagation across the majority of the network is
250ms. So, this is obviously where things get tricky, one person’s bike
shedding is another’s critical requirement.

~~~
bufferoverflow
And where did this 250ms come from? Seems arbitrary.

~~~
zilchers
I believe it came from this talk:

[https://www.youtube.com/watch?v=nSRoEeqYtJA](https://www.youtube.com/watch?v=nSRoEeqYtJA)

And I think this is what Greg's referring to:

[http://bitcoinfibre.org/](http://bitcoinfibre.org/)

~~~
smokeyj
So all scaling progress is stopped because of a youtube video? Yikes.

There should be a law of bike shedding, where you don't realize you're bike
shedding because _omg it 's the most important thing in the world_.

------
knowThySelfx
Time for Hashgraph: [https://hashgraph.com/](https://hashgraph.com/)

