morsch's Hacker News comments

Seems like a hint that we should design our means of communication to resist single points of failure.

That's only one of many aspects of the whole block size debate.

And I'm not sure there's a lot of merit to this specific aspect. From what I can tell[0], the "computational limit" is primarily in increased bandwidth and storage requirements in running a full node (2.8 GB per day for a block size of 20 MB).

While that's not trivial, it hardly limits adoption to institutions like banks. 2.8 GB is about 45 minutes of 1080p video, which people seem to manage even on domestic connections. At the same time, even with the current block size, very few people (less than 1%?) currently run a full node; so maybe it's not the "computational limits" stopping people?
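The 2.8 GB/day figure follows from Bitcoin's ~10-minute block interval; a quick back-of-the-envelope check:

```python
# Back-of-the-envelope check of the 2.8 GB/day figure for 20 MB blocks.
# Bitcoin targets roughly one block every 10 minutes.
blocks_per_day = 24 * 60 // 10            # 144 blocks per day
block_size_mb = 20
daily_mb = blocks_per_day * block_size_mb  # 2880 MB
daily_gb = daily_mb / 1024
print(f"{daily_gb:.1f} GB per day")        # ~2.8 GB
```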

[0] A better, if outdated, list of pros and cons: http://bitcoin.stackexchange.com/questions/36085/what-are-th...

Well, the 2.8 GB is just to download all confirmed blocks, say a copy of the blocks added 10 days ago.

But the actual bandwidth exceeds that significantly. Both transactions that never make it into actual blocks (like bad transactions or double spends) and blocks that are orphaned are part of the bandwidth too.

More importantly, even if you were only sending and receiving the transactions that end up in the final longest blockchain, you're hopefully communicating with more than one other node, as a single peer would mostly defeat the purpose. With tens of peers, you may yourself be seeding the new blockchain data to tens of other nodes.

In fact, if you look at nodes today, the average daily bandwidth is already pretty much the 2.8 GB you mentioned for blocks 20x the current size.

So if we went 20x, you're talking about 50 GB a day, or 1,500 GB a month. And that just doesn't fly with most consumer ISPs. It's something we could certainly move towards in 2020 or 2025, though.

Now you can limit your node, of course, but that limits its utility too: with too few connections you're just receiving data and not passing it on, which actually hurts the health of the network. This node transmits only about 20% more than it receives, so it's not some kind of super node that delivers data to lots of others. There are nodes like that which already rack up 800 GB of monthly data today; this one is closer to 100 GB per month, which is already a lot just 'on the side' in countries with shitty bandwidth caps (US).

Plus, a lot of the traffic is burst traffic, so it's not like we're talking about 3 GB daily = 35 KB/s (which sounds fine) 24/7. For example, he's had days reaching 30 GB of traffic, and again this isn't some industrial node. You have to be able to deal with that as well. If you cut your node when the going gets tough on days like these, it defeats the point and means the blocks are too big to handle.
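The scaling numbers in the last two paragraphs can be sanity-checked with a rough sketch (figures taken from the comment; note the comment rounds the daily total down a bit):

```python
# Rough sanity check of the scaling argument: today's ~2.8 GB/day of
# total node traffic at 1 MB blocks, scaled 20x, plus the naive
# "spread evenly over 24 hours" sustained rate for 3 GB/day.
current_daily_gb = 2.8
scaled_daily_gb = current_daily_gb * 20      # ~56 GB/day (comment says ~50)
scaled_monthly_gb = scaled_daily_gb * 30     # ~1680 GB/month (comment says ~1500)
sustained_kb_s = 3e9 / 86400 / 1000          # 3 GB spread over one day
print(round(scaled_daily_gb), round(scaled_monthly_gb), round(sustained_kb_s))
```

The last number lands at ~35 KB/s, matching the comment's point that the average rate sounds harmless while the burst days are what actually hurt.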

So I think we can go a lot bigger with nodes, but we shouldn't downplay the traffic increases too much; 20 MB blocks, for example, aren't trivial and can't just be compared to some 1080p video (although I make similar comparisons from time to time). At some point, you have to give away autonomy if you increase blocks too much. We already see with 1 MB blocks today that lots of peeps run nodes on a VPS, so they can offload the hassle of a home setup. But this means lots of nodes are already being run by commercial providers, which are susceptible to government influence. Now I'm in no way anti-government, but the whole point of bitcoin is decentralisation: gateless, disintermediated, resistant to power, etc. Removing this leaves you with a database that can be manipulated by those with power, just like the ordinary financial system we already have.

Tricky debate this!

edit: just to add for others who are interested... If the blockchain has value, people will use it, and keeping blocks small just forces transactions off-chain, where they can themselves be controlled by various gatekeepers and intermediaries. So on both sides you're losing out. A balance is necessary; everyone agrees there. Where the balance is, they don't. And it's made difficult by the need to 'get it right' right away, as changing the protocol in 5 years becomes way harder: every day that goes by, more actors move in, making change a political process. So simply making blocks 2 MB and saying 'let's get back to it in two years when blocks are getting full again' could actually be worse than delaying it briefly now and forcing a good permanent solution. At the same time, every single day, bitcoin is treading uncharted waters; none of this has been done before in this particular context, so the foresight for a long-term solution is incredibly hard and already incredibly political.

Thanks for the link; seems like the post I was going by was underestimating the traffic quite a bit (though to be fair, it alludes to this, and it was probably my reading that was to blame).

That said, I stand by my point that the traffic ("computational") requirements hardly limit running a node to large organisations: depending on where you live, a domestic connection might not do it -- though it very well might! -- but a cheap server hosted somewhere on the "right" side of the last mile will.

> a cheap server hosted somewhere on the "right" side of the last mile will.

I think the idea is that if a large majority of the network is on Linode, then shutting down Linode would leave the majority of whoever remains able to dictate what's canon?

Why is 2MB the perfect size? Will we run into the same problems in three years? Or thirty?

Growth has continued, net real wages haven't[0]. Although this seems to be changing[1].

[0] http://www.diw.de/sixcms/detail.php?id=diw_01.c.342374.de

[1] http://www.dw.com/en/germans-enjoy-highest-real-wage-rise-in...

Also: "The FBI also reportedly sent 48,642 national security letters in 2015."

Maybe the color is supposed to reflect the result? It's blue now; if the polls were reversed it'd be red. It'd be much less confusing if they dropped the two dog icons and their labels.


They've got two scores for Telegram on their score card[0]: "Telegram" scores poorly (4 out of 7), "Telegram (secret chats)" scores perfectly (7 out of 7). The quality of the encryption algorithm itself doesn't factor into it, though Telegram gets a check for having a recent audit.

Even if the crypto was good, the cognitive load of having to decide whether you want a chat to be secret or not makes it a bad choice IMO, especially if it comes with a downside.

[0] https://www.eff.org/secure-messaging-scorecard


7/7 being a score that no practicing crypto engineer would be likely to come up with for that application.


Works like a charm. I'd love to see a more technical description, all the announcements are rather light on the details.


No, that's not what embrace-extend-extinguish is about. The worry about EEE is that they establish dominance through vertical integration, introduce incompatibilities through both incompetence (bugs) and malicious behaviour (features), which will weaken and destroy the free standard implementations.

I'm not worried though. This is a neat hack, and may be useful for some people who for whatever personal reason won't switch to Linux proper, but it will not gain anything like the dominance required to push through incompatibilities. Unix applications already deal with a heterogeneous environment, to say the least, and Winux will just be one more participant; not a particularly important one at that.


*nix servers now handle 99% of the web. Microsoft isn't going to push through breaking standards.


Per volume of the end product, coffee is worse, black tea is better.

Coffee: ~1100 liters of water per liter of coffee.

Black tea: ~270 liters of water per liter of tea.

Source: The water footprint of coffee and tea consumption in the Netherlands, http://waterfootprint.org/media/downloads/ChapagainHoekstra2...


The higher mg of caffeine per mL of coffee mitigates that a bit, probably making it 2x as bad instead of 4x as bad.
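Working that out roughly (the water figures are from the thread; the caffeine concentrations are assumed ballpark values, ~400 mg/L for brewed coffee and ~200 mg/L for black tea, not from the cited source):

```python
# Water footprint per mg of caffeine rather than per litre of drink.
# Water figures from the thread; caffeine concentrations are assumed
# ballpark values, not taken from the cited paper.
water_per_l = {"coffee": 1100, "tea": 270}       # litres of water per litre of drink
caffeine_mg_per_l = {"coffee": 400, "tea": 200}  # assumed mg caffeine per litre

per_mg = {k: water_per_l[k] / caffeine_mg_per_l[k] for k in water_per_l}
ratio_per_litre = water_per_l["coffee"] / water_per_l["tea"]  # ~4x
ratio_per_mg = per_mg["coffee"] / per_mg["tea"]               # ~2x
print(f"per litre: {ratio_per_litre:.1f}x, per mg caffeine: {ratio_per_mg:.1f}x")
```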


Really? I don't drink coffee, but do drink tea, and I'm pretty sure that people drink them for reasons other than the raw caffeine content per ml.


Yea, but a lot of people use it simply as a drug delivery vehicle. Getting n mg caffeine into your system is the goal for some folks.


Another nice comparison:

Milk: ~1000 liters of water per liter

Chocolate: ~17000 liters of water per kg

Beef: ~15000 liters of water per kg

Sheep Meat: ~10000 liters of water per kg

Pork: ~6000 liters of water per kg

Butter: ~5500 liters of water per kg

Chicken meat: ~4500 liters of water per kg

Wine: ~400 liters of water per liter

Beer: ~300 liters of water per liter

source: http://www.imeche.org/policy-and-press/reports/detail/global...

(pdf report on the right)


as if I needed more reasons to stick with beer


Well this is highly misleading. Products that are measured VERY differently are grouped together.


Just like the statistic for sweet beverages. Of course beers that include fish-derived ingredients require more or less water.

Of course wine produced in a dry climate requires a lot of water.

Producing a cow, chicken, or pig will take a lot of water; probably not much of a difference whether the animal is located in northern/southern latitudes or in some shed in an equatorial region.

I don't understand what is misleading. Data on water pollution should be of more concern than how much water something needs to thrive.

This data shouldn't influence your decision about what to consume. Data on pollution should.

If you're worried some categories are incorrect, then at least you have a lower bound there. Add the water footprint of the food that cow or pig eats and you'll get more accurate. It's a no-brainer that raising 60 billion land animals yearly takes a lot of water, but it's a silly statistic. The pollution of water that the process creates is more important and a much more relevant statistic.


Has anybody tried to store a project's metadata (issues and PRs) within the project's git repository itself? Seems like a logical step.

If you don't want special git semantics around it, you'd have to be clever about how you store it so you don't get conflicts within the metadata. E.g. a naive design just adding a markdown file per issue or for all issues will require manual merging all the time. Still, it seems doable.
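One way to sidestep the merge-conflict problem is to give every issue its own file named by a UUID, so concurrent contributors never touch the same path. A minimal sketch of the idea; the `.issues/` layout and `file_issue` helper are hypothetical, not any existing tool's format:

```python
# Hypothetical sketch: store each issue as its own file under .issues/,
# keyed by a UUID, so two contributors filing issues concurrently never
# write to the same path and git merges cleanly.
import json
import uuid
from pathlib import Path

def file_issue(repo_root, title, body):
    """Write a new issue as .issues/<uuid>.json and return its id."""
    issue_id = uuid.uuid4().hex
    issues_dir = Path(repo_root) / ".issues"
    issues_dir.mkdir(exist_ok=True)
    issue = {"id": issue_id, "title": title, "body": body, "state": "open"}
    (issues_dir / f"{issue_id}.json").write_text(json.dumps(issue, indent=2))
    return issue_id
```

The `.issues/` directory would be committed alongside the code; comments could likewise live as `.issues/<issue-id>/<comment-uuid>.json` files to stay conflict-free.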


> Has anybody tried to store a project's metadata (issues and PRs) within the project's git repository itself? Seems like a logical step.

There were a number of tentative distributed bug trackers a few years ago, they sucked and fizzled out. IIRC Fossil is an attempt at an entire distributed project management system, I don't know how it fares.

> I.e. a naive design just adding a markdown file per issue or for all issues will require manual merging all the time. Still, it seems doable.

That only works for your own personal project where you're the only user and contributor. Bug reporting by editing a markdown file (or even something actually usable like an org-mode file or an sqlite or BDB file) isn't going to scale very high and is way more effort than most bug reporters (even technical ones) will be willing to put in.

And even if your users were willing to subject themselves to that, they still need a way to send back those contributions somehow.


git-appraise (https://github.com/google/git-appraise) does this for reviews and there is a tool (https://github.com/google/git-pull-request-mirror) for importing pull requests into the git-appraise format.


