Hacker News

1.9 MByte in size / 6985 transactions.

This is about twice as much as is possible in a block of the old Bitcoin chain.

This will become super interesting.



Yeah, if the fees are lower and the network is less congested I wonder if legacy Bitcoin could get overtaken for small transactions?


If so, that would be a historic victory of pragmatism.

A lot of popular engineers in powerful positions have been working for years on how to scale Bitcoin.

Now some people said "Fuck that! We will just increase this integer from 1 to 8".
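The "integer" in question is the consensus block-size cap. As a minimal sketch (hypothetical names; Bitcoin Core's actual check is in C++ and considerably more involved), the rule change amounts to:

```python
# Hedged sketch of the block-size consensus rule. Names are illustrative,
# not Bitcoin Core's real identifiers.

ONE_MEGABYTE = 1_000_000

MAX_BLOCK_SIZE_BTC = 1 * ONE_MEGABYTE  # legacy Bitcoin cap
MAX_BLOCK_SIZE_BCH = 8 * ONE_MEGABYTE  # Bitcoin Cash raised the constant to 8

def block_size_ok(serialized_block: bytes, max_size: int) -> bool:
    """Reject any block whose serialized size exceeds the consensus limit."""
    return len(serialized_block) <= max_size
```

Under these assumed names, the 1.9 MB block from the article passes the new rule but would have been rejected under the old one.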

Reminds me of how Linus Torvalds created a simple monolithic kernel while GNU & Co. were working on sophisticated microkernels. Now, 26 years later, everything from servers to smartphones runs on Linux, and GNU Hurd is still considered experimental.


And here I thought Linux won out because BSD was wrapped up in a lawsuit.

http://thevarguy.com/open-source-application-software-compan...


I've heard that's one of the big reasons Linux succeeded over 386BSD (?), but GP is right that GNU could have built their own OS a few times over in the time Linux has been around, and they've barely achieved anything beyond starting over at least once with a new kernel.


GNU has made an OS, and it is frequently used in conjunction with Linux. HURD, on the other hand, is of lower priority according to Stallman, because there is already a major copyleft kernel; the pressure of necessity is much lower, leading developers to want to spend more time on other projects.


GNU is not an operating system, but a collection of mostly commandline tools.

If you look at Linux distributions, GNU is just a relatively small part when measured in lines of code. And of that, most is GCC and GDB:

http://pedrocr.pt/text/how-much-gnu-in-gnu-linux/


GNU is often used with Linux in order to comprise a full OS as defined by POSIX. It so happens that GNU can also be used by itself to do the same thing (by using GNU HURD and GNU MACH).

>GNU is not an operating system

Could you please elaborate on this position?

>but a collection of mostly commandline tools.

"Mostly" being the keyword: it also contains multiple implementations of different programming languages along with their standard libraries, a kernel, and a package manager. Among the command-line tools, it includes most (if not all) of the tools that are required by POSIX.

> If you look at Linux distributions, GNU is just a relatively small part if you look at lines of code

I am unsure how this is relevant. In fact, I would argue that this is to be expected, especially when considering that the link checked everything in the main repository.


That's like saying busybox is an operating system, because it helps Linux to comprise a full OS as defined by POSIX

> Could you please elaborate on this position?

GNU is not an operating system, but some small projects that are used as minor components of Linux distributions

> GNU HURD and GNU MACH

Nobody uses them. They are as relevant as the Windows Services for UNIX


> That's like saying busybox is an operating system

I think that I will have to disagree with that. Instead I would argue that it is actually like saying that Busybox + Linux is an operating system as defined by POSIX or like saying that if Busybox had its own kernel it would be its own operating system as defined by POSIX.

> GNU is not an operating system, but some small projects that are used as minor components of Linux distributions

In my previous post I mentioned some of the components that allow GNU to be used as a modern and POSIX-compatible OS. It just so happens that most GNU distributions use Linux as their kernel instead of GNU Mach.

> as minor components of Linux distributions

I think that calling them "minor components" is quite a bold claim.

> Nobody uses them

Does not change the fact that they are both part of the GNU project. Nor does it change the fact that one can use them in order to run the GNU operating system without 3rd party software.


Pretty sure busybox doesn't implement enough to be considered POSIX.

Hell, neither does GNU/Linux for that matter (it's not technically POSIX) but busybox is really way too minimalistic to be considered an OS.


Ultimately, the legal drama did not undercut programmers' ability to use or redistribute BSD. However, it did stunt adoption of the operating system by creating doubts about BSD's legal future.

In the time frame they are talking about, 1993-1994, almost nobody was using Linux commercially (except the distros selling it). Even after that, Linux was fairly immature. If BSD was going to be a real contender, it still had a sizable lead. That it didn't win in the end leads me to believe other factors were more important, such as the license (which the article you linked also notes).

The BSD license is more permissive and appealing to a lot of organizations, but the GPL's requirement to give back ultimately led to a virtuous cycle where companies moved portions of their code in-kernel, because out-of-kernel code is more problematic if it's also not proprietary.


Except nobody was "investing" (speculating) in GNU Hurd. It should give a lot of people pause that Bitcoin can just suddenly split like this. What's to stop it from being split again and again?


It's not so sudden, it's been a continuous debate for years. It's likely that there will be future splits as well. The split is only successful if enough people start using it. It involves a lot of trust, but no more than one has had to have from the beginning of bitcoin.

What should give us pause about that?


The split itself is not damaging. It's as if 100 bars of your gold suddenly became 100 bars of gold plus 100 bars of gold cash. The original gold marketplace is only affected to the extent that it is replaced by the gold cash marketplace. So it doesn't matter how many times the coin gets split in this fashion.


This analogy works only to a limited degree: gold has value in itself; you can use it to make jewelry, an electrical conductor, or a doorstop, among other things. A bitcoin has no inherent value. If nobody is interested in Bitcoin (or Bitcoin Cash) anymore, it's just a few bits on a disk which can be overwritten.


Simple: the cost of pulling it off.

Even if I had a million dollars in funding, that would buy me mining power for only so long. Meanwhile, if I change the rules too much, no one would even attempt to join my fork.

Even if I had another million dollars to pay developers to build services on my new fork, there'd be no users and no economic activity. The fork would die.

There SHOULD be a lot of hard forks. Why? Because they're voluntary and most will fail without doing harm (they require consensus to matter). The good ones will survive on merit and act as protocol upgrades.


With the same success you could have forked Bitcoin with the same block size, or just a minor increase.

BCC is currently uncongested only because of the small number of transactions happening there.


No, Litecoin already exists.


The killer feature is having the same UTXO history as legacy Bitcoin. That is a genius way of on-boarding a huge array of users without having to worry about a premine/crowdsale considering those are ridiculously overcrowded at the moment.


The other genius part was doing this today, when the small-blockers and SegWit proponents had already marketed this day for their supposed "UASF" - the anti-miner consensus-by-number-of-nodes change.

Had this fork been done months before, not many would have noticed or cared. Well played.


Unfortunately the following 3 blocks were tiny in comparison, whether due to a malicious miner or lack of transactions is unclear.


Lack of transactions. The mempool is almost empty: https://jochen-hoenicke.de/queue/uahf/#2h


I don't think increasing the block size is an obvious win. I've seen others say it won't in fact do much to increase scalability, and there are also these arguments against it:

https://www.reddit.com/r/Bitcoin/comments/5p9iv8/arguments_a...

https://en.bitcoin.it/wiki/Block_size_limit_controversy


It's been an obvious win for a long time. If Chinese miners want to delay propagation, they can just do that.

The other complaints about expensive nodes are unfounded. Just downloading existing blocks is still very feasible at 8MB even with data caps.

This could have been prevented by going to 2 or 4 megs years back, avoiding the stupid arguments and providing real solid experimental data in the process. It'd have alleviated some of the congestion, too.
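A back-of-the-envelope check of the bandwidth claim, assuming worst-case 8 MB blocks every ~10 minutes (illustrative arithmetic, not measured figures):

```python
# Rough bandwidth cost of following the chain at an 8 MB block cap,
# assuming every block is full and one block arrives per ~10 minutes.

BLOCK_SIZE_MB = 8
BLOCKS_PER_DAY = 24 * 60 // 10  # 144 blocks per day

daily_mb = BLOCK_SIZE_MB * BLOCKS_PER_DAY      # 1152 MB/day
monthly_gb = daily_mb * 30 / 1000              # ~34.6 GB/month

print(daily_mb, round(monthly_gb, 1))
```

On those assumptions, worst-case block download is roughly 1.2 GB/day, or about 35 GB/month, which fits under most residential data caps.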


But SegWit did go to 4 MB (worst case), except in a backwards-compatible way, and it removes the current unfortunate bias whereby creating new outputs is cheaper for a transaction than consuming old outputs, even though the latter is better for the network as a whole.



