Hacker News | lwat's comments

Why

-----


If you're making transactions smaller than 1 cent, keep them off the blockchain. You're just wasting everyone's resources. Aggregate them until they're big enough to matter and THEN commit them to the blockchain.
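The aggregation idea above can be sketched in a few lines. This is a hypothetical illustration, not any real wallet's API: the class name, the `pay` method, and the 5460-satoshi cutoff (roughly the dust threshold discussed in this thread) are all illustrative.

```python
from collections import defaultdict

# Hypothetical sketch: hold sub-threshold micropayments off-chain and
# only emit an on-chain transaction once a recipient's running total
# crosses the threshold. Names and the 5460-satoshi value are illustrative.
DUST_THRESHOLD = 5460  # satoshis; roughly half a US cent at the time

class MicropaymentAggregator:
    def __init__(self, threshold=DUST_THRESHOLD):
        self.threshold = threshold
        self.pending = defaultdict(int)  # recipient -> satoshis owed

    def pay(self, recipient, satoshis):
        """Record a micropayment; return an on-chain tx once it's big enough."""
        self.pending[recipient] += satoshis
        if self.pending[recipient] >= self.threshold:
            amount = self.pending.pop(recipient)
            return {"to": recipient, "satoshis": amount}  # commit this one
        return None  # stays off-chain for now

agg = MicropaymentAggregator()
txs = [agg.pay("example-site", 1000) for _ in range(6)]
# Only the payment that pushes the total past the threshold
# produces an on-chain transaction; the first five return None.
```

Six 1000-satoshi payments produce a single 6000-satoshi transaction instead of six tiny ones, which is exactly the resource saving being argued for.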

-----


The limit will be adjusted if BTC becomes worth thousands.

-----


Yes, I see that they're adding command-line flags. I don't really think BTC can be considered so stable at this point that a run-up to $1000 and then a drop back down to the low hundreds within a day or two could be considered impossible. And if its value grows beyond even that, the fluctuations may swing even wilder in monetary terms.

I'd also point out that in much of the world, the local equivalent of $0.01 USD may not actually be a trivially dismissible amount of money.

But I'm not sure it's a bad idea; I just think they chose a very awkward value to peg it at.

-----


No, this is awesome. Only transactions smaller than about half a cent are blocked. There's no good reason to make a transaction that small on the blockchain. Satoshidice sends thousands of transactions of one satoshi each every day, and it adds gigabytes of data to millions of computers worldwide. What a waste of resources.

-----


No, it's not. When faced with a scalability problem, they decided to ban certain uses rather than fix the root cause. Bitcoin isn't going to be able to grow beyond a niche currency if Satoshi Dice's level of activity causes such large problems.

-----


I don't see any suggestions from your side.

-----


The parent didn't claim to be a proponent or supporter of bitcoin. He may not know of any improvements. He may not believe any fundamental ones are possible. That does not detract from the point.

-----


But true micropayments do need to be a lot smaller than that. The idea is that every web user could load a dollar a week into their browser and have it paid out evenly to the sites they visit.

It doesn't seem like a lot of money, but if everyone did it, I'm guessing it would probably beat AdSense.

There are hundreds of other applications as well for micropayments.

-----


You can aggregate those and pay them out when they become big enough. Bitcoin stores every transaction on every computer on the Bitcoin network. It's not suited to transactions that small; you're just wasting everyone's resources.

-----


> There's no good reason to make a transaction that small on the blockchain.

You don't have the right to say that, it should be my right to decide whether my transaction size is appropriate or not. This sounds like regulation to me.

-----


No, we do have the right, as it's our computing resources being wasted. This is not a change to the Bitcoin protocol: you can still make and include your tiny transactions; you'll just have to mine your own blocks if you want them on the chain. All this change does is give nodes the ability to set thresholds on which transactions they relay and include in the blocks they mine.
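The per-node policy described here is easy to picture as code. A hedged sketch, not the actual Bitcoin client implementation: the function name and the 5460-satoshi default are illustrative stand-ins for whatever threshold a node operator configures.

```python
# Hypothetical node-side relay policy sketch: each node decides locally
# whether to relay a transaction, based on a configurable minimum output
# value. The protocol itself is unchanged; a miner can still include
# sub-threshold transactions in its own blocks. Names are illustrative.
DEFAULT_MIN_RELAY_OUTPUT = 5460  # satoshis; assumed dust cutoff

def should_relay(tx_outputs, min_output=DEFAULT_MIN_RELAY_OUTPUT):
    """Relay only if every output meets this node's local threshold."""
    return all(value >= min_output for value in tx_outputs)

# A one-satoshi output is dropped by this node's mempool policy...
assert not should_relay([1])
# ...but an ordinary payment passes unhindered.
assert should_relay([100_000, 25_000])
```

Setting `min_output=0` would make a node relay everything, which is the point being made: this is local policy, not a consensus rule.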

-----


When your transaction size affects my bandwidth and my hard disk, it becomes my business. Essentially you are acting like a spammer: wasting a disproportionate amount of other people's computational resources for your own gain.

-----


But that's not his fault. That's Bitcoin's model's fault.

-----


Let's just have the model pay for the transactions, then?

-----


That's like saying I have no right to stop email spammers and that they should have the right to decide if their email volume is appropriate or not.

Making sub-cent transactions is a waste of MY resources because every transaction gets duplicated to everyone's copy of the blockchain. That's spam. If we don't stop this then the blockchain will become so unwieldy that it makes Bitcoin all but useless for everyone, and that's not good for anyone.

-----


The way I make sense of this is that you need fewer (slow) disk reads to get the same amount of data into RAM, so that might explain the speedup?

I agree that it sounds too good to be true though.

-----


Your read is correct. Once CPU time spent in decompression became less than disk wait time for the same data uncompressed, the reduced IO with compression started to win — sometimes massively. As powerful as processors are these days, results like these aren't impossible, or even terribly unlikely.

Consider the analogous (if simplified) case of logfile parsing, from my production syslog environment, with full query logging enabled:

  # ls -lrt
  ...
  -rw------- 1 root root  828096521 Apr 22 04:07 postgresql-query.log-20130421.gz
  -rw------- 1 root root 8817070769 Apr 22 04:09 postgresql-query.log-20130422
  # time zgrep -c duration postgresql-query.log-20130421.gz
  19130676

  real	0m43.818s
  user	0m44.060s
  sys	0m6.874s
  # time grep -c duration postgresql-query.log-20130422
  18634420

  real	4m7.008s
  user	0m9.826s
  sys	0m3.843s
EDIT: I'm not sure why time(1) is reporting more "user" time than "real" time in the compressed case.
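A rough cross-check of the figures in the listing above (the arithmetic here is mine, computed from the sizes and times shown): the compressed file is about a tenth the size, so zgrep reads far fewer bytes off disk and finishes several times sooner even after paying the decompression cost.

```python
# Back-of-envelope using the figures from the listing above: the
# compressed log is ~10.6x smaller, and the zgrep run finishes ~5.6x
# sooner despite spending CPU time on decompression.
compressed_bytes   = 828_096_521      # .gz file size from ls
uncompressed_bytes = 8_817_070_769    # plain log size from ls
zgrep_seconds = 43.818                # real time, compressed scan
grep_seconds  = 4 * 60 + 7.008        # real time, uncompressed scan

ratio = uncompressed_bytes / compressed_bytes
effective_mb_s_compressed = uncompressed_bytes / zgrep_seconds / 1e6
mb_s_uncompressed = uncompressed_bytes / grep_seconds / 1e6

print(f"compression ratio: {ratio:.1f}x")
print(f"effective scan rate, compressed:   {effective_mb_s_compressed:.0f} MB/s")
print(f"effective scan rate, uncompressed: {mb_s_uncompressed:.0f} MB/s")
```

The "effective" rate for the compressed case counts uncompressed bytes processed per wall-clock second, which is the figure that matters to the grep.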

-----


zgrep runs grep and gzip as two separate subprocesses, so if you have multiple CPUs then the entire job can accumulate more CPU time than wallclock time (so it's just showing you that you exploited some parallelism, with grep and gzip running simultaneously for part of the time).

-----


I had an original IBM PC XT (used) with a 10MB full-height (2x today's 5.25") MFM hard drive. It had about 3MB of available disk space and took, I swear, 6+ minutes to boot.

It actually ran faster compressed (Stacker) and had nearly 12MB of available space... it didn't have any problems with programs loading, surprisingly enough, which became more of an issue when moving on to a 486.

Yeah, when your storage is that slow relative to the CPU, running compression can get you impressive gains in both space and performance.

-----


You say efficient, we say awkward.

-----


Communication is about consensus. You can contend that you're right all day long, but so long as you put others off you'll be wrong.

-----


How is communication about consensus? You can put others off and still communicate "correctly".

-----


Protocols are about consensus, almost by definition. In computer protocols, we get the consensus before we start using the protocol. In social interactions, we're molding the protocol as we use it.

As for communicating "correctly", it's a matter of (mostly) definitions and circumstances whether putting people off and "correct" communication are consistent. You may have transmitted the correct information to someone's brain, but not annoying people is usually an important goal, sometimes even more important than that of transmitting the information. Maintaining someone's good opinion of you might outweigh the importance of whatever info you want to tell them.

-----


You can't communicate effectively without a protocol/language/etc, but communicating is not about consensus; it's about communicating ideas/thoughts/etc. Consensus is a part of communication, not its aim.

I agree that not offending/annoying someone is beneficial and might outweigh the message you have to communicate, but that isn't relevant to discussion about the efficiency of the protocol. If in spite of brusqueness, your point comes across, then it's effective communication.

-----


Think of the "consensus" as the embedded state within a communication.

In a feudal society, a lord might send a written missive to the King or Queen, and if they did, it would contain a ton of horribly polite boilerplate, because the consensus of the time on both sides was that anything less was disrespectful.

It is very possible that your lack of words in a given communication sends a point across that you never intended, even if the point you had in mind also made it across.

-----


Sure, but that doesn't speak to my point that consensus is a component of communication, not its primary aim.

-----


> In computer protocols, we get the consensus before we start using the protocol.

A VSRE analogue in computing: unannounced, start omitting headers in responses to HTTP requests. They are unnecessary baggage that gets in the way of the actual content of the message.

-----


Not quite. If you're using VSRE, the information contained in those headers is already implicit. I can't think of an example off the top of my head, but I'm sure there are protocols where you don't need to send loquacious headers or equivalent with every response. "VSRE" says, "go ahead and assume I know what the headers would be and skip them". If you're not expecting headers, it's not a problem if they're missing.

-----


A better analogue: start sending

  X-ACCEPT-NO-HEADERS: true

or something like it in your request, which allows the responding party to omit headers in the reply (or not).
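The opt-in negotiation being proposed can be sketched as follows. The `X-Accept-No-Headers` header is hypothetical (it exists only in this thread), and the server logic is a toy, not a real HTTP implementation:

```python
# Sketch of the opt-in negotiation described above, using the
# hypothetical X-Accept-No-Headers request header. The server only
# drops its response headers when the client explicitly said it can
# cope without them; every other client gets the normal full reply.
def build_response(request_headers, body):
    terse_ok = request_headers.get("X-Accept-No-Headers", "").lower() == "true"
    if terse_ok:
        return body  # client opted in: bare payload, no header block
    # Default: full headers, as any unsuspecting client expects.
    headers = f"HTTP/1.1 200 OK\r\nContent-Length: {len(body)}\r\n\r\n"
    return headers + body

assert build_response({"X-Accept-No-Headers": "true"}, "hi") == "hi"
assert build_response({}, "hi").startswith("HTTP/1.1 200 OK")
```

Because the short form is only ever sent to parties who asked for it, nobody's parser breaks, which is the whole point of negotiating the protocol up front.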

-----


Indeed.

Think about that carefully.

-----


Awkward would be to just go ahead and start replying in one word without trying to establish a netiquette protocol.

-----


You say awkward; I say my efficient friends and I will have faster communication and a tighter feedback loop, and will outcompete you in everything. :p

-----


Your reply is way too short to not be awkward. It should be expanded to at least two paragraphs.

-----


At least the proposed solution is better than simply replying "ACK" or "NAK".

-----


L Ron Hubbard

-----


Now that someone's identified these coins people will watch them very closely. It's gonna be very difficult for Satoshi to ever get their millions out of the system.

-----


Exactly. Anyone who gets a payment from Satoshi with any identifying info (including "meet at midnight on a bridge"), would be tempted to sell it to gangsters for at least $10M.

-----


I will happily sell identifying info about a few dozen people with a lot more net worth than Satoshi...

http://www.forbes.com/billionaires/list/

-----


You misread; it's not 63% of the market. It's about 10% of the market: 1148800 BTC.

-----


I was looking at this part:

> Note that from the 1814400 BTC awarded, 1148800 BTC has never been spent (63%). I suppose (but have not checked it yet) that these are exactly the segments that belong to the mystery entity

So I guess I'm confused: is the market bigger than the 1,814,400 that have been awarded somehow? I'm not that familiar with Bitcoin, so that's very possible, but the author seems to imply that Nakamoto has 63%.

-----


The chart stops at 2010, many more bitcoins have been mined since.

-----


Ah that makes sense, thanks. In that case I wonder how many more BTC Nakamoto may have mined since then.

-----


Anywhere but the US

-----


Or China, for that matter.

-----
