If you're making transactions smaller than 1 cent, keep them off the blockchain. You're just wasting everyone's resources. Aggregate them until they're big enough to matter and THEN commit them to the blockchain.
Yes, I see that they're adding command line flags. I don't really think BTC can be considered so stable at this point that a run up to $1000 and then a drop back down to the low hundreds within a day or two could be considered impossible. And if its value grows beyond even that, the fluctuations may swing even wilder in terms of monetary value.
I'd also point out that in much of the world, the local equivalent of $0.01 USD may actually not be a trivially dismissable amount of money.
But I'm not sure it's a bad idea; I just think they chose a very awkward value to peg it at.
No, this is awesome. Only transactions smaller than about half a cent are blocked. There's no good reason to make a transaction that small on the blockchain. SatoshiDice sends thousands of transactions of one satoshi each every day, and it adds gigabytes of data to millions of computers worldwide. What a waste of resources.
No, it's not. When faced with a scalability problem, they decided to ban certain uses rather than fix the root cause. Bitcoin isn't going to be able to grow beyond a niche currency if Satoshi Dice's level of activity causes such large problems.
You can aggregate those and pay them out when they become big enough. Bitcoin stores every transaction on every computer on the Bitcoin network. It's not suited to transactions that small; you're just wasting everyone's resources.
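A minimal off-chain sketch of that aggregation idea. Everything here is illustrative: the threshold constant, the address string, and the `PayoutAggregator` class are made up for this example, not a real Bitcoin API; a real payout service would build and broadcast an actual transaction where the flush happens.

```python
# Hypothetical sketch: accumulate sub-threshold payouts off-chain and only
# emit an on-chain transaction once a payee's balance crosses the dust limit.
DUST_THRESHOLD_SATOSHI = 5430  # illustrative value, roughly the half-cent limit discussed

class PayoutAggregator:
    def __init__(self, threshold=DUST_THRESHOLD_SATOSHI):
        self.threshold = threshold
        self.pending = {}  # payee address -> accumulated satoshis

    def credit(self, address, satoshis):
        """Record a micro-payout off-chain; return the flushed amount once due."""
        self.pending[address] = self.pending.get(address, 0) + satoshis
        if self.pending[address] >= self.threshold:
            # Caller would build ONE real on-chain transaction here,
            # instead of thousands of one-satoshi ones.
            return self.pending.pop(address)
        return None

agg = PayoutAggregator()
for _ in range(5429):
    assert agg.credit("1ExampleAddr", 1) is None  # still below threshold, stays off-chain
print(agg.credit("1ExampleAddr", 1))  # crosses threshold -> 5430
```

One on-chain transaction instead of 5,430 replicated to every node is exactly the resource saving being argued for.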
No, we do have the right, as these are our computer resources being wasted. This is not a protocol change to Bitcoin: you can still make and include your tiny transactions; you'll just have to mine your own blocks if you want them on the chain. All this change does is give nodes the ability to set thresholds on which transactions they relay and include in the blocks they mine.
When your transaction size affects my bandwidth and my hard disk, it becomes my business. Essentially you are acting like a spammer: wasting a disproportionate amount of other people's computational resources for your gain.
That's like saying I have no right to stop email spammers and that they should have the right to decide if their email volume is appropriate or not.
Making sub-cent transactions is a waste of MY resources because every transaction gets duplicated to everyone's copy of the blockchain. That's spam. If we don't stop this then the blockchain will become so unwieldy that it makes Bitcoin all but useless for everyone, and that's not good for anyone.
Your read is correct. Once CPU time spent in decompression became less than disk wait time for the same data uncompressed, the reduced IO with compression started to win — sometimes massively. As powerful as processors are these days, results like these aren't impossible, or even terribly unlikely.
Consider the analogous (if simplified) case of logfile parsing, from my production syslog environment, with full query logging enabled:
# ls -lrt
-rw------- 1 root root 828096521 Apr 22 04:07 postgresql-query.log-20130421.gz
-rw------- 1 root root 8817070769 Apr 22 04:09 postgresql-query.log-20130422
# time zgrep -c duration postgresql-query.log-20130421.gz
# time grep -c duration postgresql-query.log-20130422
EDIT: I'm not sure why time(1) is reporting more "user" time than "real" time in the compressed case.
zgrep runs grep and gzip as two separate subprocesses, so if you have multiple CPUs then the entire job can accumulate more CPU time than wallclock time (so it's just showing you that you exploited some parallelism, with grep and gzip running simultaneously for part of the time).
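A sketch of what that pipeline looks like under the hood (the sample file here is made up for illustration):

```shell
# zgrep roughly amounts to a two-process pipeline: gzip decompresses while
# grep scans its output, so on a multi-core machine the summed CPU ("user")
# time of both processes can exceed the wall-clock ("real") time of the job.
printf 'duration: 12ms\nok\nduration: 7ms\n' > sample.log
gzip -f sample.log                          # creates sample.log.gz
gzip -dc sample.log.gz | grep -c duration   # prints 2; same as: zgrep -c duration sample.log.gz
```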
I had an original IBM PC XT (used) with a 10MB full-height (twice the height of today's 5.25" drives) MFM hard drive... it had about 3MB of available disk space and took, I swear, 6+ minutes to boot.
It actually ran faster compressed (with Stacker) and had nearly 12MB of available space... it didn't have any problems with programs loading, surprisingly enough, which became more of an issue when moving on to a 486.
Yeah, when your storage is that slow relative to the CPU, compression can keep up with the disk, and you get impressive gains in both space and performance.
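A back-of-the-envelope model of when the compressed read wins: the time to read the smaller file plus the CPU time to decompress it must be less than the time to read the full file. The sizes mirror the log listing above, but the disk and decompression rates below are assumptions for illustration, not measurements, and the model ignores the overlap between reading and decompressing that a real pipeline gets.

```python
# Simplified sequential model; all rates are illustrative assumptions.
uncompressed = 8.8e9          # bytes, like the ~8.8 GB log above
ratio = 8.8e9 / 828e6         # ~10.6x, like the gzipped log above
compressed = uncompressed / ratio
disk_rate = 100e6             # assumed 100 MB/s sequential read
decomp_rate = 250e6           # assumed 250 MB/s of *output* from gzip -d

t_plain = uncompressed / disk_rate                         # read it all raw
t_gz = compressed / disk_rate + uncompressed / decomp_rate # read less, then decompress
print(f"uncompressed: {t_plain:.0f}s, compressed: {t_gz:.0f}s")
```

With these assumed rates the compressed path finishes in roughly half the time, and the gap widens as the compression ratio grows or the disk gets slower.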
Protocols are about consensus, almost by definition. In computer protocols, we get the consensus before we start using the protocol. In social interactions, we're molding the protocol as we use it.
As for communicating "correctly", it's a matter of (mostly) definitions and circumstances whether putting people off and "correct" communication are consistent. You may have transmitted the correct information to someone's brain, but not annoying people is usually an important goal, sometimes even more important than that of transmitting the information. Maintaining someone's good opinion of you might outweigh the importance of whatever info you want to tell them.
You can't communicate effectively without a protocol/language/etc, but communicating is not about consensus; it's about communicating ideas/thoughts/etc. Consensus is a part of communication, not its aim.
I agree that not offending/annoying someone is beneficial and might outweigh the message you have to communicate, but that isn't relevant to a discussion about the efficiency of the protocol. If, in spite of brusqueness, your point comes across, then it's effective communication.
Think of the "consensus" as the embedded state within a communication.
In a feudal society, a lord might send a written missive to the King or Queen, and if they did, it would contain a ton of horribly polite boilerplate, because the consensus of the time on both sides was that anything less was disrespectful.
It is very possible that your lack of words in a given communication sends a point across that you never intended, even if the point you had in mind also made it across.
Not quite. If you're using VSRE, the information contained in those headers is already implicit. I can't think of an example off the top of my head, but I'm sure there are protocols where you don't need to send loquacious headers or equivalent with every response. "VSRE" says, "go ahead and assume I know what the headers would be and skip them". If you're not expecting headers, it's not a problem if they're missing.
I was looking at this part:
>Note that from the 1814400 BTC awarded, 1148800 BTC has never been spent (63%). I suppose (but have not checked it yet) that these are exactly the segments that belong to the mystery entity
So I guess I'm confused: is the market bigger than the 1,814,400 that have been awarded somehow? I'm not that familiar with Bitcoin, so that's very possible, but the author seems to imply that Nakamoto has 63%.