
Having been introduced to cryptocurrencies via the technical route, I'd never really been exposed to the brand of hype in these Andreas Antonopoulos videos before. I guess I've never watched a televangelist either, but I imagine this would be the equivalent.

This piece of bullshit in particular stood out to me:

> Where were you when libor was fixed? Where were you when gold markets were fixed? When high-frequency traders used front-running...

It's odd that he uses these examples to support the "trustless" blockchain architecture, because blockchain does nothing to improve any of these. The opposite is true: it makes markets far easier to manipulate, because transactions are anonymous and censorship-proof, and it's impossible to recover funds from bad actors.

Not having to trust a central authority for the ledger does nothing to prevent collusion or frontrunning in markets. From what I've seen, cryptocurrency markets are constantly being manipulated in ways that are both unethical and illegal.

> Bitcoin vs Ethereum is like Sharks vs Lions...

He claims that Bitcoin specializes in ways that Ethereum does not, but this isn't true: effectively, Bitcoin is a subset of Ethereum. He claims that Ethereum "scales 10x worse" than Bitcoin, but I don't know where he got that number: the reality is more nuanced, but by any practical measure, Ethereum is already better-equipped to scale and the protocol is actively evolving (as opposed to Bitcoin's, which is immutable as long as its community remains as toxic as it currently is).

If he means the size of the full blockchain, in practice Ethereum implements compression techniques that make it much more compact--the Bitcoin blockchain is currently at about 100 GB, while the uncompressed "archival" version of Ethereum is at 300 GB but the standard compressed version only requires about 15 GB.

In terms of transaction throughput, Bitcoin has a stupid hard limit of 1 MB of data per 10 minutes, so Ethereum far outscales it in that regard. If you hear the term "lightning network" as a counterargument, you can ignore it: the lightning network doesn't yet exist, and when it does, it will have several practical problems, including that it requires on-chain scaling to work[1].

[1] http://cowpig.github.io/bitcoin/cryptocurrency/2017/06/24/Se...
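
A back-of-the-envelope Python sketch of what that 1 MB / 10 min cap implies (the ~250-byte average transaction size is my assumption, not a protocol constant):

    BLOCK_SIZE_BYTES = 1_000_000   # the 1 MB block size cap
    BLOCK_INTERVAL_S = 600         # one block every ~10 minutes
    AVG_TX_BYTES = 250             # assumed average transaction size

    tx_per_block = BLOCK_SIZE_BYTES // AVG_TX_BYTES       # 4000
    tx_per_sec = tx_per_block / BLOCK_INTERVAL_S          # ~6.7 tx/s
    gb_per_year = BLOCK_SIZE_BYTES * 6 * 24 * 365 / 1e9   # ~52.6 GB/year max
    print(tx_per_sec, gb_per_year)

Roughly 7 transactions per second for the whole world, and a worst-case chain growth of about 53 GB per year.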




> This piece of bullshit in particular stood out to me:

> Where were you when libor was fixed? Where were you when gold markets were fixed? When high-frequency traders used front-running...

> It's odd that he uses these examples to support the "trustless" blockchain architecture, because blockchain does nothing to improve any of these. The opposite is true: it makes markets far easier to manipulate, because transactions are anonymous and censorship-proof, and it's impossible to recover funds from bad actors.

It’s deeply frustrating hearing someone who is ostensibly an evangelist say these things. It’s actually worse than what you’re saying - not only does he demonstrate a lack of understanding about a blockchain’s market capabilities, he evidences a basic lack of understanding about the market.

For example, I don’t know what it’s going to take for people to understand that high-frequency trading isn’t front-running. It’s frightening to me that a “thought leader” on stage can mention something so incorrect in passing and the audience will mostly just nod their heads and internalize it. If this person can’t be bothered to understand even the basic, cursory characteristics of his own examples, how can I trust anything else he tells me that I can’t immediately verify?

It’s like being educated about a subject by someone who reads only the article headlines and top comments on the front page of reddit. When I hear or read a claim that is 1) confident, 2) definitive, and 3) utterly incorrect, I simply disengage from that person. It’s one thing to be ignorant but willing to be educated. It’s another thing entirely to assume a mantle of authority and promulgate misinformation. It signals to me that the person is either incapable of critical thinking or unwilling to use it.


While I agree with the gist of your comment (and would add that I think HFT is great for the economy, as it boosts the liquidity of capital markets), some HFT does involve front-running in the sense that financial institutions invest in infrastructure to reduce latency for their HFT traders in order to trade on information not widely available to the public.


> some HFT does involve front-running in the sense that financial institutions invest in infrastructure to reduce latency for their HFT traders in order to trade on information not widely available to the public

...which is not front-running, nor is it illegal. This is exactly what I'm talking about.

These words have a very specific legal definition - this is not a case of "I wish people still used the word 'hacker' like they did in the 80s!". There is no vernacular drift; ascribing the term "front-running" to high-frequency trading is virtually always incorrect. Its continued use muddles any attempt at legitimate discussion of the subject, because there are several groups of people involved who:

1. understand both HFT and front-running, and do not use the latter to describe the former,

2. understand neither HFT nor front-running, and conflate the two liberally,

3. understand HFT but not front-running, and use the latter to describe the former,

4. understand front-running but not HFT, and "borrow" the former term to describe the latter (inappropriately),

...etc. Throw in ideological bias and you get a debate in which everyone talks past each other and no one learns anything.


I think it's pretty fair to call HFT strategies that rely on low-latency connections to gain a competitive advantage "front-running".

It's definitely front-running in the commonly-understood sense of the word, and sometimes it's specifically front-running in the sense you're talking about: e.g., a large company puts in large buy orders on several exchanges, and a low-latency HFT firm sees one of them before the others hit the market and puts its own buys in first.

edit: This theoretically _will_ happen even without any malicious actors, btw: HFT systems use machine learning, and their inputs include recent trades. Recent trades include the large buys from said bank, which past history suggests will push the price up. The HFT puts its buy in first. That's front-running.
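
To make the sequence concrete, here's a toy Python timeline of the scenario I'm describing (all actors, venues, and latencies invented):

    # Toy timeline: a big buy lands on exchange A, and a colocated trader
    # reaches exchange B before the rest of the order does.
    events = [
        (0.0000, "investor", "A", "large buy order arrives"),
        (0.0001, "hft",      "B", "reactive buy placed ahead"),
        (0.0010, "investor", "B", "rest of the large order arrives"),
    ]
    for t, actor, venue, action in sorted(events):
        print(f"t={t * 1000:.2f}ms  {actor:<8} @ {venue}: {action}")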


> I think it's pretty fair to call HFT strategies that rely on low-latency connections to gain a competitive advantage "front-running".

I'm not sure what you mean by "fair", but no (again), it's not front-running.

> It's definitely front-running in the commonly-understood sense of the word

No, it's not. At what point do we get to come up with arbitrary definitions simply because many people misunderstand them? Would you like "front-running" to become as meaningless a term as "literally" is for emphasis?

> e.g., a large company puts in large buy orders on several exchanges, and a low-latency HFT firm sees one of them before the others hit the market and puts its own buys in first.

This is not how latency arbitrage works. Exchanges have a fiduciary duty to you as your trading intermediary. They may sell your order flow to other parties, such as HFT firms, but they are still liable for how that information is used. High-frequency traders are not capable of seeing your order before it arrives at an exchange. While they will have order-flow visibility and buy/sell positions at multiple exchanges, they must also abide by the NBBO (National Best Bid and Offer) price, which means they cannot cross the spread. You cannot bid above the best offer and skip ahead of existing orders in the book.

They are free to alter their prices in reaction to orders on different exchanges, but this is designed to improve accuracy, not to target any particular trade. This also does not constitute front-running, in either the real definition or the imaginary one where they can magically skip ahead of you.
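
For anyone unfamiliar with the NBBO, here's a rough Python sketch of the constraint (toy venues and quotes, nothing like a real market-data feed):

    quotes = {
        "VENUE1": (99.98, 100.02),   # (best bid, best offer)
        "VENUE2": (99.99, 100.01),
        "VENUE3": (99.97, 100.03),
    }
    nbb = max(bid for bid, _ in quotes.values())   # national best bid: 99.99
    nbo = min(ask for _, ask in quotes.values())   # national best offer: 100.01

    def crosses_spread(side, price):
        # A buy priced above the NBO (or a sell below the NBB) would cross.
        return price > nbo if side == "buy" else price < nbb

    print(nbb, nbo, crosses_spread("buy", 100.05))   # 99.99 100.01 True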

> This theoretically _will_ happen even without any malicious actors, btw: HFT systems use machine learning, and their inputs include recent trades. Recent trades include the large buys from said bank, which past history suggests will push the price up. The HFT puts its buy in first. That's front-running.

I don't understand what you're getting at here. This is not front-running.

This thread is becoming Kafkaesque... do you have a problem with basic information asymmetry in the market?


I am not sure how Franz Kafka would define it, but the NASDAQ[1] defines front-running as:

> Entering into an equity trade, options or futures contracts with advance knowledge of a block transaction that will influence the price of the underlying security to capitalize on the trade[...]

Can you explain how the practice I described does not fit this definition? Or provide a definition that better suits your position?

[1] http://www.nasdaq.com/investing/glossary/f/front-running


Sure, that definition is fine.

> Entering into an equity trade, options or futures contracts with advance knowledge of a block transaction that will influence the price of the underlying security to capitalize on the trade. This practice is expressly forbidden by the SEC. Traders are not allowed to act on nonpublic information to trade ahead of customers lacking that knowledge.

The key terms here are “advance” and “nonpublic.” “Advance knowledge” and “nonpublic information” refer to information that other market participants could not have had without breaking a confidentiality agreement or a fiduciary duty. It does not refer to information which is merely difficult to find nor does it refer to information which is public but unevenly distributed. HFT firms cannot see your orders before the exchange receives them and executes them. They are further not obligated to a confidential or fiduciary duty with respect to your interactions with the exchange once the orders are executed, and the order flow is technically “public” once broadcasted by the exchange.

HFT firms are very fast, but they are fundamentally operating by the same processes as other market participants. They purchase order flow, use colocation and write extremely low latency bespoke trading software, but they are not intrinsically doing something that bypasses the normal operating procedure of the markets.

Information asymmetry is not illegal if you’ve “earned” it, in the same way that you can “insider trade” if you come upon “secret” information without breaking a confidentiality agreement or fiduciary duty. There are legitimate arguments that can be made against HFT, but starting off with “front-running” is absolutely not one of them. You need to levy an argument that the same processes of information asymmetry break down in ways that are bad for overall market efficiency and price discovery at high speed, not that HFT firms are doing something other than “information asymmetry, fast.” And this is a hard argument to make in full view of the liquidity offered to the market by automated market making algorithms, i.e. HFT.


Your argument is that latency-based front-running isn't front-running because the front-runners "earned it" via whatever system they built to front-run the trades.

> HFT firms are very fast, but they are fundamentally operating by the same processes

This is the key part I disagree with. A lot of firms set up shop right next to exchanges and build fiber-optic connections to them, so that no one without such a setup can trade on the same information as quickly.

I don't see how you could interpret that as "good" for anyone other than the HFT firms who do it best. It's pretty clearly this:

>bad for overall market efficiency


> Your argument is that latency-based front-running isn't front-running because the front-runners "earned it" via whatever system they built to front-run the trades.

It’s not front-running.

> This is the key part I disagree with.

You’re not just disagreeing, you’re incorrect by definition. I explained why in my last comment. It’s not front-running to gain an advantage through capital expenditure. Capital expenditure (like colocation) is not fundamentally an advantage other market participants don’t have. It doesn’t matter if you think it is, it’s not. Insider trading is an example of an advantage that is fundamentally unavailable to other market participants. If we accept your argument, we might as well say that spending more money than competitors to build a better performing fund is unfair.

> I don't see how you could interpret that as "good" for anyone other than the HFT firms who do it best. It's pretty clearly this:

> > bad for overall market efficiency

No, it’s very clearly not, for anyone who has worked in finance or who is familiar with how market making works. High frequency traders improve net liquidity in the market. Would you prefer the pit days of preferential treatment by manual market makers, or much slower (and thus less accurate) price discovery?

I don’t know what else to tell you at this point. You have done exactly the thing I was talking about in my original comment. I’ve explained to you that high frequency trading does not satisfy any legal definition of front-running. You’re free to make up an arbitrary definition of front-running, but it’s not the correct definition. Do you need me to cite research showing why high frequency trading improves liquidity and explaining why it can’t step in front of other retail traders’ orders?


> Capital expenditure (like colocation) is not fundamentally an advantage other market participants don’t have.

Except that it is. Market participants cannot all be right next to the exchange; that's physically impossible.

> High frequency traders improve net liquidity in the market.

I agree that there are forms of high-frequency trading that improve market liquidity.

> Would you prefer the pit days of preferential treatment by manual market makers...

This is a straw man.

> legal definition of front-running

I'm not a lawyer, and don't know what courts have decided about this issue. I'm guessing they've sided with you. But there are plenty of situations where judges make decisions I disagree with, and I'm guessing there are probably some practical issues with making this kind of front-running illegal anyway. Have any other exchanges opened with rules that address this?

Anyway, I certainly don't think all HFT are evil or anything like that, but what you call "latency arbitrage" is clearly front-running and bad for the market: it's basically a "distance from the exchange" tax that doesn't provide any value. I don't know how that should be fixed though, or whether it's possible.

Also, I would love some links on ways in which HFTs are good for the market :)


If I fly a plane over various Walmart and competitor stores, and buy and sell stock based on which parking lots are busiest, is that front-running, because it's impossible for anyone without the money to rent a plane to have access to the same information?


Let's take a textbook example of HFT with an order larger than a single exchange can satisfy, so that it overflows to other exchanges, and examine the timeline: 1. investor places an order at exchange A --> 2. the order is known to both exchange A and the HFT --> 3. the HFT places a reactive order at exchange B --> 4. exchange B receives the "overflow" of the order from exchange A.

The time between exchange A receiving the original order (point 2) and exchange B receiving the remainder of that order (point 4) is extremely small, but larger than zero. The argument here is that until exchange B acknowledges the original order in its ledger, that information is not public at exchange B, so acting on it at exchange B before that acknowledgement is acting on non-public information, and therefore front-running.
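
To put rough numbers on "extremely small, but larger than zero" (a Python sketch; the route lengths are invented, and ~200 km/ms is the usual 2/3-of-c figure for light in fiber):

    # Route-length differences set a floor on the gap between steps 2 and 4.
    KM_PER_MS = 200.0                  # light in fiber, roughly 2/3 c
    public_route_km = 1210             # invented: path the overflow order takes
    private_link_km = 1090             # invented: the HFT's straighter fiber

    overflow_ms = public_route_km / KM_PER_MS    # ~6.05 ms
    hft_ms = private_link_km / KM_PER_MS         # ~5.45 ms
    print("window where the info is non-public at B: ~%.2f ms"
          % (overflow_ms - hft_ms))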

> <...> the order flow is technically “public” once broadcasted by the exchange.

The core argument I see in this debate is whether we consider the exchange network as one large source of public information with multiple access points, and accept the non-immediate distribution of information through that network as an unavoidable technical detail.


At that point the HFT is guessing and hoping the second order will actually arrive at exchange B. That's the risk the HFT takes.

There is no automatic overflow function that transfers (the rest of) an order between two exchanges. At best, either the original investor decides to split up his order, or someone else decides to buy at B and sell at A. Either way, the HFT is still hoping that actually happens.


And again, making predictions about the market is not front-running.


Front-running is, very specifically, when an exchange uses information about its own customers' trades to profit.

Example: a customer places an order to sell 1 million shares of Apple. The exchange shorts Apple, sells the customer's shares (pushing the price down), then picks up cheaper shares to cover the short.

HFTs don't do this, they just arbitrage between exchanges really quickly.
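
Spelled out as a toy Python sketch (prices, sizes, and the market impact are invented):

    # The abusive sequence, step by step.
    price = 170.00
    short_entry = price           # 1-2: see the customer's sell order, short first
    price -= 1.50                 # 3: executing the 1M-share sale moves the price
    cover = price                 # 4: buy back cheap to cover the short
    print("profit per shorted share: $%.2f" % (short_entry - cover))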


Yeah, if you are interested in the technical aspects of cryptocurrencies, it can be really frustrating, because there's so much bullshit and unsubstantiated hype out there. It's as hyped as deep learning, but I guess at least that field is not as politicized by economics, and is therefore a bit less prone to the evangelist type of person (though there's a similar amount of marketing going on).

Maybe it's best to ignore anything in Medium-blog form or similar that wants to explain the blockchain to you. Fortunately, the field is mature enough that there are now learning resources by people who know what they are doing (for instance, "Bitcoin and Cryptocurrency Technologies" by Arvind Narayanan and others seems pretty good).


The Lightning Network is apparently live:

https://github.com/lightningnetwork/lnd/issues/468

The issue title makes you less confident, and this comment:

> At any given point, we may be sending an additional satoshi to miner's fees. This is a result of the internal usage of milli-satoshis and our policy of always rounding down (to go from mSAT -> SAT)

makes you deeply skeptical.
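
The rounding they describe is easy to sketch in Python (a hypothetical helper, not lnd's actual code):

    def msat_to_sat(msat):
        # Round down; the sub-satoshi remainder ends up in miner fees.
        return msat // 1000

    payment_msat = 12_345_678
    paid_sat = msat_to_sat(payment_msat)          # 12345 sat
    remainder = payment_msat - paid_sat * 1000    # 678 msat, up to 999 lost per op
    print(paid_sat, remainder)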


> If he means the size of the full blockchain, in practice Ethereum implements compression techniques that make it much more compact--the Bitcoin blockchain is currently at about 100 GB, while the uncompressed "archival" version of Ethereum is at 300 GB but the standard compressed version only requires about 15 GB.

Are you referring to "pruning"? Bitcoin can do this too, and pruning is not compression, as it causes significant information loss (you need to download data from other, non-pruning nodes if you want to bootstrap a new one).


I do mean pruning, which for most practical use-cases is fine, and also the Patricia tree compression.
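
The distinction the parent is drawing matters: compression is lossless, pruning throws data away. A toy Python illustration, with zlib standing in for real compression:

    import zlib

    blocks = [b"block %d: full tx data ..." % i for i in range(100)]
    chain = b"".join(blocks)

    compressed = zlib.compress(chain)             # lossless
    assert zlib.decompress(compressed) == chain   # every byte recoverable

    pruned = blocks[-10:]   # lossy: old history is simply gone; a new node
                            # must fetch it from archival peers instead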


> Bitcoin is a subset of Ethereum

This is not true at all, and a strange statement to make from someone who identifies with the technical side of cryptocurrencies.

Ethereum made two important departures from the Bitcoin design. One is to have accounts instead of unspent transaction outputs (UTXOs). We don't know whether Satoshi considered an account-based design, but it's not unlikely, since that's the design one would probably think of first. We can, however, make an educated guess as to why the UTXO model was chosen instead: it enables SPV wallets, which are mentioned as early as the whitepaper.

Ethereum cannot have thin, trustless wallets by design. SPV wallets weren't really as straightforward as people thought they would be, and one design isn't necessarily "better" than the other, but one is not a subset of the other.
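
A minimal Python sketch of the two ledger models being contrasted (toy data structures, nothing like either client's real internals):

    # UTXO model: global state is a set of unspent outputs; a transaction
    # consumes whole outputs and creates new ones (change comes back to you).
    utxos = {("txid_a", 0): ("alice", 5), ("txid_b", 1): ("alice", 3)}

    def spend(outpoint, new_txid, to, amount):
        owner, value = utxos.pop(outpoint)              # consumed forever
        utxos[(new_txid, 0)] = (to, amount)
        utxos[(new_txid, 1)] = (owner, value - amount)  # change output

    # Account model: global state is a balance per account; a transaction
    # is just a debit and a credit.
    balances = {"alice": 8, "bob": 0}

    def transfer(src, dst, amount):
        assert balances[src] >= amount
        balances[src] -= amount
        balances[dst] += amount

    spend(("txid_a", 0), "txid_c", "bob", 2)   # alice pays bob 2, keeps 3 change
    transfer("alice", "bob", 2)                # the same intent, account-style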

The other important change was a bytecode designed to support compiling from popular languages, such as JavaScript, instead of a specialized bytecode supporting the low-level operations of the protocol. This is a different design that requires changes to other aspects of the protocol, including the fee model, which now puts a price on execution time. Bitcoin programs are supposed to anchor to the chain, not live entirely on it.

If CryptoKitties were written for Bitcoin instead of Ethereum, it would need to keep certain logic off-chain but track things like ownership on-chain. (Compare, for example, with the Lighthouse model.) This is a different model, which is only semi on-chain and would therefore scale better.

Doing the same operations on a contract requires more transactions and produces more data in the Ethereum model. So Bitcoin would likely continue to function where Ethereum struggles (to the point where big exchanges just turn off their Ethereum processing when large events such as big ICOs happen).

> the uncompressed "archival" version of Ethereum is at 300 GB but the standard compressed version only requires about 15 GB

What's interesting is not how big the chain is but how fast it grows. Transaction throughput is capacity, not scaling. Scaling is a measure of change.

And Ethereum's growth is concerning to its developers. Have you tried running a full node? I have. Even a beefy machine with an SSD takes weeks to bootstrap, and the load is getting worse over time, not better. It's not clear how long this can be kept up.

Sure, you can trust a third party to deliver the blockchain to you. But someone must independently audit the blockchain; otherwise we would be much better off running audited Postgres nodes at trusted institutions (the way the global DNS infrastructure works) than just trusting a Foundation.

The "compressed" Bitcoin blockchain, by comparison, is as small as 1 GB and just as functional. But the comparison is uninteresting, because any application that can make do with such a heavily pruned blockchain would probably be better off with an SPV architecture.

Please don't read this as some sort of comparison or defense of Bitcoin's architecture. One is not "better" than the other. Ethereum changed a few fundamental design decisions and the result is different, therefore the use cases are different. One is certainly not a subset of the other.


You could implement the Bitcoin protocol in Ethereum, and the Ethereum protocol, practically speaking, does everything the Bitcoin protocol does plus a bunch of other things. That's what I mean when I say "subset".


What does that mean, exactly? You clearly couldn't implement Bitcoin the software, since there's no way to speak the protocol or even use the data structures involved, let alone move the absurd amounts of data that would need to be moved. It's like saying you could implement MySQL in Ethereum, except that MySQL at least has no canonical global data store.

You could parse scripts and verify transactions, and that might be useful inside the Ethereum world. But it wouldn't be useful outside of it. It would still scale like Ethereum, and you would need to process the entirety of both the Ethereum and Bitcoin blockchains to do useful operations.


What's your opinion on Raiden then?


The whole timelocked channels idea makes sense in situations where you have one central hub (like an exchange) or two parties transacting with each other a ton.

It doesn't work as a replacement for transactions generally, though.


> Ethereum is already better-equipped to scale and the protocol is actively evolving

That is laughably false. Ethereum node decentralization is cratering, because it is becoming impossible to even synchronize a node anymore. I'd suggest you're not as technically inclined as you assert if you don't recognize this.

> In terms of transaction throughput, Bitcoin has a stupid hard limit of 1 MB of data per 10 minutes

Bitcoin has a hard limit in order to protect the decentralization of its nodes and the consensus rules of the protocol. It's OK, I suppose, if you don't mind forking a supposedly immutable blockchain every time a few big investors lose money, but Bitcoin is a slightly better value proposition than that.


Seems you’ve bought into Roger Ver’s snake oil. The 1 MB limit isn’t stupid; it’s what creates the fee market, which makes spamming the blockchain very costly and keeps the blockchain’s size in check. You already wouldn’t be able to sync the Ethereum chain from the genesis block; I can still do that on a Raspberry Pi node in less than two weeks. Whether the limit should be raised is still up for debate, but the reason it hasn’t been yet is that there was no consensus about it, and contentious hard forks are dangerous.


> Seems you’ve bought into Roger Ver’s snake oil.

I've heard this a few times online despite not really knowing anything about Roger Ver. I am capable of coming to logical conclusions on my own.

> The 1 MB limit isn’t stupid; it’s what creates the fee market, which makes spamming the blockchain very costly and keeps the blockchain’s size in check.

Very costly in the sense that an individual Bitcoin transaction currently costs about $20, an order of magnitude more than my bank transfers cost.

> You already wouldn’t be able to sync the Ethereum chain from the genesis block; I can still do that on a Raspberry Pi node in less than two weeks.

I literally synced the Ethereum chain on my laptop last weekend.

> Whether the limit should be raised is still up for debate

This is only true if you are using the phrase "up for debate" in the sense that smoking being bad for you is up for debate, or that we should do something about climate change is up for debate, or that net neutrality is up for debate: there are powerful self-interested actors for whom the obvious conclusion is expensive.


> Very costly in the sense that an individual Bitcoin transaction currently costs about $20

you've cherry-picked the exact point in time when, due to various factors (the "civil war" between Bitcoin and Bitcoin Cash supporters, propaganda campaigns, the price rally), transaction fees are at their peak. if you look at the graph - https://jochen-hoenicke.de/queue/#all - you'll realize that for the majority of this year transaction fees were well below 50c. this is what happens if you blindly believe the echo chamber.

besides, i'm not convinced each and every transaction belongs on the chain. the chain provides a bunch of very strong guarantees that no other system in the world provides - it's ridiculous to claim that those guarantees aren't worth anything.

> I literally synced the Ethereum chain on my laptop last weekend.

lol, nope. let me guess, `geth --fast`? well, i suggest you go read up on what `--fast` means. i challenge you to try syncing from the genesis block.

> This is only true if you are using the phrase "up for debate" in the sense that smoking being bad for you is up for debate

no, it is literally up for debate. there is a multitude of bottlenecks involved in having a healthy blockchain and a network around it, and they are all affected by bumping the block size in non-trivial ways, through secondary and tertiary effects. and it's not even obvious we can handle the primary effect of a growing blockchain and UTXO set - that quite literally leads to centralization of full nodes by making them prohibitively expensive to run, which runs against one of the core tenets of bitcoin itself.


> you've cherry-picked the exact point in time...

I picked this number because it's what I paid yesterday

> lol, nope. let me guess, `geth --fast`? well, i suggest you go read up on what `--fast` means. i challenge you to try syncing from the genesis block.

I used Parity, and instead of suggesting I "go read up on" something I already understand, maybe you'd be more convincing if you made an argument as to why the difference should matter (it doesn't).

> bunch of broad, abstract claims

I can't argue with any of those because you didn't provide any reasoning or facts to address.


> I picked this number because it's what I paid yesterday

right, so you got unlucky and now you're sad, i get it. just don't start misrepresenting what's actually happening.

> the rest

there is plenty written on both scaling and checkpointing/snapshotting. seems you've picked your side; i don't see the point in spending my sunday evening convincing you of anything. time will tell.


> there is plenty written on both scaling and checkpointing/snapshotting

Not the op, but I'd be interested in seeing sources. I haven't made up my mind.


Does it really keep the blockchain size in check? Either storage costs shrink at an exponential rate, in which case the block size basically doesn't matter at all, or they don't, in which case the blockchain will eventually grow too large anyway.


The SegWit2x non-debate was never really about keeping the blockchain size (i.e. the transaction history) in check, as that could be reconstructed relatively easily on demand. However, the UTXO set (the list of all unspent transaction outputs) has to be kept small, because every incoming transaction has to be validated against this ~3 GB data set, which has been growing by several hundred megabytes per year.

Ideally this database would be kept in memory for maximum efficiency, but RAM is expensive, so an elaborate cache scheme had to be developed to make sure that nodes remain functional on low-power hardware with limited memory and I/O, which in turn limits the number of SPV clients each node can serve. Thus on-chain transactions have to be kept to a minimum to slow the growth of the UTXO set, and SegWit provides some incentive for people to consolidate dust outputs despite fee pressure. So far this strategy has not been working.

Ethereum does not have this limitation, because balances are kept in individual accounts, so the requirement for global consistency is much lower.
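
A toy Python sketch of why every node has to query that set constantly (real nodes use an on-disk database with caching, not a dict):

    # Every input of every incoming transaction must be looked up in the
    # UTXO set; if the set outgrows RAM, each lookup risks a disk seek.
    utxo_set = {("txid_1", 0): 50_000}   # outpoint -> value in satoshis

    def validate(inputs, total_out):
        total_in = 0
        for outpoint in inputs:
            if outpoint not in utxo_set:   # double-spend or nonexistent
                return False
            total_in += utxo_set[outpoint]
        if total_out > total_in:           # can't create coins from nothing
            return False
        for outpoint in inputs:
            del utxo_set[outpoint]         # now spent
        return True

    print(validate([("txid_1", 0)], 49_000))   # True: 50k in, 49k out + fee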


You can get 4 GB of RAM for less than $100.[0] That doesn't seem so expensive to me.

[0] https://www.newegg.com/Product/ProductList.aspx?Submit=ENE&D...


It is a lot for embedded and mobile devices; even the ability to address 4 GB of memory adds to the cost of the processor. Right now, full-history nodes on phones are a pipe dream, and people who run full nodes on Raspberry Pis are actually degrading the performance of the network by adding a tonne of slow nodes with 1 GB of memory or less.

In the long run the UTXO set will always grow, because of lost keys and dust amounts worth less than the fees to move them, yet it is neither politically acceptable nor practical to drop any of these from the consensus, so the problem will only get worse.

It is, of course, not a problem if you think SPV wallets are the way forward, but this view has been stigmatised as "centralisation", so it also leads to no real solutions.




