pcmonk's comments | Hacker News

The yield varies with a few factors, chief among them how much total ETH is staked. More staked ETH means a lower yield. Here's a calculator: https://ethereumprice.org/eth-2-calculator/
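
For intuition about the shape of that relationship: the rewards are designed so that the yield falls off roughly as one over the square root of the total stake. A minimal sketch, where the issuance constant is only a ballpark figure I'm assuming for illustration (use the linked calculator for real numbers):

    # Rough sketch: staking APR scales like 1/sqrt(total ETH staked).
    # ISSUANCE_CONSTANT is a ballpark, illustrative value, not the exact spec number.
    import math

    ISSUANCE_CONSTANT = 166.0

    def approx_staking_apr(total_eth_staked):
        # doubling the total stake cuts the yield by about 29% (1 - 1/sqrt(2))
        return ISSUANCE_CONSTANT / math.sqrt(total_eth_staked)

    for staked in (1_000_000, 5_000_000, 10_000_000):
        print(f"{staked:>10,} ETH staked -> ~{approx_staking_apr(staked):.1%} APR")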


It was down for a couple of minutes; it's back for me now.


Most of them don't seem to come from pull requests. I wonder if it's paired with a bunch of compromised GitHub accounts?


Not compromised, just created by the attacker.


I think the answer is something like:

- the dollar is falling compared to US living expenses (US inflation)

- the euro is falling compared to the dollar (exchange rate)

- therefore, the euro is falling faster compared to US living expenses

- but the euro is not necessarily falling faster compared to EU living expenses (EU inflation)

Indeed, EU inflation and US inflation seem to be about the same, which means that the exchange rate change is not because one or the other is experiencing more inflation.
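
A toy numerical example of the same point (the figures are round numbers I'm making up, not actual data):

    # Hypothetical round numbers: 8% inflation in both the US and the EU,
    # and EUR/USD falling from 1.15 to 1.00 over the same period.
    us_inflation = eu_inflation = 0.08
    eurusd_before, eurusd_after = 1.15, 1.00

    # Purchasing power of 100 EUR in US goods: convert to USD, deflate by US prices.
    before = 100 * eurusd_before
    after = 100 * eurusd_after / (1 + us_inflation)
    print(f"euro vs US living expenses: {after / before - 1:+.1%}")   # about -20%

    # Purchasing power of 100 EUR in EU goods: just deflate by EU prices.
    print(f"euro vs EU living expenses: {1 / (1 + eu_inflation) - 1:+.1%}")   # about -7%, same hit a dollar holder takes at home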


This isn't correct. There are no Solana tokens in the equation at all, so I assume you mean wrapped ETH on Solana, and that all got sent through the bridge to Ethereum, so it doesn't exist anymore.


Which wallet do the Solana tokens go to when they are released from the wETH tokens?


In practice the people who are doing this very likely have that much ETH on hand and can simply use it, then rebuy over time to get back to their preferred holding levels.

If you had $320mm and needed ETH, you'd want to use an OTC desk, because the markets would feel it, even though they wouldn't actually run out of depth. Right now the "+2% depth" (https://coinmarketcap.com/currencies/ethereum/markets/) on Coinbase, Binance, and FTX is about $5mm, and there are probably a dozen other markets with similar depth, though you'd want to execute the trades simultaneously. Decentralized exchanges have depth too; Uniswap's USDC/ETH pool would let you buy $40mm for 2% slippage. All in all, you could probably get it all at about a 5% premium if you were really prepared for it.
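
For a sense of where a number like that comes from, here's a constant-product (x*y = k, Uniswap v2 style) sketch. The pool size and ETH price below are made up for illustration, and the real USDC/ETH pools (especially v3, with concentrated liquidity) behave differently, so treat this as the shape of the math rather than actual depth figures:

    def buy_eth_with_usdc(usdc_in, usdc_reserve, eth_reserve, fee=0.003):
        # constant-product swap: reserves move along x*y = k, minus the LP fee
        k = usdc_reserve * eth_reserve
        usdc_after_fee = usdc_in * (1 - fee)
        eth_out = eth_reserve - k / (usdc_reserve + usdc_after_fee)
        return eth_out, usdc_in / eth_out, usdc_reserve / eth_reserve

    # hypothetical pool: $2.5bn of USDC against the matching ETH at $3,000/ETH
    usdc_reserve = 2_500_000_000
    eth_reserve = usdc_reserve / 3_000

    eth_out, avg_price, spot = buy_eth_with_usdc(40_000_000, usdc_reserve, eth_reserve)
    print(f"bought {eth_out:,.0f} ETH at ${avg_price:,.0f} avg vs ${spot:,.0f} spot "
          f"({avg_price / spot - 1:+.1%})")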


I think this is the most sensible response ^


Interesting! Thanks for the response.


I found it interesting that the patch was apparently published before it was exploited: https://twitter.com/kelvinfichter/status/1489050921938132996


I wonder how their deployment system works. They should probably be deploying security patches before they land in a public repo.

Also, if it auto deploys from a git repo, then you just need a committer's git keys to exploit it. Having code auditing and multisig git tags has to be rare.


Doesn't it have to land in a public repo before it can be patched?

Somebody else is going to run that code publicly, and each person who runs it will find out about the patch with some time delay


> Doesn't it have to land in a public repo before it can be patched?

No, they could have patched the contract before publishing the commit on GitHub. Granted, an attacker could watch the chain for such "contract upgrade" transactions and attempt to front-run it, but that would be a lot harder than just discovering undeployed security patches on GitHub.


If it's a library, normally you'd share a security patch privately with important customers if they're otherwise going to lose $300 million. I thought this was the service's own repo, though.


Smart Contracts always have their source openly available on the chain, so it’s not that easy


But that's also the executable form of it: just patch the deployed contract first, and then people can't exploit it when they see the fixes land in the +1 release somewhere else.


I could be wrong, but I believe only the compiled bytecode is on-chain; you don't have to publish the source.

This just happens to be a project that does.


Yeah, that definitely smells like someone watching the commits for a security patch so that they could exploit it quickly before it was deployed.


That's why it's sometimes called an "argument" or specifically a "cryptographic proof". You can construct the statement such that it can be "proven" in a traditional sense by adding qualifiers such as "with probability more than 1-1/(2^256)". You'll generally need an assumption like knowledge-of-exponent or at least hash soundness.
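
A toy illustration of the flavor of statement involved (this is just a hash commitment opening, not any real argument system, and the function names are mine): if we model the hash as random, a cheating prover has to hit one specific 256-bit value, so the verifier accepts a false claim with probability on the order of 1/(2^256).

    # Toy example only: accept if the revealed value hashes to the commitment.
    # Under "hash soundness", a false claim is accepted with probability ~2**-256.
    import hashlib, secrets

    def commit(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def verify(commitment: bytes, revealed: bytes) -> bool:
        return hashlib.sha256(revealed).digest() == commitment

    secret = secrets.token_bytes(32)
    c = commit(secret)
    print(verify(c, secret))                   # True: honest prover
    print(verify(c, secrets.token_bytes(32)))  # False, except with probability ~2**-256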


I think your n_subjects is too low. You need that to be high enough or you'll miss those low-probability winners that bring up the average.


I did it with more subjects and it doesn't make a difference. The only reason I reduced to 100 is because the plot is unreadable otherwise.

Looking at the Julia code, I think what he is doing wrong is making all wins worth $0.50 and all losses worth $0.40, but the bet in the post computes the win or loss from your current wealth, not your starting wealth. His formula would work if you were always betting $1 no matter what your bankroll was, but that isn't what the actual post stipulates.
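
A minimal sketch of that distinction (parameter names are mine, not from the notebook): fixed-stake betting adds or subtracts a constant amount each flip, while the game in the post multiplies your current wealth by 1.5 or 0.6.

    import random

    def fixed_stake(flips, wealth=1.0):
        # always bet $1: win $0.50 or lose $0.40, regardless of bankroll
        for _ in range(flips):
            wealth += 0.5 if random.random() < 0.5 else -0.4
        return wealth

    def multiplicative(flips, wealth=1.0):
        # the game in the post: current wealth is multiplied by 1.5 or 0.6
        for _ in range(flips):
            wealth *= 1.5 if random.random() < 0.5 else 0.6
        return wealth

    n = 100_000
    print(sum(fixed_stake(100) for _ in range(n)) / n)     # near 1 + 100*0.05 = 6
    print(sum(multiplicative(100) for _ in range(n)) / n)  # analytic mean is 1.05**100 ~ 131, but the
                                                           # sample mean is dominated by rare lucky paths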


So if you get lucky you de-risk and make small bets comparable to your initial winning bet, rather than betting it all.

Sounds like a good strategy whether in Vegas or Wall Street.


You're so fixated on your conclusion that you're now reading the code wrong.

If you don't trust the Julia code, try running with the same parameters in Python.


I don't understand why you think changing the number of participants changes the ensemble average. I just ran it with 1,000,000 participants and 1,000 trials and ended up with an ensemble average of $0.07 on the 1,000th trial, trending toward 0. The only difference is the simulation took longer. The curve looks exactly the same. The ensemble average trends upward until about 500 trials, then trends downward and keeps doing so forever.

My code is right up there and you can run it. You can even just run the OP's notebook that he provided but increase the number of trials. Change the "num_flips_per_sim" parameter he provides in cell 6 to anything over 500 and you will always get sum(count_lose_capital) == everyone.


Take the outside view here: 3-4 people have commented, all disagreeing with you. One of them has offered an explanation of why your experiment is poorly designed, and I've given you code that produces a different result.

The appropriate response to that is introspection, not repetition.


Look at my other reply, which was above but is now below. The number of participants required to be likely to find any who stay above water gets very high eventually, much higher than 1,000,000. This can just be calculated.

After 500 trials, you need 279 heads to stay above $1 net wealth: 1.5^278 × 0.6^222 ≈ 0.50 and 1.5^279 × 0.6^221 ≈ 1.26, so that's your breakeven point. The probability of getting at least 279 heads in 500 coin flips is 0.005364, so with 1,000 participants, you expect to see about 5 still above water.

At 1000 trials, the breakeven point becomes 558 heads, and the probability of getting at least that many in 1000 flips is 0.00013614. So the expected number of people who stay above water in a pool of 1,000 participants is essentially 0. Out of 1,000,000, it is about 136, so you're right, there are some, but at that point it's not nearly enough, and we're not sampling the ones whose wealth is high enough to actually bring the mean back up, so it keeps trending to 0 in any sample of a practical trial size.
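
A quick sketch that reproduces this arithmetic (my own code, using scipy for the binomial tail):

    from math import ceil, log
    from scipy.stats import binom

    def breakeven_heads(n_flips, up=1.5, down=0.6):
        # smallest h with up**h * down**(n_flips - h) >= 1
        return ceil(n_flips * log(1 / down) / (log(up) + log(1 / down)))

    for n_flips, pool in [(500, 1_000), (1_000, 1_000_000)]:
        h = breakeven_heads(n_flips)        # 279 and 558
        p = binom.sf(h - 1, n_flips, 0.5)   # P(at least h heads)
        print(n_flips, h, p, pool * p)      # expected number still above water: ~5 and ~136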

This is a pretty interesting property of this problem, really. It's not related to ergodicity so much as to the proportion of probability mass sitting above 1 itself trending asymptotically toward 0 even though the analytical expectation trends toward infinity. I don't know that there is even a word for that, but seemingly whichever of those moves faster toward its limit determines what sample ensemble average you really see when the number of realized states is far less than the number of possible states.

This probably has some implications for Pascal's Mugger type problems in decision theory. If some course of action has potentially infinite future payoff and destroys expected utility calculations because of that, but the expected number of possible universes in which a positive outcome happens at all trends toward 0 faster than the expectation trends toward infinity, that gives a decision rule. In this specific case, don't take this bet, at least not in an indefinitely repeating form.


Thanks for the thoughtful reply. The breakeven analysis is good!


No matter how many test subjects you use, if you run the experiment for a very long time, everyone goes bankrupt and will never recover.

More precisely there is a finite time after which no-one ever passes above $0.0000000000000001.

That is a mathematical theorem.

This doesn’t depend on the number of test subjects, and you can add as many zeroes as you want.

Therefore in the long run the mean outcome is 0.

Forgive me if I have misinterpreted what you are trying to say.

Edit: I’ve just realized that I have indeed missed your point.


No matter how long you run the experiment, if you use enough subjects some of them will win an absurdly large amount of money and the sample mean will converge to the mean of the distribution (which grows exponentially with time).

It’s a mathematical theorem. (I would be curious to see a proof of your theorem, by the way.)


The mean converges exponentially to zero with time. It doesn’t grow exponentially. So the theorem you cited also goes in the same direction of my statement.


> The mean converges to zero with time. It doesn’t grow exponentially.

  t=0 mean(w) = 1
  t=1 mean(w) = 1/2*1.5 + 1/2*0.6 = 1.05
  t=2 mean(w) = 1/4*1.5*1.5 + 1/2*1.5*0.6 + 1/4*0.6*0.6 = 1.1025
  ....
  t   mean(w) = 1.05^t
Don’t you agree?


You’re right. I made a mistake - I thought you were trying to contradict the theorem I stated. I’ve just realized you were saying something orthogonal.

As for the proof of my theorem: by taking logarithms, the process becomes an additive random walk with negative drift (log 1.5 + log 0.6 < 0). This is well known to converge to negative infinity almost surely. After exponentiating to undo the logarithm, this is exactly the statement I made.

It does not matter how many test subjects there are (as long as there are finitely many) because, informally speaking, you can just wait for each of them to become irrevocably bankrupt in turn.
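
A quick numeric check of that drift (my own sketch):

    import math, random

    # per-flip drift of log-wealth: 0.5*log(1.5) + 0.5*log(0.6) < 0
    drift = 0.5 * math.log(1.5) + 0.5 * math.log(0.6)
    print(drift)   # about -0.053, so log-wealth is a random walk drifting to -infinity

    # one long sample path: log-wealth after n flips is roughly n * drift
    n = 10_000
    log_wealth = sum(math.log(1.5) if random.random() < 0.5 else math.log(0.6)
                     for _ in range(n))
    print(log_wealth, n * drift)   # both around -500 or so; wealth ~ exp(that), effectively zero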


I think we agree then:

- for a fixed sample size we can find a time large enough that the probability of the sample mean being above $1 is as low as we want

- for a fixed time we can find a sample size large enough that the probability of the sample mean being below $1 is as low as we want

- when both the sample size and the horizon grow without limit, which effect dominates will depend on how we make them grow

Adding "almost surely" to "everyone goes bankrupt and will never recover" or "there is a finite time after which no-one ever passes above $0.0000000000000001" is a subtle change but it's enough to allow for someone to go to infinity with infinitesimal probability.

This is why the distribution mean can grow exponentially; it wouldn't be possible if the "everyone" and "no-one" in those quotes were strictly true.


I agree with the first 3 statements.

Just to confirm, I am using 'almost surely' in the technical sense, which means 'with probability 1.'

Consider the following statement:

If you keep flipping a fair coin every day, it is almost sure that after some day you will have gotten a tails.

This is the same 'almost surely' that I am referring to.


We agree!

The point was that you didn't specify "almost surely" previously; that's why I asked for a proof, to understand what exactly you meant when you said that "everyone goes bankrupt and will never recover" and "there is a finite time after which no-one ever passes above $0.0000000000000001".

The mean of a random variable that is always close to zero is close to zero; the mean of a random variable that is merely close to zero with probability close to 1 can be anything (for example, a variable that equals 0 with probability 1 - 2^-t and 4^t otherwise has mean 2^t, which grows without bound).


The number of possible outcomes grows exponentially with time, and so does the ensemble size required to capture the extremal behaviour. Repeated losses bring you closer to zero, which is relatively well sampled by many realisations, but repeated wins produce exponentially larger returns, and so missing out on these realisations catastrophically affects the ensemble average.

A shorter run (say 100 steps) would be more likely to capture enough realisations to produce a reasonable estimate. You could assess this behaviour yourself, for very low step numbers, by calculating the variability in a sampled ensemble average, relative to the exhaustive (i.e. true) ensemble average.

This particular problem is another consequence of the properties of the dynamical system being examined, but not quite the same as the issues caused by its non-ergodicity.


I was interested in seeing the results myself, so here is some python:

    import numpy as np
    import itertools
    from matplotlib import pyplot as plt

    def ensemble_mean(outcomes):
        # Assume we are given a (K, T) array of outcomes, and compute the ensemble average
        # for T+1 time steps, starting with 1 wealth.
        K, T = outcomes.shape
        X = np.ones((K, T+1), dtype=np.float64)
        X[:, 1:] = np.where(outcomes, 1.5, 0.6)
        Z = np.cumprod(X, axis=1)
        return Z.mean(axis=0)

    time_steps = 20

    all_outcomes = np.array(list(itertools.product([0, 1], repeat=time_steps-1)))
    exhaustive_mean = ensemble_mean(all_outcomes)

    ensemble_size = 100
    ensemble_samples = 10000

    ensemble_means = np.zeros((time_steps, ensemble_samples))
    for i in range(ensemble_samples):
        print(i)
        # generate ensembles as though we were sampling (i.e. with replacement)
        J = np.random.choice(all_outcomes.shape[0], size=ensemble_size, replace=True)
        ensemble_means[:, i] = ensemble_mean(all_outcomes[J, :])

    plt.hist(ensemble_means[-1], bins=1000, histtype='step')
    plt.axvline(exhaustive_mean[-1])
    plt.title("Modal sampled ensemble mean is below true ensemble mean")
    plt.show()


The most natural solution for most people is to give shards of your key to various friends/family whom you trust not to collude to reconstitute your key, or to be socially engineered into it (make them talk with you on video chat or something). Require 5 out of the 9 shards to reconstitute it.

Obviously you can scale up your security according to the value of your account and your threat model.
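
The usual way to do that splitting is Shamir's Secret Sharing: a random degree-4 polynomial over a finite field with the key as the constant term, so any 5 points reconstruct it and 4 reveal nothing. A minimal sketch of the idea (illustrative only; for a real key you'd want an audited implementation, not hand-rolled code):

    # Minimal 5-of-9 Shamir secret sharing over a prime field (illustrative only;
    # uses random, not a cryptographically secure RNG).
    import random

    PRIME = 2**521 - 1   # a prime comfortably larger than a 256-bit key

    def make_shares(secret, threshold=5, total=9):
        # random polynomial of degree threshold-1 with the secret as constant term
        coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
        def f(x):
            return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
        return [(x, f(x)) for x in range(1, total + 1)]

    def recover(shares):
        # Lagrange interpolation at x = 0 recovers the constant term
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num = den = 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = num * -xj % PRIME
                    den = den * (xi - xj) % PRIME
            secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
        return secret

    key = random.randrange(2**256)                    # stand-in for a private key
    shares = make_shares(key)
    assert recover(random.sample(shares, 5)) == key   # any 5 of the 9 shards work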


That's a great method for preventing loss as opposed to allowing recovery.

We need to keep the conversation on recovery, because eventually loss will happen. Your 5-of-9 setup could end up with n+1 unwilling parties, where n is the number of shards you can afford to lose.

It is unrealistic to say it will _never_ happen.

When my identity is lost... is it lost for good? How do I recover?

If it's lost for good, and I make a new 'identity', then what is my 'identity'... is it just... my Reddit username?

