Agree with you 100%, I did the same simulations and found the same result.
I would suggest a step beyond though, because rebalancing your portfolio is fun in years 1-5, but not so fun in years 5-20: have a look at e.g. Vanguard's target retirement funds.
Essentially, it's an ETF with a rebalancing rule built in for a specific target date. For instance, if you buy the Target 2050 fund (your hypothetical retirement year), the ETF rebalances itself between bonds/money market funds/stocks until it reaches that date, until it's pretty much all cash in 2050.
Lowest hassle diversified retirement scheme I found.
Nope, not all cash: it goes down to around 50% stocks at the target date (and actually continues to get slightly more conservative after). Just look at the current portfolio of the Vanguard 2025 fund: https://investor.vanguard.com/investment-products/mutual-fun...
That fund is currently at 50% stocks. It does get more conservative as you get past the target date, but I was mainly referring to the stock percentage at target date, which the parent poster implied was almost zero.
You mean VFIFX? What a disaster. My retirement plan put me in that until I realized investment advice for young people is a tax on the inexperienced and vulnerable. VFFSX (S&P 500) does 2x better returns every time. I feel guilty saying it on Hacker News. Like pension funds, I bet Vanguard is one of these so-called LPs who give money to VCs like Y Combinator to help Ivy League kids follow their dreams. Without these heroes I'm not sure there'd be a startup economy. I just don't want to be the one who pays for it. I think the future Wall-E predicted with Buy N' Large is probably closer to the truth.
> VFFSX (S&P 500) does 2x better returns every time
US large cap has certainly outperformed the other components of the target date fund (international stocks, bonds) recently. But there is certainly no guarantee that it will happen "every time". US equity has been the best performing asset class overall for the past decade, but in 7 of those 10 years at least one other category outperformed it: https://www.blackrock.com/corporate/insights/blackrock-inves...
> Like pension funds I bet Vanguard is one of these so-called LPs who give money to VCs like Y Combinator to help ivy league kids follow their dreams.
You can look up the holdings of VFIFX or any other Vanguard fund. There is no private equity or private credit.
And now GLD, with its 1-year return at 40%, is outperforming them both, which is a really scary thought. How bad do things have to be that all our blood, sweat, and tears scurrying off to work each morning earn less than a piece of metal dug out of the ground that sits around doing nothing? I thought inflation was supposed to be a tax on the poor, but even rich private equity, which gets ahead by sucking the blood out of Americans digging themselves out of the grave, can't save itself.
This is one of those things where, again, one has to weigh the costs of both alternatives. A rebalancing ETF usually has higher costs (management fees, but possibly also internal trading costs that show up as performance below the benchmark index), but of course manually rebalancing also has a cost – the cost of one's time and effort!
There are scenarios where these target date funds are not good.
Rebalancing into bonds and money market funds is a form of insurance against catastrophic losses in equities. But if you have a sufficiently large account, then catastrophic losses that actually affect your life are extremely rare; if they do occur, they will likely affect your bond portfolio as well; and the expected loss vs. 100% equities over 15-20 years is significant, something like 10x the value of the insurance you are buying.
If you want insurance for a large account, then long-dated put options 20% out of the money are much cheaper.
> So, in C, when you design a function, and you want your function to be able to indicate “there was an error”, it has to return that error as its return value.
Well, you can also use an errno-like system. It has its own set of drawbacks as well, but it removes the "I need to reserve a sentinel value" problem.
Functions using errno typically still use a return value of -1 to indicate an error has occurred, then errno just stores the specific error code. Otherwise, how do you even know that you need to check errno?
There are examples of such an ambiguity in the ISO C library itself.
In those situations, the application must begin by clearing errno to zero.
Then, it checks for a certain error value or values from the function which are ambiguous: they could be legit or indicate an error.
If one of those values occurs, and if errno is nonzero, then the error occurred.
This is how you deal with, for instance the strtol (string to long int) function. If there is a range error, strtol returns LONG_MIN or LONG_MAX. Those are also valid values in the range of long, but when no error has occurred, they are produced without errno being touched.
strtol can also return 0 in another error case, when the input is such that no conversion can be performed. ISO C doesn't require errno to be set to anything in this case, unfortunately. The case is distinguished from a legitimate zero by the original pointer to the string being stored in *endptr (if the caller specifies endptr that is not null).
ISO C and POSIX library functions do not reset errno to zero. They either leave it alone or set it to a nonzero value.
If you need to use the above trick and are working inside a C library function, you have to save the original errno value before storing a zero in it, and then put that value back if no error has happened.
Centuries are also 50 years late on average... And everyone is on average 6 months older than they say.
Also moving the rounding 1 place down doesn't solve anything. Now your second-resolution clock is on average 500ms late unless you move the decimal 1 place down again.
- The UI is bloated and buggy: sometimes things scroll, sometimes they don't, sometimes you have to refresh the page. You cannot easily change the UI because lots of the CSS has hard-coded fixed sizes.
- The settings are all over the place, from .py files in ~/.jupyter to .ini files to auto-generated command-line parameters.
- The overall architecture is monolithic and hard to break down; jupyter proxy is a good example of the hacks you have to resort to to reuse parts of Jupyter.
- The front-end technology (Lumino) is ad hoc and cannot be reused; I had to write my own React components, basically reimplementing the whole protocol. Come on, it's 2025.
- The whole automation around nbconvert is error-prone and fragile.
The _external_ way of doing introspection (a parser like yours) is generally very limiting in real life, as you will have to interpret all the preprocessor directives that your code relies on.
Not only will you have to feed your parser the exact same inputs as your build system, but some directives are also built into the compiler and may be hard to replicate.
The easiest way to do introspection is _intrusive_, even though it pollutes the code a bit.
Regarding pre-processing, I feed the result of `clang -E` to the code-generator. Since I use a unity build (single translation unit), it works out fine. In fact, the parser treats pre-processor directives as comments (to work around #line and `-dD`, etc.)
Regarding external and intrusive, I used to do it in the intrusive way but found it too limiting. Here, I not only generate code but can also (potentially) add whole new extensions to the language. This was the reason I wrote a new parser instead of just using libclang's JSON AST dump. Well, that and the fact that libclang is a multi-MB dependency while my parser is ~3000 lines of C code.
> No, it's actually the reverse. You have to compare at equal annual vol, and the S&P already has something like 20%.
Stop thinking like a hedge fund.
TQQQ is commonly used as a benchmark because it represents a low-friction, practical alternative to VTI, VOO, and even private equity investments, including hedge funds trading public securities.
Once your Sharpe is high enough, you stop caring about volatility. The only volatility left is how many zeros are in your almost-always-positive PnL.
Hedge funds (and traditional asset managers) care about drawdown, vol, sortino, beta and all that shit. But hedge funds have a different business model than prop trading firms.
> Many quant trading firms make 50%-100% annual returns. The secret is leverage
Hu lol no XD, you're way overstating it. While it happens _sometimes_, 50% or 100% is insanely rare, even for top-tier hedge funds.
Most HFs run at a predefined annual volatility, often in the 7% to 10% range. A typical _top-tier_ Sharpe is >= 2, so we're talking more about 10%-25% average annual returns.
> However, the returns after fees, to the passive outside investor underperform S&P500.
That doesn't even make sense with the figures you posted. Most HFs operate in the 2-and-20 or 3-and-30 fee range, sometimes 0-and-40 for the top 5. If you take a pessimistic 10% return at 10% annual vol, against the S&P's 10% average return at 20% vol, you're still at double the risk-adjusted return, gross. Factor in 20% to 40% performance fees and you're still well above the S&P.
> A typical _top tier_ sharpe is in the >=2 range, we're more talking about a 10%/25% averaged annual returns.
High-frequency, low-latency trading: Sharpe 10 or higher
Mid-frequency, low-latency trading: Sharpe 4 to 5
Hedge fund statistical arbitrage: Sharpe 1 to 2
Hedge fund long/short, event-driven, global macro, etc.: Sharpe 0 to 1
And yes, HFT and MFT scales to billions in annual PnL for single firms.
There’s a reason quant HFT firms pay the most, and are ranked above OpenAI in pay and prestige. Hedge funds are tier 2 in comparison but not bad either.
We cannot really list them, as 90% of the time it's not the websites themselves, it's their WAF. And there is a trend toward most company websites being behind a WAF nowadays to avoid 1) annoying regulations (US companies geoblocking their websites to avoid EU cookie regulations) and 2) DDoS.
It's now pretty common to have Cloudflare, AWS, etc. WAFs as the main endpoints, and these do anti-bot checks (TLS fingerprinting, header fingerprinting, JavaScript checks, captchas, etc.).
I have no issue using closed source software when they are doing nothing of intellectual value. Free software's very purpose is to ensure things of intellectual value can be understood, and knowledge cannot be locked up.
GitHub does not fall into this category for me. There is of course smart and interesting engineering behind it, but functionally speaking its only added value is being faster, more widespread, and having a better UX than the alternatives.
It’s a third party platform which can de-platform you at the drop of a hat and cause a big headache. Enough people are worried about that and the broader implications of a platform not governed according to the fundamental principles/values of free software. So I can understand why they advocate for the free software ecosystem to steer clear.
“Intellectual value” is a somewhat arbitrary criterion.
> It’s a third party platform which can de-platform you at the drop of a hat
Yeah right but that's life.
Your argument is basically
"someone could stop being willing to do business with me, so they should give me the means to be independent of them".
That's not a valid reason, that's being unreasonable.
> cause a big headache
My very argument is that a "headache" is not enough. An "impossibility" or "an incredible effort" would be reasons to worry. Switching to another platform, having a less cool UI, and rewriting CI is definitely not a show-stopper.
> “Intellectual value” is a somewhat arbitrary criterion.
What value is there in GitHub apart from their infrastructure, which is of no use to anyone unless you're willing to scale to their level? And in that case you're just trying to build a competitor, and I have no issue with them not giving you the tools to do that.
Oh, most definitely. So you gotta do what you gotta do.
> Your argument is basically "someone could stop willing to do business with me so they should give me the means to be independent of them". That's not a valid reason, that's being unreasonable.
On the contrary, we use that general principle every day when making decisions. For example, these aspects could be even more starkly highlighted when you make decisions as a business owner -- because the buck stops with you, and many other people might be depending on you. So you really don't want to back yourself into a corner.
We focus first on the biggest risks, and then move down the list. And the specific probabilities and costs we assign to each dependence might be different based on our cultural background and life experience. YMMV.
I mostly agree, but I also rely on GitHub Copilot, CI/CD, issues, pages, and storage (for releases). And that's just for my simple little side projects!
While I'm grateful GitHub is such a great service and provides these for free, I also worry about my reliance on it.
Some languages (Rust for example) also rely on GitHub for package management (I think just for auth though).
I think of this kind of thing as related to lock-in. If some proprietary thing I use were to go away permanently, would that be a (perhaps significant) inconvenience, or would it be a disaster? If the former, then there's not really lock-in, and it's more a "make hay while the sun shines" situation.
> Free software's very purpose is to ensure things of intellectual value can be understood, and knowledge cannot be locked up.
I've never heard this before. I was operating under the assumption that Free software was to keep us from being enslaved by the products we use, not that it was some kind of intellectual edification for people who enjoy coding.
I actually don't care the tiniest bit about whether coders have sufficient intellectual stimulation, so it's weird that I care about Free software.
It's both. Free software is a more efficient mode of production because it maximizes the exchange of ideas about how best to build software. It's also a more libertarian mode of consumption because it maximizes freedom of choice for users.