My thought exactly... When Musk was asked about potential conflict of interest he basically replied "Of course not, we're transparent and document everything on X"...

> Are there other dimensions that might be important for a particular use case so a different algo may be best?

Choosing a compression/decompression algo is essentially an optimization problem. It mainly depends on your input IO.

For instance, if you have very fast IO, say a local SSD at 7GB/s, then you probably don't want compression, as even the fastest decompression will not go beyond 4GB/s.

If you have moderately fast IO, say 4GB/s, then lz4 is likely a good candidate: you will get maybe a 0.5 compression ratio, meaning you will spend half the time reading your input, while decompressing at nearly the same speed that you would read uncompressed data.

If you have very slow IO, say 100MB/s, then the compression ratio becomes the main factor, overriding decompression speed.

So, essentially, you want to solve an optimization problem: minimize the time to read the compressed file and decompress it, given your reading speed, writing speed, and spare CPU, under a time constraint on compression speed.

There are solvers on the internet for this problem, but I would love to see someone write it down here in plain math.
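
Not the full math, but here is a rough back-of-envelope sketch of the read-side tradeoff in Python. The throughput and ratio figures are made-up placeholders, and it assumes reading and decompression are fully pipelined (streaming); if they run sequentially, the max() becomes a sum and the break-even points shift.

    def read_time(size_gb, io_gbps, ratio=1.0, decomp_gbps=None):
        """Wall time to obtain size_gb of usable data, assuming the read and
        the decompression overlap, so the slower of the two dominates.
        ratio = compressed_size / original_size."""
        t_io = size_gb * ratio / io_gbps
        t_decomp = 0.0 if decomp_gbps is None else size_gb / decomp_gbps
        return max(t_io, t_decomp)

    size = 100.0  # GB of uncompressed data
    # Rough, made-up figures: lz4 ~0.5 ratio at ~4 GB/s decompression,
    # zstd ~0.3 ratio at ~1.5 GB/s decompression.
    for io in (7.0, 4.0, 0.1):
        raw = read_time(size, io)
        lz4 = read_time(size, io, ratio=0.5, decomp_gbps=4.0)
        zstd = read_time(size, io, ratio=0.3, decomp_gbps=1.5)
        print(f"IO {io:>4} GB/s -> raw {raw:6.1f}s  lz4 {lz4:6.1f}s  zstd {zstd:6.1f}s")

With these made-up numbers you recover the three regimes above: raw wins at 7GB/s, lz4 ties at 4GB/s while using half the bandwidth, and zstd wins at 100MB/s.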


> There is an extremely high probability NYSE TX remains in Mahwah

> I wish one of these venues would have the conviction to put their whole kit in Dallas

To be fair, few people actually host in Mahwah or Carteret anymore. The HFT game is essentially closed, as the price of entry in terms of knowledge and capital is too high.

Most people just prefer a comfy Equinix DC with all modern amenities, where NYSE, Nasdaq and all other markets & brokers have a latency-stable point of presence, such as NY4.


You did raise a good point about latency-sensitive brokers, and I feel like that is becoming a self-selecting cartel based around the NY/NJ area. So when you have these folks talking about moving to places like Houston or Dallas, they are implicitly talking about breaking that latency cartel. Honestly, just my stupid personal opinion, the SEC should grow the courage to ban low-latency based trading. This topic has been beaten to death over the years, so I'm not adding anything to the discussion, just chiming in to say the obvious things... have a nice day.

Just for the record, the latency arb / microwave networks speed game is basically dead as of 2018-2021. Look at Virtu, formerly a classic example of the trade, now almost entirely "switched sides" and doing order execution services for the same big banks whose lunch they were formerly eating.

Furthermore, the wireless stuff is commoditized at this point. You can just rent access to the wireless networks that Apsara (et al) offer, and while some firms have private networks, there's not enough money left in the trade (see above) to be worth it if you don't already have one.

This is combined with liquidity moving away from public exchanges (both the lits and darks) towards being matched internally/by a partner (PFOF matching), which is purely a win for retail traders and is its own force that isn't going away. (Go on Robinhood and buy 2 shares of SPY. It fills instantly. People love that. You can't just go get 2 shares of SPY off the lits, so where do you think those are coming from?)

Traditional HFT is dead. The only extent any of the firms are still alive is the extent to which they've moved on to other trades, many of which are so much less latency sensitive that the microwave edge doesn't really give you enough alpha to be worth it.

(I worked for a firm for a long time that didn't move on to other trades... so I'm quite familiar with the scene.)


> You can't just go get 2 shares of SPY off the lits, so where do you think those are coming from?

Why can't you just get 2 shares on an exchange?


Exchange trading happens in round lots that are usually 100 shares.

This is pretty much just a legacy thing, but so many technical systems have this assumption built in that while odd-lot trading (trades not in the round lot size) has become a little more common on the exchanges, it’s still treated weirdly by the various systems involved.

But also, it’s better for you as a retail investor, to get them from a middleman, because they will generally give you a better price than the exchange. They will give you a better price because retail traders tend on average to be worse at trading than the overall market. You should take advantage of that, regardless of your actual ability level.


My research suggests that the majority of on-exchange trades are odd lots.

For stocks like SPY (those over $500 per share!), the vast majority of orders are odd lots.

This article is many years old and already has data strongly in that direction: https://www.nasdaq.com/articles/odd-facts-about-odd-lots-202...


Odd lots don't contribute to the NBBO, and placing an order for an odd lot doesn't have to execute within the NBBO. (People can trade "past" you, I am pretty sure ISO's don't need to clear you, etc). (Note these are rules for market participants, not retail customers). So for a firm trying to argue they provide excellent price improvement and execution efficiency for their customers, they can't "just" send the orders to the lits.

And even if they could "just" do so, internal matching typically provides better price improvement on the NBBO than even the best execution you could get off the lits.

Edit: But yes TBC, you're correct that odd lot trades aren't unusual. But you're seeing trades there by actual market participants, not retail orders. They're not just trying to get those 2 shares, there's a broader strategy and they're aware of all the above nitty gritty.


Sadly, firms abuse odd lot rules to give people terrible prices: https://academiccommons.columbia.edu/doi/10.7916/2y01-1s13/d...

In example 3, the NBBO for stock ABC is 495--500, but there is also an odd lot offer for 497 on exchange. If a Robinhood customer sends a market buy order, then Citadel is allowed to fill it for 499.999 even though it's better to send to the exchange. (And if they then pick up the odd lot themselves, it's easy arbitrage.)

By the way, while you're correct about some of your claims, odd lot executions definitely have to occur within the NBBO. (How could it be otherwise?) Otherwise, in the example above, Citadel would give an even worse price!


I mostly mean scenarios where your limit order might not be marketable, end up resting on the book, and then get traded "through". I'm speaking from the perspective of an actual direct market participant, where you're not using a market order but are trying to enter a position while adding liquidity/collecting a rebate. (Most exchanges reward participants who have some % of their trades as liquidity-added, with rebate tiers).

Odd lots are excluded from the NBBO so that the NBBO can't be as easily influenced by quantities of shares that don't represent any material price signal. 1 share of practically anything but BRK class A represents ~nothing. Less than a round lot on a price level is basically no liquidity available at that level.


There are per-ticker rules to allow odd lots on most US markets. AFAIK unless you're trading penny stocks, every stock out there is eligible for odd lots, and most trades are indeed odd lots; that has been the case for at least 10 years.

Even if there wasn't, I guess at least half the trading on stocks is through CFDs and not cash, so lots aren't even a thing for most investors.


> the SEC should grow the courage

In the current political environment, I don't see SEC (or any other gov't agency) growing courage anytime soon. Well, other than DOGE acting like an energy vampire growing stronger off of its victims.


Yeah, the time to grow the courage to do the right thing was the last 4 years, but it seems no one could be bothered, so now we are here

Too bad they used the last 4 years mostly to do the absurdly stupid wrong thing.

These “folks” run their “Chicago” exchange out of NJ and this announcement has nothing to do with moving the matching engine.

> Honestly, just my stupid personal opinion, the SEC should grow the courage to ban low-latency based trading.

I think that's really just a matter of the media giving bad press to HFTs "because it's scary". The boring reality is that not many people care, and HFTs are really not that important in the grand scheme of things. We're talking about maybe 4-5 firms worldwide making single to low double-digit billions in P&L, from an activity that is most likely overall positive, or say net 0 if you're a bit cynical. Good for them.


> We're talking about maybe 4-5 firms worldwide making single to low double-digit billions in P&L

That is a fair amount of money being soaked up by a few firms. If low latency trading was banned real humans could compete for that money.


> That is a fair amount of money

Honestly, that's not even a peanut compared to what more typical finance institutions manage and earn.

Your typical institutional investor (pension fund, insurance company, fund of funds, bank, etc.) manages in the hundreds to thousands of billions. Each.

The whole HFT industry probably makes what a single institutional investor earns by buying US debt at 1%.

The HFT industry really is just a small microcosm; it just so happens that it triggers dreams and fantasies in the public mind.

> If low latency trading was banned real humans could compete for that money.

But that's what we had before, and was it better? I don't think having thousands of trader monkeys buying and selling while refreshing their price feeds or shouting in a pit is any better.

At the end of the day, as long as there are market inefficiencies, there will be arbitrageurs. I don't see the point of kicking out those arbitraging at 1µs to replace them with people arbitraging at 1s or 1min.


For a while at least, HFT was able to pay a lot of money to some really smart people to do some really weird high performance computing projects. Sure, to the industry the amount of money they had wasn't even a peanut, but to a normal person on the street it was still a lot of money, and even to the financial industry it was still worth (for a while) paying high salaries to the very best for that fraction of a peanut.

I worked for a NYSE Specialist floor broker back in the day.

HFT, in a weird way, democratized market making while lowering spreads.

Remember it wasn't that long ago that spreads were 10-100x as wide as they are today, PLUS transaction costs were $5/10/50 per trade.

HFT & payment for order flow is what has made stock trading the low fee environment it is today.


> HFT & payment for order flow is what has made stock trading the low fee environment it is today.

I get how payment for order flow would help enable the current low-upfront-fee trading system we have today; they're managing to get their money from places other than direct fees. I don't exactly get how HFT also makes it low cost. Could you further explain that? Is it that the people paying for the order flow are pretty much exclusively HFTs, and if they didn't exist the order flow market wouldn't exist?

Making up numbers here, if the HFTs manage to squeeze a dollar of profit out of the order flow data after buying my trade data for a dollar (two dollars of spread they manage to find), is that really better than me paying a dollar or two in fees for that order? It would be interesting to see the real values in question here on such things to actually gauge what is better for an average trader now trading in the low to zero fee trade market.


Spreads on bid/ask used to literally be at least 25 cents; they didn't even use decimals. Would you rather pay 1 guy in a funny jacket with a palm pilot in NY 25 cents, or a bunch of computer nerds sniping each other 1 penny?

Let's say you buy $10k of some $50 stock today and decide to sell tomorrow. In the old days you'd have paid say $10 to your broker to buy, and $10 again to sell. Your bid-ask spread, in isolation of any price changes in the stock, would be 25 cents per share x ($10k / $50 = 200 shares) = another $50 in spread. So your all-in transaction costs would have been $70.

Now you probably have a no-fee brokerage, and generally a penny spread. So the same formula is 1 cent per share x (200 shares) = $2 in spread + $0 in fees. So your all-in transaction costs would be $2.

$2 vs $70 on $10k round trip investment. 2bps vs 70bps.
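
Putting those made-up numbers into a tiny sketch, just to make the formula explicit (two per-trade fees plus the spread paid across the round trip):

    def round_trip_cost(notional, share_price, spread_per_share, fee_per_trade):
        """Cost of buying and later selling `notional` dollars of a stock:
        two broker fees plus the bid-ask spread paid on the round trip."""
        shares = notional / share_price
        return 2 * fee_per_trade + shares * spread_per_share

    old = round_trip_cost(10_000, 50, 0.25, 10)  # quarter spreads, $10 commissions
    new = round_trip_cost(10_000, 50, 0.01, 0)   # penny spread, no commission
    print(old, new)                              # 70.0 2.0
    print(old / 10_000 * 1e4, new / 10_000 * 1e4)  # 70.0 vs 2.0 bps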


> I don't exactly get how HFT also makes it low cost

Because most HFT firms are also market makers. You can see them basically as middlemen that are mandated by the exchange to provide liquidity on both the bid and the ask. These liquidity mandates reduce the spread for other traders, and in exchange market makers get lower, or even negative, fees (i.e. they are _paid_ to trade).

Usually, market makers use these rebates to earn money by taking on passive-order risk against aggressive orders from flow they bought.

Think of it that way:

You're an exchange, you want people to trade on your platform, that's how you earn money.

For people to trade on your platform, you need liquidity, actual shares to buy and sell. So you invite market makers on your platform, and sign a contract with them, along the lines of "you have 0 trading fee but in exchange you need to provide $X of liquidity on bid and ask at any time and ensure a spread <Ybps".

Market makers who accept to onboard now have to somehow make a living while providing liquidity, but this is a risky business, because they are basically market making for people who are _more_ informed than them (they face adverse selection by design), and they have to respect their mandate of providing liquidity. That is, if a stock goes down and people start selling it, the market maker still needs to provide liquidity for sellers and buyers, which means they may have to actually buy these shares that are tanking.

Usually the pure market making mandate is close to 0 profit, unless you spice it up with some other strategy. Taking passive order risk, netting order flow, maybe short term technical alpha, etc.

You can think of market makers and HFTs basically as the same people. If you trade at high frequency, you're playing on micro changes in price, and there's only so much a stock price can realistically move in 1s. HFT is only viable if you have very low, or no, transaction costs. That's why there's a natural overlap between HFTs and MMs.
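
A toy sketch of that economics, with invented numbers: the maker earns the half-spread plus an exchange rebate on each fill, but informed flow (the adverse selection mentioned above) moves the price through the quote on some of them, and the net ends up close to zero.

    import random

    def toy_mm_pnl(n_fills=100_000, half_spread=0.005, rebate=0.002,
                   informed_frac=0.5, adverse_move=0.012, seed=0):
        """Average P&L per filled share for a passive quote, under invented
        parameters: spread capture + maker rebate, minus losses on the
        fraction of fills coming from better-informed counterparties."""
        rng = random.Random(seed)
        pnl = 0.0
        for _ in range(n_fills):
            pnl += half_spread + rebate
            if rng.random() < informed_frac:
                pnl -= adverse_move  # price traded through our level
        return pnl / n_fills

    print(f"~{toy_mm_pnl():.4f} per share")  # hovers near zero, as described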


And as a refresher for why HFT exists - it's a side effect of RegNMS. RegNMS exists because the guys in funny jackets in NY with palm pilots were ripping people off.

There was no consolidated tape and no obligation for exchanges to route your order for Best Execution. There was no National Best Bid and Offer.

There was just whatever price the exchange your broker sent your order to filled you at.

Some exchanges cough NASDAQ cough used to do things like display sub-penny quotes even though they only filled at full penny increments. So they could attract flow by advertising prices they wouldn't fill you at. See Rule 612.


We had that system, it was dominated by expensive pit traders and the spreads (the main cost driver for most people) were gigantic.

This argument is precisely Luddite and a strange position for anyone on this particular forum.


Not sure why this got downvoted, and I'm not sure what else people think "humans competing for that money" would look like?

You're gonna need to physically collocate those people if you are trying to ban computers and latency based trading. Possibly in a "Pit" maybe in a building called an "Exchange" in places with a lot of financial services people like say NY or Chicago. Probably need to have some sort of membership/license requirement due to finite space. I dunno. Sounds like a novel concept that's never been tried.


Whenever this topic comes up I always wonder how many people commenting have ever heard of the odd eighths scandal.

Before the internet? Things are a bit different now. HFT monopolized the market maker/arbitrage money with millisecond executions that nobody can compete with.

You said "If low latency trading was banned real humans could compete for that money."

As soon as you re-introduce distance, latency becomes a factor again. How do you eliminate "low latency trading" and prioritize "real humans" without putting them in the same room?

What do you actually propose here?

One way to reduce the impact of latency is to do away with continuous trading and move to frequent but discrete auctions. But this would just increase volatility.

Imagine if every X minutes / hours stocks moved Y% like they do at market open, as all the information that was disseminated since the last auction was re-priced in.

If anything the long term trend has been towards longer continuous trading sessions to reduce those types of jumps.


I've wondered if introducing a small but omnipresent random delay in all trading requests might suffice. Something like 0-100 milliseconds. Just enough to moderate some of the advantage that physically co-located automated traders have, while not outright banning it.

Can you not use this information of it being closed for high frequency trading?

Also, I have pondered, to put it very simply: since markets fluctuate, why wouldn't it be profitable to constantly gamble small amounts around the level they most often cross (this is more complex, but you could simply draw a line as well), such that you can always afford to lose everything, and then sell every time the price goes over that line and buy when it's below, regardless of how much you would make?


If we model the stock market as a random walk/Wiener process and I got the math right, I think you're almost certainly going to make money (if you stop when you reach a given profit threshold), but you will have arbitrarily large drawdowns. This is similar in terms of the risk/return profile to the martingale betting system. https://en.wikipedia.org/wiki/Martingale_(betting_system)
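
A quick simulation sketch of that claim under the random-walk assumption (everything here is a toy: the "line" at zero, the Gaussian steps, the stopping rule):

    import random

    def run_once(profit_target=10.0, max_steps=200_000, seed=None):
        """Hold +1 below the line and -1 above it on a driftless random walk
        (a crude stand-in for a Wiener process); stop once cumulative P&L
        reaches profit_target. Returns (hit_target, worst_drawdown, steps)."""
        rng = random.Random(seed)
        price, line = 0.0, 0.0
        pnl, worst = 0.0, 0.0
        for step in range(1, max_steps + 1):
            position = 1 if price <= line else -1  # buy below, sell above
            move = rng.gauss(0.0, 1.0)
            price += move
            pnl += position * move
            worst = min(worst, pnl)
            if pnl >= profit_target:
                return True, worst, step
        return False, worst, max_steps

    results = [run_once(seed=s) for s in range(20)]
    hits = sum(r[0] for r in results)
    print(f"hit target: {hits}/20, worst drawdown: {min(r[1] for r in results):.1f}")

Most runs do hit the small profit target, but the worst drawdown along the way is typically many times larger than the target, which is the martingale-style risk profile mentioned above.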

Oh yeah, I think in my head I confused the size of the bet with the size of the range where the price most frequently crosses the line between sell and buy (which I proposed would be whatever level is most frequently crossed), but in normal stock markets the market never hits the bottom. Compare this (real markets) to a discrete signal in [0.00, 5.00] with the smallest change being 0.25, where the market frequently goes to zero or near zero, and we know it often jumps from [0.00, 1.75] to [2.25, 5.00]. In such a case you wouldn't lose as heavily. So markets can be roughly and informally modeled by the relationship between how much they "often" change vs. what the risk cost is, e.g. since stocks don't normally drop to zero constantly, what's the bottom part of the traditional market graph that stays the same? (It doesn't seem trivial to me how best to define what counts as "often" and what counts as the bottom part, since the bottom part may change over time even if in the real world it mostly remains constant, but I think the idea doesn't require knowing the exact algorithm to speak of values it often crosses.)

So what, then, is the exact information value of these candy bars where the stock has not changed value? What do they tell us? And moreover, are they consistently valued? Since the primary tail risk seems to be (probably, I am no expert) a market crash, one would expect the unchanging candy bar to relate to future performance, so that if we have a reasonable assumption of the market crash probability, then some pattern should emerge and things should make sense.

I believe it's trivial to formalize this point and honestly fruitless not to figure it out, but I will post this comment and perhaps return to it later. To me the primary point here is that the candy bar is what matters, and that if any markets like my [0.00, 5.00] market exist, my strategy would be profitable in those.

Moreover, I think in a trading strategy the idea that one wants to estimate how fast they can cross the threshold so as not to lose, to be able to "Martingale" as you put it, is valuable and Kelly-able.


For clarity, I don't consider anything mentioned even practically feasible or relevant.

I mentally tripped up by forgetting that if a stock normally costs 500 + [0,10] (where it fluctuates), then in order to participate, even without any fees, you must pay 500 + [0,10] and not just the [0,10].


It's historical.

The US always had the same currency, so shares are almost by design fungible between markets. Also the rise of Nasdaq made for a quick transition to broker-dealer (market maker driven) markets, which combined with fungible shares and the NBBO led to a centralisation of exchanges early on.

On the contrary, Europe had historically one market of primary listing per country/currency, and it took a long time to see the emergence of MTFs centralizing books in a single place. Don't be fooled though, the vast majority of European liquidity is now on CBOE (the leading MTF) and LSE (the leading primary market).


> So, are the exchanges connected?

More or less; in practice your US broker has to respect "best execution", meaning it cannot offer you a price worse than the "NBBO", which is the composite of all US stock markets.

In practice though, I expect most companies listed in NYSE Texas to be solely listed there.

> “normal NYSE” vs “NYSE Texas”

There are already _a lot_ of "NYSE" markets and segments. Some exist because of historical mergers (Amex, Arca), and some to offer alternate / cheaper listings (NYSE listing is very expensive vs say Nasdaq)

> What benefits would companies see from listing on NYSE Texas vs the other NYSE?

Unclear at the moment, but I expect a lower bar for listing requirements and a cheaper price. NYSE is by far the most expensive listing there is, but it's also very exhaustive on listing requirements (audits, etc). I guess there is also a political/economic deal with Texas at play, to incentivise companies to move there, list there, and grab some of NY's market share.


Would this mean that the trading hours for stock of companies listed in both NYSE branches would be extended by one hour due to time differences?

Well technically there are different trading hours, calendars and sessions for each market / segment of NYSE. In practice though I don't think you will find many companies listed on multiple subsegments of NYSE in 2025.

Roughly speaking, you will find the prestigious / big market cap / rich companies listed on NYSE main market, ETFs on NYSE Arca, small caps on NYSE American (ex Amex), and very small caps on NYSE National.

The listing requirements and prices, as well as the fee structure, also differ for each market. National has a fee structure to incentivize adding liquidity, for instance.

If the asynchronicity bothers you, imagine that you can also trade e.g. HSBC's secondary listing on NYSE during NY market hours + the ADR of HSBC HK on NYSE during NY hours + HSBC's primary listing on LSE during London hours + HSBC on HKSE during HK hours, for a lot of fun.


My variation is to use a custom script as `ProxyCommand` that resolves private route53 DNS names to instance ids, because remembering instance IDs is insane.
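
Not my exact script, but a minimal sketch of the idea in Python (the hosted zone ID, domain, and ssh config stanza are placeholders; it assumes the instances are reachable via SSM and the session-manager plugin is installed locally):

    #!/usr/bin/env python3
    """Hypothetical ProxyCommand helper: resolve a private Route53 name to an
    instance ID, then tunnel SSH over SSM. Used from ~/.ssh/config as:
        Host *.internal.example.com
            ProxyCommand proxy-ssm.py %h %p
    """
    import os
    import sys
    import boto3

    ZONE_ID = "Z0123456789EXAMPLE"  # placeholder private hosted zone

    def instance_id_for(hostname):
        r53 = boto3.client("route53")
        records = r53.list_resource_record_sets(
            HostedZoneId=ZONE_ID, StartRecordName=hostname,
            StartRecordType="A", MaxItems="1",
        )["ResourceRecordSets"]
        ip = records[0]["ResourceRecords"][0]["Value"]
        ec2 = boto3.client("ec2")
        reservations = ec2.describe_instances(
            Filters=[{"Name": "private-ip-address", "Values": [ip]}]
        )["Reservations"]
        return reservations[0]["Instances"][0]["InstanceId"]

    if __name__ == "__main__":
        host = sys.argv[1]
        port = sys.argv[2] if len(sys.argv) > 2 else "22"
        os.execvp("aws", [
            "aws", "ssm", "start-session",
            "--target", instance_id_for(host),
            "--document-name", "AWS-StartSSHSession",
            "--parameters", f"portNumber={port}",
        ])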

Mine is to run a Tailscale node on a tiny EC2 instance. It enables not only SSH but also direct access to database instances, S3 buckets that are blocked from public access, etc.

How are S3 buckets blocked from public access? I mean I know there is literally a “Block public access” feature that keeps S3 buckets from being read or written by unauthenticated users. But as far as I know without some really weird bucket ACLs you can still access S3 buckets if you have the IAM credentials.

Before anyone well actually’s me. Yes I know you can also route S3 via AWS internal network with VPC Endpoints between AWS services.


In general, condition keys

https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_p...

And https://docs.aws.amazon.com/service-authorization/latest/ref...

Specifically the VPCE one, as the other poster mentioned, but there are others, like IP limits

Another way is an IdP that supports network or device rules. For instance, with Cloudflare Access and Okta you can add policies where they'll only let you auth if you meet device or network requirements, which achieves the same thing


> Specifically the VPCE one, as the other poster mentioned, but there are others, like IP limits

IPs don't cut it to prevent public access. I can create my own personal AWS account, with the private IP I want, and use the credentials from there. There's really just VPC endpoints AFAIK.


You essentially add a policy that limits the access to only come from your VPC endpoint.
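
Something along the lines of this sketch (bucket name and endpoint ID are placeholders): a Deny on any request that doesn't arrive through the VPC endpoint. Be careful with policies like this, since they also lock out console access unless you carve out exceptions.

    import json
    import boto3

    BUCKET = "my-private-bucket"        # placeholder
    VPCE_ID = "vpce-0123456789abcdef0"  # placeholder VPC endpoint

    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyUnlessThroughVpcEndpoint",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [f"arn:aws:s3:::{BUCKET}", f"arn:aws:s3:::{BUCKET}/*"],
            "Condition": {"StringNotEquals": {"aws:SourceVpce": VPCE_ID}},
        }],
    }

    boto3.client("s3").put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))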

I run an EC2 instance with SSM enabled. I then use the AWS CLI to port forward into the 'private' database instance or whatever from my desktop. The nice thing about this is it's all native AWS stuff, no need for 3rd party packages, etc.
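
For reference, I believe this is the kind of invocation being described; a sketch with placeholder instance and database values, wrapped in Python (it can just as well be run straight from a shell), relying on the AWS-StartPortForwardingSessionToRemoteHost SSM document:

    import json
    import subprocess

    # Placeholders: an SSM-managed instance in the VPC, the private DB
    # endpoint it can reach, and the ports to map.
    subprocess.run([
        "aws", "ssm", "start-session",
        "--target", "i-0123456789abcdef0",
        "--document-name", "AWS-StartPortForwardingSessionToRemoteHost",
        "--parameters", json.dumps({
            "host": ["mydb.cluster-xyz.us-east-1.rds.amazonaws.com"],
            "portNumber": ["5432"],
            "localPortNumber": ["5432"],
        }),
    ], check=True)

While that runs, the database is reachable on localhost:5432 from the desktop.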

I'm being picky here, but I don't think you portray a fair view of Kuhn's epistemology.

Kuhn does not define a value scale between the two methods; on the contrary, he merely introduces the concept of different kinds of research: one being critical (developing new paradigms) and one being cumulative (further refining existing paradigms).

He also hints at the almost inevitably organic interaction between the two, such that critical research naturally evolves from a pragmatic need to express things simply through a new paradigm when the old one becomes too clumsy for a use case.

This is what happened in your example as well. Copernicus (and later Galileo) did not invent heliocentrism out of the blue; the theory around it had existed since ancient Greece. It is even arguably the Renaissance, leading metaphysicists to revisit ancient texts, that spurred Copernicus to consider it. But ultimately the need for the new paradigm was pushed by the need to revisit the calendar, which was drifting, and the difficulty of doing so in a geocentric world, where you have to take planetary retrograde motion into account.


Most non-trivial programs I've seen in Rust just end up with Arc<> and copies all over the place, because getting lifetimes right becomes borderline insanity.

It totally depends on the program. Sometimes you need to use Arc, sometimes you don't. Arc doesn't involve any runtime asserts though and it's not a crime to use it.

For example, I just checked ripgrep. Arc is used 29 times, compared to 185 uses of Vec.


> I can't imagine any new systems programming languages gaining traction that aren't at least as safe as Rust.

I don't think safety is a 1 dimensional spectrum (say from 0% to 100%). I view it more as a multidimensional thing, and tradeoffs between these dimensions.

Rust invested a lot in _some dimension_ of safety, which is essentially static borrowing and aliasing. And the tradeoffs for this are numerous: Dynamic safety is either much harder in Rust (due to unsafe being vastly more dangerous and hard to get right vs say, a C program), or at the expense of performance (i.e. just Arc<Mutex<>> or copy everything). That also comes at the expense of an immense cognitive burden for the programmer.

I see a lot of space for different tradeoffs on different dimensions of safety.


> due to unsafe being vastly more dangerous and hard to get right vs say, a C program

That's not true. C (clang), C++, and Rust all use the same LLVM compiler, so they have the same constraints in unsafe mode. C/C++ developers just don't care.


I don't think this is true. Rust has constraints that C/C++ doesn't have. For instance, it's undefined behavior to create more than one exclusive (mutable) reference to the same object or to create one where a shared reference already exists. This is not necessarily easy to ensure.

The aliasing rules in C are much more lax: the only restriction is that you can't have several pointers of different types pointing to the same object, unless one of them is a character pointer (ignoring the restrict keyword, which is opt-in and rarely used).


I don't think this is quite the same comparison. In Rust, multiple mutable pointers to the same object can exist at the same time. So, it's similar to C in this way. It is mutable references that must be exclusive.

It's beside the point whether C pointers are more similar to Rust pointers or references. It's even true that pointers BY THEMSELVES have fewer constraints in Rust than in C. It's in the interaction between pointers and references that it's very easy to trigger undefined behavior in Rust.

Besides the fact I already mentioned about the dangers of casting pointers to references, there's also the problem that pointers are only valid as long as no operations are done with references to the same object (no interleaving). On top of it, the autoborrowing rules make it so it's not always clear when a reference is being taken (and operated upon).

So yes, in my opinion _unsafe_ Rust is significantly more difficult to get right than C.


But only Rust uses its equivalent to C "restrict" wrt. its shared and mutable references (unless dealing with UnsafeCell<>) for alias analysis optimization. (In fact, this used to trigger bugs in LLVM because it was hard to test its aliasing optimizations with purely C code.)

It's not just restrict. Rust indeed has much stricter rules that at first glance seem similar to C's, but the optimizer is allowed to do weird things if you violate rules that are a lot harder to understand. This has been very well documented.

For example, if I recall correctly it’s technically UB to cast to a *mut pointer unless the original variable was declared mut even if you have exclusive ownership over that object.

There was a fantastic blog post describing a lot of these nuances but I can’t find it.


Rust has stricter rules than C++. E.g. global mutable variables are way more dangerous in Rust than in C++ - just creating a reference to one can be immediate undefined behaviour.

In Rust, use UnsafeCell.

In C++, multiple references to the same variable may cause a «use after free» bug in its destructor.


> I don't think safety is a 1 dimensional spectrum (say from 0% to 100%). I view it more as a multidimensional thing, and tradeoffs between these dimensions.

Absolutely. This is evident in the different language features of Rust and Ada. At any rate, Rust has advanced the state of the art when it comes to memory safety. The next generation of programming languages are inevitably going to be compared against Rust. In the same way Rust has been compared against Ada. Especially given Rust's popularity. I imagine that if a next-generation 'systems language' can't at least match Rust's features, it won't achieve wider adoption.


Agree with you 100%, I did the same simulations and found the same result.

I would suggest a step beyond though, because rebalancing your portfolio is fun in years 1-5, but not so fun in years 5-20: have a look at e.g. Vanguard target retirement funds.

Essentially, it's an ETF with a rebalancing rule included for a specific target date. For instance if you buy the target 2050 fund (your hypothetical retirement year), the ETF rebalances itself between bonds/money market funds/stocks until it reaches that date, at which point it's pretty much all cash in 2050.

Lowest hassle diversified retirement scheme I found.


Nope not all cash, it goes down to around 50% stocks at the target date (and actually continues to get slightly more conservative after). Just look at the current portfolio of the Vanguard 2025 fund: https://investor.vanguard.com/investment-products/mutual-fun...


You still need to be invested in equities at retirement otherwise inflation just eats away at the value of the cash


That’s why these target funds go down to ~50% stocks [edit: at the target date] not 0.


At least at Vanguard the final stage of target date funds is ~30% stocks: https://investor.vanguard.com/investment-products/mutual-fun...


That fund is currently at 50% stocks. It does get more conservative as you get past the target date, but I was mainly referring to the stock percentage at target date, which the parent poster implied was almost zero.


My target date 2050 funds have performed 50% less than my S&P 500 and like 30/40% less than my total stock market fund.


You mean VFIFX? What a disaster. My retirement plan put me in that until I realized investment advice for young people is a tax on the inexperienced and vulnerable. VFFSX (S&P 500) does 2x better returns every time. I feel guilty saying it on Hacker News. Like pension funds I bet Vanguard is one of these so-called LPs who give money to VCs like Y Combinator to help ivy league kids follow their dreams. Without these heroes I'm not sure there'd be a startup economy. I just don't want to be the one who pays for it. I think the future Wall-E predicted with Buy N' Large is probably closer to the truth.


> VFFSX (S&P 500) does 2x better returns every time

US large cap has certainly recently outperformed the other parts of the target date fund (international stocks, bonds). But there is certainly no guarantee that it will happen "every time". US equity has been the best overall performing asset class for the past decade, but in 7 out of those 10 years at least one other category outperformed it: https://www.blackrock.com/corporate/insights/blackrock-inves...

> Like pension funds I bet Vanguard is one of these so-called LPs who give money to VCs like Y Combinator to help ivy league kids follow their dreams.

You can look up the holdings of VFIFX or any other Vanguard fund. There is no private equity or private credit.


And now GLD with its 1 year return at 40% is outperforming them both, which is a really scary thought. How bad do things have to be, that all our blood, sweat and tears scurrying off to work each morning earns less than a piece of metal dug out of the ground that sits around doing nothing? I thought inflation was supposed to be a tax on the poor, but even rich private equity, which gets ahead by sucking the blood out of Americans digging themselves out of the grave, can't save itself.


This is one of those things where again, one will have to weigh the costs of both alternatives. A rebalancing ETF usually has higher costs (management fees, but possibly also internal trading costs that show up as performance beneath benchmark index), but of course, manually rebalancing also has a cost – the cost of one's time and effort!


Vanguard's ETFs are really cheap. The retirement funds in question are like 0.24%, which is in the cheaper range for ETFs


There are scenarios where these target date funds are not good.

Rebalancing into bonds and money market funds is a form of insurance against catastrophic losses in equities. But if you have a sufficiently large account then catastrophic losses that affect your life are extremely rare; if they do occur they will likely affect your bond portfolio as well, and the expected loss vs 100% equities over 15-20 years is significant, something like 10x the value of the insurance you are buying.

If you want insurance for a large account then long-dated put options 20% out of the money are much cheaper.

