Nasdaq, Inc. is a company that runs a stock market ("the NASDAQ") and an index ("the Nasdaq 100"). They want SpaceX to be listed on their market, because more listings are good for them for all the usual reasons. They are, apparently, offering to manipulate their index to win the listing.
Accordingly, anything that uses or tracks this particular index (Nasdaq 100), such as the QQQ fund, will potentially have to pay for this manipulation.
Anybody not holding or indexing to the Nasdaq 100 index contents will not particularly care and will not really gain or lose any more money than on an ordinary trading day. In particular, this will have zero effect on stocks that merely trade on the NASDAQ exchange.
Indexing to the Nasdaq 100 is pretty uncommon, outside of QQQ, so most people will not care.
What?! This absolutely affects more than Nasdaq 100 / QQQ.
The index is just a function of its constituent stocks; it only moves if those stocks move. Rebalancing the Nasdaq 100 will cause selling in the existing constituents that aren't SpaceX. And those stocks are held elsewhere too…
The Nasdaq 100 shares 79 of its 100 stocks with the S&P 500. So if those stocks move (probably down, because they're being sold so SpaceX can get bought), that's going to affect anyone exposed to those companies, whether directly or through other index ETFs. Many of which have a huge concentration in the Mag7 right now, for example.
What you're saying is 100% correct, I fail to see how people are not aware of it.
We're talking about a $1.75 trillion (per the article) company that is about to enter (a part of) the most important capital market in the world at a distorted price. Of course the market as a whole is going to become distorted. Money and capital (and the accompanying money and capital signals) are among the most "liquid" things in a modern economy, if not the most liquid. Once you start putting a wrong price tag on them, those signals will surely start doing their thing. IMO that was one of the main lessons we should have taken from what happened back in 2008-2009.
Sorry, a lot of the comments around this have been really badly written and it's been hard to tell what they're actually arguing.
I countered a different argument (which does appear elsewhere in this thread). You are absolutely right that there will be general price distortion from this mess. I disagree that it will be extremely bad, but I do agree that it's a problem and needs attention. It's just been difficult to tell that this is what some comments have meant to discuss, instead of the more basic issues others have been talking about.
Unfortunately a gaming machine workload is so read-heavy that I wouldn't expect Optane to compare well. Gaming is all about read speed and overall capacity. You need a heavy mixed I/O load, especially with low-latency deadlines, to see gains from Optane. That narrow target use case, coupled with ignorant benchmarking, always held Optane back.
Around the time of Optane's discontinuation, the rumor mill was saying that the real reason it got the axe was that it couldn't be shrunk any, so its costs would never go down. Does anyone know if that's true? I never heard anything solid, but it made a lot of sense given what we know about Optane's fab process.
And if no shrink was possible, is that because it was (a) possible but too hard; (b) known blocks to a die shrink; or (c) execs didn't want to pay to find out?
I think it was killed primarily because the DIMM version had a terrible programming API. There was no way to pin a cache line, update it and flush, so no existing database buffer pool algorithms were compatible with it. Some academic work tried to address this, but I don’t know of any products.
The SSD form factor wasn’t any faster at writes than NAND + capacitor-backed power loss protection. The read path was faster, but only in time to first byte. NAND had comparable / better throughput. I forget where the cutoff was, but I think it was less than 4-16KB, which are typical database read sizes.
So, the DIMMs were unprogrammable, and the SSDs had a “sometimes faster, but it depends” performance story.
No; the issue with the DIMMs wasn’t drivers. The issue was that the only people allowed to target the DIMMs directly were the xeon hardware team.
There was a startup doing good work with similar storage chips that were pin (BGA) compatible with standard memory. Not sure what happened to them. That’d be easier to program than xpoint.
As for the new PCIe standard (you probably mean CXL), that’s also basically dead on arrival. The CPU is the power and money bottleneck for the applications it targets, so they provide a synchronous hardware API that stalls the processor pipeline when accessing high-latency devices.
Contrast this to NVMe, which can be set up to either never block the CPU or amortize multiple I/Os per cache miss.
Companies like NVIDIA are already able to maintain massive I/O concurrency over PCIe without CXL, because they have a programming model (the GPU) that supports it. CXL might be a small win for that.
Interesting perspective re CXL synchronous API. Wouldn't things like OOO execution and speculation help with that? And anyway the latency is supposed to be comparable to NUMA latency, is that really such a deal breaker?
The DIMMs were their own shitshow and I don't know how they even made it as far as they did.
The SSDs were never going to be dominant at straight read or write workloads, but they were absolutely king of the hill at mixed workloads because, as you note, time to first byte was so low that they switched between read and write faster than anything short of DRAM. This was really, really useful for a lot of workloads, but benchmarkers rarely bothered to look at this corner... despite it being, say, the exact workload of an OS boot drive.
For years there was nothing that could touch them in that corner (OS drive, swap drive, etc) and to this day it's unclear if the best modern drives still can or can't compete.
That's at least physically half-plausible, but it would be a terrible reason if true. 3.5 in. format hard drives can't be shrunk any, and their costs are correspondingly high, but they still sell - newer versions of NVMe even provide support for them. Same for LTO tape cartridges. Perhaps they expected other persistent-memory technologies to ultimately do better, but we haven't really seen this.
Worth noting though that Optane is also power-hungry for writes compared to NAND. Even when it was current, people noticed this. It's a blocker for many otherwise-plausible use cases, especially re: modern large-scale AI where power is a key consideration.
You're looking at the entirely wrong kind of shrinking. Hard drives are still (gradually) improving storage density: the physical size of a byte on a platter does go down over time.
Optane's memory cells had little or no room for shrinking, and Optane lacked 3D NAND's ability to add more layers with only a small cost increase.
I don't think the shrink problem is at all the same for the two technologies. There are some really weird materials and production steps in Optane that are simply not present when making Flash cells.
The actual strength of Optane was on mixed workloads. It's hard to write a flash cell (read-erase-write cycle, higher program voltage, settling time, et cetera). Optane didn't have any of that baggage.
This showed up as amazing numbers on a 50%-read, 50%-write mix. Which, guess what, a lot of real workloads have, but benchmarks don't often cover well. This is why it's a great OS boot drive: there's so much cruddy logging going on (writes) at the same time as reads to actually load the OS. So Optane was king there.
It was also the best boot drive money could buy. Still is, I think, though other comments in the thread ask how it compares against today's best, which I'd also love to see.
This concept was very popular back in the days when computers booted from HDDs, but now it doesn't make much sense. I wouldn't notice if my laptop boots in 5 seconds instead of 10.
At the time of their introduction Optane drives were noticeably faster to boot your machine than even the fastest available Flash SSD. So in a workstation with multiple hard drives installed anyway, buying one to boot off of made decent sense.
If they had been cheaper, I think they'd have been really, really popular.
By my reckoning, there was zero overlap between the period of time where a reasonable computer configurer would pick a hard drive to boot from and the period of time where Optane was available.
And even for the general concept of a cache drive, I don't think I've ever seen it do well in the mainstream. Just a few niche roles, and some hybrid drives that sucked because they had small flash chips and only used them as a read cache, not a write cache.
Windows would do just fine. But the state of cheap Windows laptops is abysmal, and Windows as a product is in the doghouse lately because... well, I honestly don't know why Microsoft is doing what they're doing, but from the outside they certainly do appear to want to ruin Windows.
I've been a windows/linux/mac guy since forever (I do not care at all about the OS, I just care about getting shit done), and Windows is worse than in the XP and 7 days, but not by much. A caveat here is that I'm assuming windows people are savvy enough to know about massgrave, which remedies 90% of the shit experience: vendors filling up an otherwise acceptable OS with a bunch of garbage.
The only thing in Win11 user experience wise that absolutely drives me up a wall is the new right click menu forcing me to hold shift to get the usable menu instead of the "Win 11 is smart and this new menu UI is easier to use" menu.
Other than that, it feels like win 10 (and 7 for the most part) for anything else that matters (for a normal user).
All of that being said, yes, the experience of a naive consumer buying a windows laptop is awful, but not due to the OS itself, rather the amount of bloated useless shit vendors ship with the installed OS.
I've been called out more than once for using too much italics in my writing.
But the trick is I usually write like I would speak. This leads to italicizing any word or phrase I'd speak emphatically. (Which, yes, I've also been called out for doing a lot when I speak. So what; I've also been told I'm good at getting my point across. I'll take it!) In any text important enough to go through multiple revisions, or to be written from the start with multiple revisions in mind, this characteristic is diminished. But most text is more throwaway, just like most speech, so it gets left a little rough.
This also tends to feel pretty natural. If you read LLM-written text out loud, or the prose TFA is talking about, it... does not feel natural at all. So what I'm trying to say is: some level of emphasis is just fine. Don't overthink it.
I think it's getting more and more common, quickly. The obvious inference is that people are using LLMs a lot and starting to mimic them, consciously or unconsciously. (Probably the latter: if people have weak internal models of how to write well, being around a lot of LLM text can probably influence them pretty quickly.)
> Several commenters suggested the original essay was written by an LLM. They were half right. Both that essay and this one were written with Claude as a drafting partner. I directed the argument; the LLM helped with prose. I mention this not as confession but as demonstration: the human brought the utility function, the machine brought the compute. If that division of labour bothers you, I’d suggest the discomfort says more about the Bitter Lesson than about my writing process.
This paragraph is pretty condescending to your reader. Whatever else is going on with AI authors, the fact is that if your reader can tell you wrote a piece with AI (and I could with this one), you fucked up.
I think one of the longer-term consequences of AI authors will be that writing gets shorter. There's a lot of fluff in a lot of writing (though not as much as there used to be in say the 19th century), and much of it's culturally expected. We might end up at a place where writing is much shorter and readers expect their own AI assistants to fill in the gaps. That might not be so bad.
But if you can't write a piece without AI, do you understand what you've written? It could go either way. But the condescension here combined with the obvious tells do not make me think highly of this author and his argument.
We have no idea what "drafting partner" means in that case. Maybe the person isn't a native English speaker or is for whatever other reason insecure about their prose? It would be sad if they couldn't make their argument because of that.
I honestly don't like the style of the essay either - maybe reading HN now trains one to view every "It's not X, it's Y" with suspicion. But as long as it's only the style, and the author didn't get the entire argument from AI, I think it's worth skipping over it and focusing on what they want to say.
(That's the difference I see from AI slop: with slop, there is no message to parse out, because everything is generated. If the author here really only used AI to clean up their prose, I'm fine with it.)
You should never assume the compiler is allowed to reorder floating-point computations like it does with integers. Integer math is exact, within its domain. Floating-point math is not. The IEEE-754 standard knows this, and the compiler knows this.
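A quick sketch of why reassociation is off-limits for floats. This is a Python illustration (CPython floats are IEEE-754 doubles), using the classic 0.1/0.2/0.3 example rather than anything from the thread:

```python
# IEEE-754 double addition is not associative, so a compiler that
# silently regrouped float math would change results.
a, b, c = 0.1, 0.2, 0.3

left = (a + b) + c   # 0.6000000000000001
right = a + (b + c)  # 0.6

print(left == right)  # False: the two groupings round differently
```

The same holds for multiplication, which is why the compiler must evaluate a chain of `*` in source order unless you explicitly opt in to relaxed semantics (e.g. `-ffast-math` in GCC/Clang).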
Ah, fair point, it has been a while since I've needed fast inexact math.
Though... they are allowed to cache common subexpressions, and my point about dependency chains is quite relevant on modern hardware. So x*x, x*x*x, etc may each be computed once. And since arithmetic operators are left-to-right associative, the rather ugly code, as written, is fast and not as wasteful as it appears.
> And since arithmetic operators are left-to-right associative, the rather ugly code, as written, is fast and not as wasteful as it appears.
This is incorrect, for exactly the reason you are citing: A * x * x * x * x = (((A * x) * x) * x) * x, which means that (x * x) is nowhere to be seen in the expression and cannot be factored out. Now, if you wrote x * x * x * x * A instead, _then_ the compiler could have done partial CSE against the term with B, although still not as much as you'd like.
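To make the point concrete, here is a Python sketch (coefficients A, B, C are made up for illustration) showing the two ways of writing a polynomial term. In the naive form, left-to-right grouping means no `x * x` subexpression ever exists for the compiler to share; in the hoisted form, the programmer makes the sharing explicit, so no reassociation is needed:

```python
# Hypothetical coefficients, chosen only for illustration.
A, B, C = 2.0, 3.0, 5.0

def poly_naive(x):
    # Grouping is (((A*x)*x)*x)*x + ((B*x)*x)*x + C:
    # no (x*x) subterm appears, so CSE cannot fire.
    return A * x * x * x * x + B * x * x * x + C

def poly_hoisted(x):
    # Powers factored once by hand; the sharing is now in the source.
    # Note: this grouping may round differently from the naive form,
    # which is exactly why a strict compiler won't do it for you.
    x2 = x * x
    x4 = x2 * x2
    return A * x4 + B * (x2 * x) + C

print(poly_naive(2.0), poly_hoisted(2.0))  # both 61.0 (exact for x = 2.0)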