
For mechanical drives, the ratio between IOPS and capacity has been getting exponentially worse over time.[1] The number of random seeks available per unit of data is now so bad that in cloud hosting they consider a 128 KB read to be "one" operation for cost calculations. The capacity unit for a single I/O used to be 512 bytes!

This is why I came here to make the same comment you just did.

A modern 30 TB data centre drive has about 300 IOPS, so that's just one random seek per 100 GB per second! Ouch.

I don't get why manufacturers don't make drives with actuator arms in all four corners, and heads that can move independently on both sides of every platter.

There might be some vibration and cross-talk issues, but surely they can be overcome with modern digital servo control technology.

That would allow 4x the IOPS just from having four sets of arms, then 2x because there are arms above and below each platter, and then Nx where 'N' is the number of platters. Let's say there are 10, so that's 4x2x10 = 80x the IOPS for the same capacity!
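
As a sanity check, here's that arithmetic as a small TypeScript sketch; the drive figures (30 TB, ~300 IOPS, 10 platters) are just the round numbers from this thread, not from any spec sheet:

    // Random seeks available per unit of stored data on a hypothetical 30 TB, ~300 IOPS drive.
    const capacityBytes = 30e12;
    const iops = 300;
    const bytesPerSeekPerSecond = capacityBytes / iops; // 1e11 bytes, i.e. one seek per ~100 GB per second

    // The multi-actuator thought experiment: four arm assemblies, heads on both
    // sides of each platter, ten platters.
    const armAssemblies = 4;
    const platterSides = 2;
    const platters = 10;
    const iopsMultiplier = armAssemblies * platterSides * platters; // 4 x 2 x 10 = 80x

    console.log({ bytesPerSeekPerSecond, iopsMultiplier });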


Another thing I was always curious about is why swing-arm actuators have stuck around all these years, rather than moving to a single fixed rail with a strip of individually addressable read regions, and another rail, separate (and upstream) from this one, with write capability.

No more mechanical actuation at all, eliminating a huge amount of complicated precision machined componentry as well as the voice coil itself.

Mounting this fixed rail to multiple structural points inside the case means there would be zero possibility of a head crash.

No cross interference or mode switching between reading and writing. The entire region passing under the rail can be scanned simultaneously, or if it's not possible to manufacture such a sensor at a data density matching the drive platter, then perhaps the rail could shift a millimetre or so back and forth to allow micron-fine positioning from a millimetre-coarse array of heads. Much less mass to move, which likely simplifies the math needed to meet the data in flight, and reduces the total reciprocating moment to a single linear-axis solution.

I know I can't be the only person, nor the first, to have considered this idea. I suspect multiple variations may be lurking in the patent portfolios of the big storage manufacturers. But I would love to know why nothing resembling it has ever been tried in production over the 30 or 40 years that hard drives have been mass commoditized.


> perhaps the rail could shift a millimetre or so back and forth to allow micron-fine positioning from a millimetre-coarse array of heads.

You're talking in terms of millimeters and microns. That is roughly the scale that CDs operated at.

According to https://blog.stuffedcow.net/2019/09/hard-disk-geometry-micro...

> In the newest drive I have, average track pitch is 80 nm and an average bit is 17 nm in length.

That's for hard drives providing 1TB per (double-sided) platter, less than half the density of today's most advanced hard drives.


A relative of mine designed ink-jet printer heads as wide as the paper.

Something similar could work with hard drives: an array of about 3 to 4 thin strips of silicon chips with a line of read/write heads.


Another way to look at it is that the ratio between capacity and physical IOPS constraints has gotten exponentially better over time. It's just that you can't treat all that capacity as hot capacity anymore.

I recall a whitepaper on HP high-end storage arrays from over 10 years ago; even then, their arrays read and cached at least 256 kB of data, even if you only wanted a 4 kB block.


Is this just a response to the narrowing of the use cases for mechanical drives, so they specialize in long-term, infrequent-access bulk storage?

The "who" was William R. Lucas.

There was a recent Netflix documentary where they interviewed him. He was the NASA manager that made the final call.

On video, he flatly stated that he would make the same decision again and had no regrets: https://www.syfy.com/syfy-wire/netflix-challenger-final-flig...

I have never seen anyone who is more obviously a psychopath than this guy.

You know that theory that people like that gravitate towards management positions? Yeah... it's this guy. Literally him. Happy to send people into the meat grinder for "progress", even though no actual scientific progress of any import was planned for the Challenger mission. It was mostly a publicity stunt!


Maybe he did it because he knew the shuttle was garbage (the absurd design was Air Force political BS) and he wanted NASA to stop using it.

Much more realistically:

Individual A reports a unique or rare problem. Everyone knows it is reported by person A.

Nothing is done.

Person A reports the problem "anonymously" to some third party, which raises a stink about the problem.

Now everyone knows that person A reported the problem to the third party.

This is why I (almost) never blow the whistle. It's an automatic career-ending move, and any protections are make-believe at best.


Then Person A needs to haul their butt to the Defense Service Office, call their Member of Congress, and tell the "anonymous" hotline that they've been retaliated against.

I'm not pretending this is some magic ticket to puppy-rainbow-fairy land where retaliation never occurs, but ultimately, how much do you care about your shipmates? I once had a CPO as one of my direct reports who was committing major misconduct and threatening my shop with retaliation if they reported it. I could have helped crush the bastard if someone had come forward to me, but no one ever did until I'd turned over the division to someone else, after which it blew up. Sure, he eventually got found out, but still. He was a great con artist and he pulled the wool over my eyes, but all I'd have needed was one person cluing me in to that snake.

Speaking from the senior officer level, we're not all some cabal trying to sweep shit under the rug. And the IGs, as much as they're feared, aren't out to nail people to the wall who haven't legitimately done bad things. I'm sorry you've had the experience you've had, but that doesn't mean that everyone above you was some big blue wall willing to protect folks who've done wrong.


Heck, you're in the ship too. I'll take all the retaliation if I get to keep breathing. If they wanna kick me out over saving my own skin, fine. Saves me from deserting.

The US Navy has over 300k active-duty personnel. I suppose it's easier to just go somewhere else where no-one knows who you are.

The person ignoring their subordinate’s reports to protect their own next promotion has entered the chat.

The author mentioned 10^20 combinations taking millions of years, but a modern server GPU can put out 10^15 computations per second (about a petaflop), assuming you use it in FP16 mode. Keep in mind that 65K divisions of a box one meter per side is about 15 micrometers! This means that, roughly speaking, it would be possible to brute-force through all possibilities in about 10^5 seconds, which is just over a day. It helps that this type of algorithm is almost all computation with very little data transfer to and from main memory, and is "embarrassingly parallel".
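
Written out as a back-of-the-envelope TypeScript sketch, with the optimistic assumption of roughly one FP16 operation per candidate combination:

    // Rough estimate only; the constants are the figures quoted in this comment.
    const combinations = 1e20;      // size of the search space mentioned by the author
    const opsPerSecond = 1e15;      // ~1 petaflop/s of FP16 from a modern server GPU
    const seconds = combinations / opsPerSecond; // 1e5 seconds
    const hours = seconds / 3600;                // ~28 hours, i.e. roughly a day
    const cellSize = 1 / 65536;                  // ~15 micrometers per division of a 1 m box
    console.log({ seconds, hours, cellSize });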

Some light optimisation such as utilising symmetries to reduce work, combined with multiple GPUs in parallel could bring this down to hours or even minutes.

It would be a fun thing to cloud-host, spinning up spot-priced GPUs on demand.

Similar brute-force tricks can be applied to other NP-hard problems such as optimising electronic circuit board wiring, factory floor planning, etc...

The ridiculous amount of compute we have available to us in a GPU reminds me of this quote from Stargate Atlantis:

JEANNIE: The energy you'd need would be enormous to the point of absurd.

McKAY: Absurd we can do. We have something called a Zero Point Module which essentially does what we're attempting on a smaller scale -- extract energy from subspace time.


The program taking millions of years is hyperbole, but spinning up a GPU cluster for this can't possibly be an effective use of time.

That's your intuitive take on it, because it feels wasteful somehow, but it boils down to simple dollar terms. If for a few hundred dollars' worth of compute you can make a box smaller, that might save the company hundreds of thousands of dollars over years.

This seems very unlikely, considering how people would have different things they want packed.

Doubles are way overkill. Using something like 16 bit integers per channel is adequate even for HDR.

I've heard of a philosophy that instead of a million unique tags that do mostly nothing, modern HTML should use only <div> and <span>.

IMO it's fair to say to colleagues that you're free to use any HTML tags as long as you take responsibility for them by knowing all of their idiosyncrasies.

Then it wouldn't be HTML.

It'd be a weird structural language in which you could really only express whether something is a block or inline.


Yep. <div>, <span>, and JavaScript. If you actually want a simpler web stack, that's the way to do it.

Unfortunately, you can't draw a table of numbers using just divs. It's just too slow and your users will notice. For tables of data, you have to use tables.

I'm sure it's true for lots of other elements.


You can, you just have to set their "display" properties [0] to "table", "table-row", etc. The whole point of CSS is to divorce styling from the particular HTML tags.

[0] https://developer.mozilla.org/en-US/docs/Web/CSS/display
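
For illustration, a minimal TypeScript/DOM sketch of that approach, laying plain divs out as a table purely through CSS display values (the cell contents are placeholders):

    // Build a 2x2 "table" out of divs using display: table / table-row / table-cell.
    const table = document.createElement('div');
    table.style.display = 'table';
    for (const rowData of [['1', '2'], ['3', '4']]) {
      const row = document.createElement('div');
      row.style.display = 'table-row';
      for (const value of rowData) {
        const cell = document.createElement('div');
        cell.style.display = 'table-cell';
        cell.textContent = value;
        row.appendChild(cell);
      }
      table.appendChild(row);
    }
    document.body.appendChild(table);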


If you're going that minimalist, then why even have two tags (div, span) in your toolset, when their only difference is display:block vs display:inline?



It’s not at all silly. There are some nice visualisations[1] of GR on YouTube that look like space is being swallowed up by matter.

A toy model I like to use in my mind is that matter absorbs spacetime. It is literally sucked in!

A possible extension of this model is that the tension introduced in the vacuum causes it to stretch out. That could potentially explain the non-r^2 terms in galactic rotation curves.

[1] https://youtu.be/DYq774z4dws?si=6vDWZ8jPzgjxSBb1


They stopped paying their outsourced SIM card burning and shipping vendor. This isn’t a state tracked in their support database, which dutifully queued up requests as if everything was fine. Eventually they paid their vendor and the requests got popped off the queue.

That sounds pretty believable. I would imagine somebody in customer support would have noticed it sooner than a month and a half if all of the customers weren’t getting sim cards.

Based on my experience with corporate accounting, this is all too likely.

Or they were out of stock of a component, but same idea.

for SIM cards? that's like running out of dirt.

Sure, but any supply that is consumed can be exhausted locally, including dirt.

Maybe the restock was in the .0006% of containers that fall off ships yearly! Who knows.

I’m not even arguing that this is what happened here; just that a lifetime in ops and logistics has taught me there is a steady, non-zero failure rate.


> the post is about them personally.

There is a decent chance that, yes, this rant is quite literally aimed at the people that frequent Hacker News. Where else are you going to find a more concentrated bunch of people peddling AI hype, creating AI startups, and generally over-selling their capabilities than here?


on LinkedIn, for starters
