Bandwidth and processing are substantial bottlenecks with SAR; only targeted and stationary applications have been broadly useful so far, and more focus has gone into planes than satellites for this. SAR is not as simple as taking a static image at a fixed resolution: the sensing window is designed around a target velocity and distance, and the antenna and processing need to be tuned for that.
I would think that medium and high orbit optical tracking (daytime, cloudless sky) is probably used, because with video you can reasonably track subpixel targets if they're high contrast, without a lot of data transmission requirements.
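To illustrate what I mean by tracking subpixel targets: a minimal sketch of an intensity-weighted centroid, which gives a fractional pixel position for a bright blob against a dark background. This is just one assumed approach, not any particular system's pipeline; the frame and threshold are made up.

```python
import numpy as np

def subpixel_centroid(frame: np.ndarray, threshold: float) -> tuple[float, float]:
    """Estimate the (row, col) position of a bright target to subpixel
    precision via an intensity-weighted centroid. Assumes the target is the
    only region brighter than `threshold` (e.g. a sunlit object against sky)."""
    mask = frame > threshold
    if not mask.any():
        raise ValueError("no pixels above threshold; target not detected")
    weights = np.where(mask, frame, 0.0).astype(float)
    rows, cols = np.indices(frame.shape)
    total = weights.sum()
    return (rows * weights).sum() / total, (cols * weights).sum() / total

# A dim 2x2 target straddling pixel boundaries still yields a fractional
# position, so only the track (not the full video) needs downlinking.
frame = np.zeros((64, 64))
frame[30:32, 40:42] = [[0.9, 0.3], [0.6, 0.2]]
print(subpixel_centroid(frame, threshold=0.1))   # ~(30.4, 40.25)
```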
> Bandwidth and processing are substantial bottlenecks with SAR; only targeted and stationary applications have been broadly useful so far, and more focus has gone into planes than satellites for this.
I'm not sure why you assume this; it's factually incorrect. Satellite-based SAR has been used successfully for civilian ship detection (traffic management, illegal fishing, smuggling detection, etc.) for over three decades. I'm sure its military use goes back much further.
> SAR is not as simple as taking a static image at a fixed resolution: the sensing window is designed around a target velocity and distance, and the antenna and processing need to be tuned for that.
No? SAR satellites take thousands of images of stationary scenes every day. It's true that object motion in the scene introduces artifacts, specifically displacement from true position. This is often called the "train off the track" phenomenon: a train moving at speed, viewed with SAR from the right angle, looks like it's driving through the adjacent field rather than on the track. However, this isn't a significant problem, and it can actually be useful in some situations (e.g. looking at how far a ship is displaced from its wake to estimate its speed).
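For a back-of-the-envelope feel for the wake trick: the first-order azimuth displacement of a moving target is roughly range times radial velocity over platform velocity, so the measured ship-to-wake offset can be inverted to estimate line-of-sight speed. All numbers below are illustrative assumptions, not real mission parameters.

```python
# Azimuth shift of a moving target in SAR (first order):
#   dx  ~=  R * v_radial / V_platform
# so the ship-to-wake offset gives an estimate of line-of-sight ship speed.

def radial_speed_from_offset(offset_m: float, slant_range_m: float,
                             platform_speed_ms: float) -> float:
    """Invert the first-order azimuth-displacement relation."""
    return offset_m * platform_speed_ms / slant_range_m

v_sat = 7500.0        # platform speed, m/s (ballpark LEO SAR)
slant_range = 850e3   # slant range to the scene, m (assumed)
offset = 900.0        # measured ship-to-wake azimuth offset, m (assumed)

v_r = radial_speed_from_offset(offset, slant_range, v_sat)
print(f"estimated radial ship speed: {v_r:.1f} m/s ({v_r * 1.944:.1f} kn)")
# ~7.9 m/s, i.e. roughly 15 kn toward/away from the radar
```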
40 years ago the USN was working on using SAR with an elliptical Kalman filter to detect _submarine_ wakes. I assume things haven't regressed since then.
There are enormous piles of money lurking around every corner seeking a return on investment. If you have users who are enjoying a service, one of those piles of money can buy out the owner, double the price, implement ads, and sell all the private data. The bet they are making is that it will take longer for the userbase to quit than it will take to make back their investment.
Every popular / beloved service is a target for these giant piles of cash. The fact that lots of people like it is de facto proof that it's underpriced, or over-resourced, or coddles its users with too much content. According to the finance industry, a stable business relationship should have the userbase reluctantly concluding that they have no other option, gritting their teeth and opening their wallet - and that's the sort of maximally profitable entity that a giant pile of cash will leave alone, letting it just exist, as a business.
It's a matter of market size and the inherent goal of the market; the factors are implicitly at odds.
A small market is not at all efficient - it's unlikely to incorporate available information to attain accuracy.
A large market invites manipulation of the event itself - it's auto-corrupting. If ball players make $1M/year and there are easy opportunities to throw a game and make $30M anonymously, then you can expect that the game you're seeing isn't legit.
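To make that incentive mismatch concrete, a toy expected-value comparison. Every number here, including the catch probability and career length, is an illustrative assumption, not a real estimate.

```python
# Toy expected-value comparison: play straight vs. throw a game.
honest_salary = 1_000_000       # $/year
career_years = 10
bribe = 30_000_000              # one-time anonymous payout (assumed)
p_caught = 0.5                  # assumed chance of being caught and losing everything

ev_honest = honest_salary * career_years
ev_throw = (1 - p_caught) * (bribe + ev_honest)   # keep the career only if not caught

print(f"honest career: ${ev_honest:,.0f}")        # $10,000,000
print(f"throw a game:  ${ev_throw:,.0f} expected")  # $20,000,000
# Even with a coin-flip chance of total ruin, the bribe dominates by ~2x,
# which is the point about large markets corrupting the event itself.
```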
Arguably (go watch the show 'Billions') this is a big part of how the stock market works as well - insider information is rampant and overwhelming in profitability, and if you believe POSIWID ( https://en.wikipedia.org/wiki/The_purpose_of_a_system_is_wha... ) ... you're probably not doing a whole lot of trading as a personal investor based on publicly available information.
Write endurance of a drive is measured in TBW, and TLC flash kept adding enough 3D layers, quickly enough, to stay cheap enough that Optane never really beat flash's pricing per TBW by enough to make a practical product.
I have to wonder if it isn't usable for some kind of specialized AI workflow that would benefit from extremely low-latency reads but isn't written to often, at this point. Perhaps integrated on a GPU board.
Optane's practical TBW endurance is way higher than that of even TLC flash, never mind the QLC or PLC now common in consumer NAND hardware. It even seems to go way beyond what's stated on the spec sheet. However, while Optane excels at write-heavy workloads (not read-heavy ones, where NAND actually performs very well), those workloads are also power-hungry, which is a limitation for modern AI workflows.
You're conflating two things. Yes, Optane would survive more writes. But it wouldn't survive more TBW/$, because much larger flash drives were available cheaper. Double the size of the drive using identical technology, and you double TBW ratings.
This was very clearly not true at the time for the actual implied TBW figures of even a tiny Optane drive, and it's not even true today once you account for the much lower DWPD of TLC/QLC media. Do the math: $1/GB vs $0.10/GB, while the actual difference in DWPD per the spec sheets is more like two or three orders of magnitude, and the real-world practical difference is quite possibly larger. (Has anyone even seen Optane fail in the wild due to media wear-out? This happens all the time with NAND.)
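Here's a rough sketch of that TBW-per-dollar arithmetic. All capacities, prices, warranty periods, and DWPD ratings below are ballpark assumptions for illustration, not exact spec-sheet values; plug in your own numbers.

```python
# Rough TBW-per-dollar comparison; all inputs are ballpark assumptions.

def tbw_per_dollar(capacity_tb, dwpd, warranty_years, price_per_gb):
    tbw = capacity_tb * dwpd * 365 * warranty_years      # rated terabytes written
    cost = capacity_tb * 1000 * price_per_gb             # drive price in $
    return tbw / cost

# Optane-class drive: small, expensive, very high DWPD (conservatively assumed)
optane = tbw_per_dollar(capacity_tb=0.4, dwpd=60, warranty_years=5, price_per_gb=1.00)
# TLC-class drive: large, cheap, modest DWPD
tlc = tbw_per_dollar(capacity_tb=4.0, dwpd=1, warranty_years=5, price_per_gb=0.10)

print(f"Optane-class: {optane:.1f} TBW per $")   # ~110
print(f"TLC-class:    {tlc:.1f} TBW per $")      # ~18
# With a ~10x price gap but a ~60x DWPD gap, the Optane drive still comes out
# several times ahead per dollar of rated writes.
```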
Yes it would, by an almost arbitrarily large margin. You can test this out for yourself: overwrite one of each in an endless loop, and whenever the flash-based drive fails, replace it and continue. See how long it takes for the Optane to fail.
You should be able to kill a typical consumer flash drive in well under a week. Even high end enterprise gear will be dead within a couple of months.
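For what it's worth, a minimal sketch of that torture test might look like the following. The device path is a placeholder for a disposable test drive (this destroys everything on it), and a real test would more likely use fio with direct I/O plus SMART monitoring.

```python
# Overwrite an entire throwaway block device in a loop and count full passes
# until it stops accepting writes. /dev/sdX is a placeholder. Sketch only.
import os, time

DEVICE = "/dev/sdX"          # placeholder: a disposable test drive
CHUNK = 4 * 1024 * 1024      # 4 MiB per write

def overwrite_once(path: str) -> int:
    """Sequentially overwrite the whole device once; return bytes written."""
    written = 0
    buf = os.urandom(CHUNK)
    fd = os.open(path, os.O_WRONLY)
    try:
        while True:
            try:
                n = os.write(fd, buf)
            except OSError:          # ENOSPC at end of device, or EIO when worn out
                break
            if n == 0:
                break
            written += n
        try:
            os.fsync(fd)
        except OSError:
            pass
    finally:
        os.close(fd)
    return written

passes, start = 0, time.time()
while True:
    total = overwrite_once(DEVICE)
    if total == 0:                   # drive no longer accepts any writes
        print(f"drive stopped accepting writes after {passes} full passes")
        break
    passes += 1
    print(f"pass {passes}: {total / 1e12:.2f} TB written, "
          f"{(time.time() - start) / 3600:.1f} h elapsed")
```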
The extra capacity of modern SSDs is a good point, especially now that we have 100TB+ SSDs.
But Optane still offered 100 DWPD (drive writes per day) at up to 3.2TB. That's still just so many more drive writes than flash SSDs offer: a Kioxia CM8V, for example, will do 12TB at 3 DWPD. The net TBW is still about 10x apart.
You can get back to high endurance with SLC drives like the Solidigm D7-P5810, but you're back down to 1.6TB and 50 DWPD - so 1/4 the endurance of the Intel P5800X, and worse latencies. I strongly suspect the model number is an homage, and despite the P5810 being much newer and very expensive, the original is still so much better in so many ways.
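The arithmetic behind that comparison, using the figures quoted above (warranty periods assumed equal across drives, so they cancel out of the comparison):

```python
# Daily rated write volume (capacity x DWPD) for the drives mentioned above.
drives = {
    "Intel Optane P5800X": (3.2, 100),   # (capacity TB, DWPD)
    "Kioxia CM8V":         (12.0, 3),
    "Solidigm D7-P5810":   (1.6, 50),
}

for name, (capacity_tb, dwpd) in drives.items():
    print(f"{name:22s} {capacity_tb * dwpd:6.0f} TB/day rated writes")

# P5800X: 320 TB/day vs CM8V: 36 TB/day -- roughly the 10x gap claimed above;
# the SLC P5810 lands at 80 TB/day, about a quarter of the Optane figure.
```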
https://www.solidigm.com/content/solidigm/us/en/products/dat...
You also end up paying for what I assume is a circa-six-figure drive if you substitute capacity you don't need for DWPD. There's something elegant about being able to keep reusing your cells, versus overbuying cells with the intent of ripping through them relatively quickly.
We don't care about TBW/TB at the consumer level; we care about TBW/$, and 3D TLC was far, far cheaper per TB - so much cheaper that TBW/$ was not a numerical advantage for Optane.
That left ONLY the near-RAM read latency, which is only highly beneficial for specific workloads. Then they didn't invest in growing killer-app software that could exploit that latency, and didn't drop prices enough to make it strongly competitive with big RAM disks.
In 2018, with Optane drives launching around $1.50/GB and TLC flash drives around $0.15/GB, flash wasn't actually that much cheaper once endurance is factored in. As far as I'm aware, Optane had a lot more than 10x the endurance.
If you have a broken system whose injustice is checked only by the limitations of its human elements, and you start replacing those human elements and power-scaling them, you have unlimited downside.
I don't think I would favor executions or anything.
But forcible dilution (partial or total seizure) of the corporation? A mandatory insurance coverage? Absolutely.
We already have statutory HIPAA violation penalties, and I am extremely in favor of assessing them in a breach. The question is whether they are sufficient.
https://www.youtube.com/watch?v=-GTpBMPjjFc is a good overview of what's up there so far, and what's coming as they really try to scale the technology.