Hacker News | yelling_cat's comments

I'd always assumed my cat drools when she gets a good knead on simply because she's purring so hard, but it makes sense that it's related to nursing behavior.


That was the killer BBEdit feature for me, back when I was building my first static web sites. Making precise global layout changes without using a web template system felt like magic.


For six months Siri interpreted my requests to call anyone as "Call Paul." It still might, but at that point I gave up and changed Paul's name in Contacts to stop accidentally bothering the poor guy.


What snares a lot of well-meaning music fans into wasting money on audio woo is that the first upgrades can be mind-blowing. Let's stick with headphones for simplicity. Pair, say, a $400 set with a well-mastered album you've listened to hundreds of times and you'll discover sounds in it you've never noticed before - the singer's breathing, whispered background vocals, a guitar string's squeak. It even feels like each instrument is playing from a physically distinct place! If the headphones are wired you might next add a $90 DAC/amp combo and - wow, you really feel the bass and sub-bass now! So what will take you to the next level?

The honest answer is that absolutely nothing will provide an improvement like the one you've just experienced. Tripling your headphone budget gets you a number of beautiful, well-engineered cans with sound signatures tailored to your favorite sorts of music, but with improvements more on the order of "that midrange has more detail, nice" than "holy crap!"

Woo-selling companies target anyone still convinced at that point there is some next, higher level of audio bliss they can reach with the perfect combination of kit. Cables made from exotic materials, fancy power conditioners, rocks and stickers have glowing "reviews" and marketing pseudoscience about sonic transparency and skin effect. They do nothing and give the audiophile industry a bad name, but they'll be with us until suckers stop paying for them.


They won't be snagging professionals with this, and in this specific case I think that's fine.

I expect most of the people who'd fall for it are young or immature people, trying to get back at someone who beat them in a game or argued with them on social media. For whatever reason many of these folks see DDoSing, sending death threats and even swatting as "pranks" instead of crimes. A friendly reminder that doing this stuff can get them in serious trouble could nip that behavior in the bud before something tragic happens.


But the same systemic weakness that enables swatting can be exploited here: the government assumes good faith. Instead of sending a SWAT team to your house, I can sign up for a DDoS in your name.


I'd like to think that the investigation would be more sophisticated than just seeing what name is on the DDoS request.


You have far more faith in police than I do


Apart from people (hopefully) not using their real names to make the DDoS request, I would guess the investigation is done by a tech department rather than non-specialist officers.


> You have far more faith in police than I do

We have nontechnical people making legislation about technical things; why do you think police are any different?


I don't think nontechnical people could pursue the investigation at all. I'm technical, but not in that specialism, so I'd have to do some studying just to get started.

Do you really imagine a patrol cop gets given a computer and told to 'find the suspect'? The legislators have someone else write what they put their name to, so that's not comparable.


>patrol cop gets given a computer and told to 'find the suspect'?

No, just one that got promoted to detective


And then you'll get a warning from the police? While not ideal, that's hardly the same as a potentially fatal swatting


Depends entirely on how the police react, but it could just as well lead to them confiscating all of your computers and putting you in jail.

Of course, swatting is worse; an on-demand terrorist attack by phone call is hard to top. But this one can be pretty bad too. Or maybe not, since it's the police response, not the starting evidence, that makes it bad.


It doesn't have to be fatal to be bad.


Assuming the legal system uses it as a teaching exercise. For some reason I feel like it's going to be used to throw the book at people who would be better served by guidance / opportunities instead.


From what I've heard on DarkNet Diaries, the UK courts seem quite good at picking up intelligent youngsters involved in hacking and giving them a chance to move into cybersecurity.


The UK has a few pretty good schemes (e.g. Cyber Prevent) that try to intervene and stop young people before they get landed with a criminal record for (lower level) cyber crime - at one point the average age at time of arrest was 17.


>before something tragic happens.

Gotta be USA.


Credit Suisse is facing liquidity issues and tarnished credibility after a rash of scandals and screwups, see https://www.reuters.com/business/finance/spies-lies-chairman... for a summary. From your link it looks like their finance reporting in recent years is now under question as well. It doesn't look related to what happened at SVB or Silvergate.


My first PC was a Micron. I don't remember the specs at this point but the machine was fast, every component in it was well-chosen and standard, and the case was roomy and easy enough to get into that I was still using it long after swapping out all of the original components.

The best thing about Micron, though, was the support. Micron had no Level 1 "techs" reading replies from a script on staff, no annoying phone trees to navigate, and no obnoxiously long hold times. You'd just call and within a few rings be speaking directly to a Level 2 or 3 tech who knew both hardware and Windows troubleshooting and treated you as a peer. Discovering how aggressively awful support from most tech companies is after working with Micron was disheartening.


My parents got a 486 from Gateway 2000, but the first PC I bought and owned personally was a Micron. Full tower case, Pentium II-233, 6.4GB hard drive, S3 ViRGE DX video, USB (that being kind of a new thing at the time), 24x CD-ROM, I think either 16MB or 32MB of RAM, 17-inch CRT monitor. I think it had a mix of ISA and PCI slots. I don't remember if AGP was a thing yet. Zip drive and 3.5" floppy. Actual serial and parallel ports. PS/2 ports for keyboard and mouse, I think. It probably had one of those MIDI/joystick ports, but maybe things had moved to USB by then. I don't remember if it came with a 56k modem, but if not I think I added that later. Also a 10BASE-T Ethernet card, probably an NE2000. It came with Windows 95, but I also dual-booted to Red Hat 5.0 and for a while also ran BeOS.

It was quite a machine at the time, and cost me about $3,000. I think the next time I upgraded it was to an AMD system with a clock speed around 1400 MHZ, but I also kept using the Micron case for quite a long time after I had replaced almost all of the original hardware.

I don't think I ever had to call Micron support.

What a weird time that was. Computers were really terrible, but the amount of rapid progress and competition between big companies and the rise of open source software and the Internet were all really exciting and inspiring. Like there was always something amazing right around the corner. These days I can't really immediately distinguish a computer manufactured yesterday from one that's 10 or 15 years old.


And there's no gotcha there; Google's always been open that retrieval fees are the tradeoff for Coldline being otherwise so cheap. Nearline is the better option for data you'll access more than once a quarter.


"so cheap" might be a bit overly relative in this usage.


I guess the only time you would use Coldline is if you rarely access the data, but when you do access it, a retrieval delay is unacceptable. If you access it frequently, use a tier with cheaper retrieval; if you can tolerate a delay, use Glacier or GCP's Archive tier.


> if you can tolerate a delay, use Glacier

S3 recently added a Coldline-like "Glacier Instant Retrieval" class, FYI. Their "Deep Archive" class (the cheapest) still does require restore operations that take hours to complete, though.

https://aws.amazon.com/s3/storage-classes/glacier/instant-re...

> or GCP's Archive tier

AFAIK, the GCS Archive tier has the same availability characteristics as Coldline and the same latency as all GCS classes (10s of milliseconds). It seems like the primary factor for how you'd choose a GCS storage class would be your cost projections based on how long you store objects and how frequently you access them.

https://cloud.google.com/storage/docs/storage-classes#archiv...
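That cost-projection tradeoff can be sketched with a toy calculator. To be clear, this is just an illustration: the per-GB numbers below are my own rough assumptions rather than quoted pricing, and `monthly_cost` is a hypothetical helper; real bills also include per-operation charges and minimum storage durations, which this ignores.

```python
# Back-of-the-envelope GCS storage class comparison. Prices are rough
# assumptions modeled on public per-GB list pricing, not exact quotes.
CLASSES = {
    # name: (storage $/GB-month, retrieval $/GB)
    "standard": (0.020, 0.00),
    "nearline": (0.010, 0.01),
    "coldline": (0.004, 0.02),
    "archive": (0.0012, 0.05),
}

def monthly_cost(cls, gb, read_fraction_per_month):
    """Monthly cost to store `gb` gigabytes and re-read that fraction of it."""
    storage, retrieval = CLASSES[cls]
    return gb * storage + gb * read_fraction_per_month * retrieval

# Example: 1 TB, read back in full once a year (1/12 of it per month on average).
for name in CLASSES:
    print(f"{name}: ${monthly_cost(name, 1024, 1 / 12):.2f}/month")
```

With these assumed numbers, Archive edges out Coldline even when you read everything back once a year; the retrieval fee only dominates once you read data back much more often than that.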


> AFAIK, the GCS Archive tier has the same availability characteristics as Coldline and the same latency as all GCS class (10s of milliseconds).

That's true as far as I know. And the combination of "the data is accessible instantly" with "there is no low priority retrieval, there is only the ultra expensive kind" implies that their architecture is extremely weird and/or some very manipulative pricing is happening.


Neither. It only means that hot and cold storage live right next to each other and pricing is used as a carrot/stick, as well as a signal, so that objects tend to stay in their class (mainly to prevent cold data from getting hot, of course) and a desirable balance is preserved. You could call it manipulative, if you want, but it's more like incentives.


Amazon can apparently get data cheaply out of cold storage with a day or so of warning.

Can Google not do that? And if they did, they could still maintain the same kind of tiering.

If they can't do it, that's a very strange system.

If they refuse to do it, that's manipulative.

For archival storage Google's charging three and a half years of storage costs to retrieve data. Even in the most flattering scenario where roughly all of the cost is in doing the I/O, and disk space is "free", that implies that retrieval is at least 75% profit. If I/O is half the cost of storage and disk space is half the cost of storage, then retrieval approaches 90% profit.
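For what it's worth, the "three and a half years" ratio is easy to reproduce. The numbers below are approximate Archive-class list prices I'm assuming for the sake of the arithmetic, not figures from the comment:

```python
# Reproducing the "three and a half years of storage" ratio, using
# assumed, approximate GCS Archive list prices.
storage_per_gb_month = 0.0012  # $/GB-month to keep data in Archive
retrieval_per_gb = 0.05        # $/GB to read it back out
months = retrieval_per_gb / storage_per_gb_month
print(f"retrieval costs the same as ~{months / 12:.1f} years of storage")
```

Under those assumptions, one retrieval costs roughly 42 months of storage, which is where the three-and-a-half-years figure comes from.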


If you're doing disaster recovery, do you really want to wait a day for your data? I don't understand why you want Google to slow things down. Why can't Amazon get cold data faster? It's fairly simple: all Google storage is online. There is public material on how it all works. Maybe you can ask them to add an artificial delay. There is offline storage, yes, but that's tape, which has bigger issues...

Your pricing model is flawed, because you're not taking into account other factors such as rebalancing. Years ago, I would have loved for retrieval to be that cheap to perform behind the scenes.


> If you're doing disaster recovery, do you really want to wait a day for your data?

I'd like to have the option.

The point isn't that I want to wait, it's that I want it to be super low priority to make it cheaper.

For a super low priority job, why should reading be multiple times as expensive as writing?

> Your pricing model is flawed, because you're not taking into account other factors such as rebalancing. Years ago, I would have loved for retrieval to be that cheap to perform behind the scenes.

I don't understand. If rebalancing happens behind the scenes, then that has to get paid for as part of storage.

Which means the cost of 1 single I/O is a smaller fraction of the storage cost.

Which means the profit margin for retrieval is significantly higher than my estimate.

My cost estimate uses the most flattering possible case for the retrieval pricing. Any storage costs I didn't account for, deliberately or accidentally, make my argument stronger.


> For a super low priority job, why should reading be multiple times as expensive as writing?

Because the particular aggregate mix of storage helps drive down overall costs. Changing that mix affects the entire stack, as well as capacity planning. Ok, make cold storage very cheap to retrieve. What happens now? Everybody will buy that and abuse it for more demanding applications, with quality of service for latency sensitive traffic going down the toilet. So you end up throwing more resources at the problem and/or charging more across the board. Pricing is one of the few factors that users really pay attention to in the real world, not best practices. Unfortunately.

Furthermore, to implement what you want, you can keep a request open for hours, which causes issues all over the stack (where do you keep that state? How does that interact with load balancers?) or you mark the cold object and return temporary failures until it's finally retrievable. That's extra state and extra complexity that doesn't exist right now. Those extra costs would have to be recouped somewhere.

> I don't understand. If rebalancing happens behind the scenes, then that has to get paid for as part of storage.

Why? Rebalancing doesn't happen in a vacuum. It's linked to the traffic mix. You can't look at just the total bytes used in a cluster and figure how many HDs, SSDs, CPUs, RAM and NICs you need to serve that data while still meeting your SLOs. Unless it's a W/O cluster, you need more signals. Amount and behavior of cold vs hot storage are two of those.

Anyway, cold storage that warms up most likely requires extra rebalancing that wouldn't have happened otherwise. How would you price that? Who would you charge?

Again, your cost estimate for retrieval does not take into account how things actually work. Rebalancing is not purely a storage cost. Yes, your argument is strong, but only if you start from flawed assumptions.


> Because the particular aggregate mix of storage helps drive down overall costs. Changing that mix affects the entire stack, as well as capacity planning. Ok, make cold storage very cheap to retrieve. What happens now?

It doesn't have to be very cheap. Let's start with just trying to match the price of writes. That shouldn't really affect the total amount of I/O, and there's no reason reads should be harder on the system than writes.

> Furthermore, to implement what you want, you can keep a request open for hours, which causes issues all over the stack (where do you keep that state? How does that interact with load balancers?) or you mark the cold object and return temporary failures until it's finally retrievable. That's extra state and extra complexity that doesn't exist right now. Those extra costs would have to be recouped somewhere.

I suppose. But the cost of keeping a request open should be much much less than the current cost of having everything fully accessible in milliseconds.

> Anyway, cold storage that warms up most likely requires extra rebalancing that wouldn't have happened otherwise. How would you price that? Who would you charge?

Reads that cost a significant amount of dollars each don't require rebalancing. I'm not suggesting they go so cheap that rebalancing is required. You'd still do only one read to a completely separate hot storage system, like it currently works.


Sir, we live in a capitalist economy. Prices don't mirror costs.


My complaint is that the competition is awful here.

Capitalism is supposed to pit companies against each other and drive profit margins below 50%. And it's failing to do that here.

And what's more, it's a very cruel pricing system, because it lures you in with low numbers, then overcharges to get your data back when you need it. Being antagonistic to your customers has negative effects in the long run, and I think it's worth pointing out situations like that when people are shopping around.


I like fast sedans and find the Giulia to be one of the few really attractive cars sold these days, so when the high-end Quadrifoglio version came to the States I put my misgivings about Alfas aside long enough to strongly consider one as my last ICE car. I came to my senses after at least two of the prominent reviews at release described going through multiple vehicles as their initial review cars died. Car and Driver's 40,000-Mile Wrap-Up of their experience with the car (https://www.caranddriver.com/reviews/a23145269/alfa-romeo-gi...) said the QF "broke their heart" and lists a litany of issues, with the car out of commission for 80 days out of the 14 months they spent with it. They did say the car's an absolute blast to drive when it actually works, at least.


There aren't many reasonable use cases for sending a fax these days but this is definitely one.


