
It looks like very few get it right. A good system would have only a few minutes of blip when one cloud provider goes down, which would be a massive win compared to outages like this.


They all make it pretty hard, and a lot of resume-driven devs have a hard time resisting the temptation of the AWS alphabet soup of services.

Sure you can abstract everything away, but you can also just not use vendor-flavored services. The more bespoke stuff you use, the more lock-in risk you take on.

But if you are in a "cloud-forward", AWS-mandated org, a holder of AWS certifications, an alphabet-soup expert... that's not a problem you are trying to solve. Arguably the lock-in becomes a feature.


Lock-in is another way to say "bespoke product offering". Sometimes solving the problem yourself, instead of using the cloud provider's service, is not worth it. This locks you in for the same reason a specific restaurant locks you in: it's their recipe.


Putting aside outages..

I'd counter that past a certain scale, certainly the scale of a firm that used to & could run its own datacenter.. it's probably your responsibility to not use those services.

Sure it's easier, but if you decide feature X requires AWS service Y that has no GCP/Azure/ORCL equivalent.. it seems unwise.

Just from a business perspective, you are making yourself hostage to a vendor on pricing.

If you're some startup trying to find traction, or a small shop with an IT department of 5.. then by all means, use whatever cloud and get locked in for now.

But if you are a big bank, car maker, whatever.. it seems grossly irresponsible.

On the East Coast we are already approaching an entire business day of downtime today. We're gonna need a decade without an outage to get all those 9s back. And not to catastrophize, but.. what if AWS had an outage like this that lasted.. 3 days? A week?
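
Back-of-envelope on the 9s claim (assuming a four-nines target; the numbers are illustrative, not from any SLA):

    # How long must you run clean to claw back 99.99% availability
    # after roughly a full business day (~8 hours) of downtime?
    downtime_hours = 8
    allowed_fraction = 1 - 0.9999          # 0.01% downtime budget
    required_hours = downtime_hours / allowed_fraction
    print(required_hours / (24 * 365))     # ~9.1 years of clean uptime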

The fact that the industry collectively shrugs its shoulders and allows increasing amounts of our tech stacks to be held hostage to a single vendor is crazy.


> I'd counter that past a certain scale, certainly the scale of a firm that used to & could run its own datacenter.. it's probably your responsibility to not use those services.

It's actually probably not your responsibility; it's the responsibility of some leader five levels up who has his head in the clouds (literally).

It's a hard problem to connect practical experience and perspectives with high-level decision-making past a certain scale.


This is the correct answer


> The fact that the industry collectively shrugs its shoulders and allows increasing amounts of our tech stacks to be held hostage to a single vendor is crazy.

Well, nobody is going to get blamed for this one except people at Amazon. Socially, this is treated as a tornado. You would have to be certain you can beat AWS on reliability for doing anything about this to be good for your career.


In 20+ years in the industry, all my biggest outages have been on AWS... and they seem to be happening annually.

In most of my on-prem days, you had more frequent but smaller failures of a database, caching service, task runner, storage, message bus, DNS, whatever.. but not all at once. Depending on how entrenched your organization is, some of these AWS outages are like having a full datacenter power down.

Might as well just log off for the day and hope for better in the morning. That assumes you could log in, which some of my ex-US colleagues could not for half the day, despite our desktops being on-prem. Someone forgot about the AWS 2FA dependency..


In general, the problem with abstracting infrastructure is that you have to code to the lowest common denominator. Sometimes it's worth it. For the companies I work for, it really isn't.
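
A rough sketch of what "lowest common denominator" looks like in practice: a portable blob-store interface can only expose what every provider supports, so provider-specific features (object lock, lifecycle rules, event notifications) have to live outside it. All names here are illustrative, not from any real library.

    # Illustrative sketch: a provider-agnostic blob store limited to the
    # operations S3 / GCS / Azure Blob all have in common.
    from abc import ABC, abstractmethod

    class BlobStore(ABC):
        @abstractmethod
        def put(self, key: str, data: bytes) -> None: ...

        @abstractmethod
        def get(self, key: str) -> bytes: ...

        @abstractmethod
        def delete(self, key: str) -> None: ...

    class InMemoryStore(BlobStore):
        """Stand-in backend; a real S3 or GCS adapter would go here."""
        def __init__(self):
            self._blobs = {}
        def put(self, key, data):
            self._blobs[key] = data
        def get(self, key):
            return self._blobs[key]
        def delete(self, key):
            self._blobs.pop(key, None)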


I think the problems are:

1) If you try to optimize in the beginning, you tend to fall into the over-optimization/engineering camp;

2) If you just let things go organically, you tend to fall into the big messy camp;

So the ideal way is to re-examine from time to time and re-architect once the need arises. But few companies can afford that, unfortunately.


They have decades of data to map queries to pages. They can also crawl the pages in advance.


In a sense this is similar to what Amazon has been doing in a few countries: find top-selling products, get them cheaper from somewhere, rebrand them, rank them higher, and sell them. They don't need to invest in market research like their competitors; they get all the data from Amazon.com.

At big-tech scale, this is clearly anti-competitive and, IMHO, piracy.


Except in this case they still rely on people to create content to train their AI on.


But there is a big difference: Llama is still way behind ChatGPT, and one of the key reasons to open-source it could have been to use the open-source community to catch up with ChatGPT. DeepSeek, on the contrary, is already on par with ChatGPT.


Llama is worse than GPT-4 because they are releasing models 1/50th to 1/5th the size.

R1 is a 671B-parameter monster no one can run locally.

This is like complaining an electric bike only goes up to 80km/h


The R1 distills are still very, very good. I've used Llama 405B and I would say dsr1-32b is about the same quality, or maybe a bit worse (subjectively within error), and the 70B distill is better.


What hardware do you need to be able to run them?


The distills run on the same hardware as the Llama models they are based on anyway.

The full version... If you have to ask, you can't afford it.
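
For a very rough sense of scale (back-of-envelope only, assuming 4-bit quantization and ignoring KV-cache and activation overhead):

    # Rough VRAM estimate: parameters * bytes per weight, 4-bit assumed.
    def approx_gb(params_billion, bits_per_weight=4):
        return params_billion * (bits_per_weight / 8)

    for name, size in [("32B distill", 32), ("70B distill", 70), ("R1 671B", 671)]:
        print(f"{name}: ~{approx_gb(size):.0f} GB just for weights")
    # ~16 GB, ~35 GB, ~336 GB -> a single 24 GB GPU, two 24 GB GPUs
    # (or one 48 GB card), and "if you have to ask" territory.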


Very interesting article. I don't get, though, why for the hotspot partitions they didn't use a cache like Redis.
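
What I have in mind is plain cache-aside in front of the hot partition. A minimal sketch with redis-py; the key prefix, TTL, and fetch_from_partition() are made-up placeholders, not anything from the article:

    # Cache-aside for hot keys: check Redis first, fall back to the
    # partitioned store, then cache with a short TTL to bound staleness.
    import redis

    r = redis.Redis(host="localhost", port=6379)

    def fetch_from_partition(record_id: str) -> bytes:
        # Placeholder for the real read against the hot partition.
        return b"..."

    def get_record(record_id: str) -> bytes:
        cache_key = f"hot:{record_id}"
        cached = r.get(cache_key)
        if cached is not None:
            return cached
        value = fetch_from_partition(record_id)
        r.setex(cache_key, 30, value)   # 30 s TTL, tuned per workload
        return value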

