dmattia's comments (Hacker News)

Disclaimer: I used to work on Google Search Ads quality models

> Google obviously tracks everything you do for ads, recommendations, AI, you name it. They don’t even hide it, it’s a core part of their business model.

This wasn't the experience I saw. Google is intentional about which data from which products go into their ads models (which are separate from their other user modeling), and you can see things like which data of yours is used in ads personalization on https://myadcenter.google.com/personalizationoff or in the "Why this ad" option on ads.

> and it’s very much not necessary or even a sound business idea for them to do something else

I agree that Apple plays up privacy in their advertising and product positioning. I think assuming all future products will be privacy-respecting because of this is over-trusting. There is _a lot_ of money in advertising / personal data.


Say I have a pico cluster with a few service nodes and a few upstream clients register themselves, and then I deploy a new version of the service nodes where all existing service nodes are taken down and replaced.

Can the client still talk to the service nodes? Is this over the same tunnel, or does the agent need to create a new tunnel? What happens to requests that are sent from a proxy-client to the service nodes during this transition?

Or at a much higher level: Can I deploy new service nodes without downtime?


When Pico server nodes are replaced, the upstreams will automatically reconnect to a new node, and that node will then propagate the new routing information to the other nodes in the cluster.

So if you have a single upstream for an endpoint, there may be a second during reconnection where it isn't connected, but it will recover quickly (I'm planning to add retries in the future to handle this more gracefully).

Similarly, if a server node fails, the upstream can reconnect.
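The retries mentioned above could look something like this on the proxy side; a minimal sketch (not Pico's actual code — the function name, attempt count, and backoff values are all illustrative) of retrying a request with a short backoff while the upstream reconnects:

```typescript
// Hypothetical sketch of proxy-side retries during an upstream reconnect.
// The backoff scheme here is illustrative, not Pico's actual behavior.
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts: number,
  delayMs: number,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < attempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // Back off briefly to give the upstream time to reconnect.
      await new Promise((resolve) =>
        setTimeout(resolve, delayMs * (attempt + 1))
      );
    }
  }
  throw lastError;
}
```

With something like this wrapping the proxy-to-upstream call, a one-second reconnect window would surface as a slightly slower request rather than an error.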


Looks like there is from the instructions in the getting-started guide: https://github.com/andydunstall/pico/blob/main/docs/getting-...


The instructions there say that it will create a cluster with three nodes, so while it is using docker compose I am guessing it is still using kubernetes



The compose file is in the demo folder, you don't need to guess what it's going to do. https://github.com/andydunstall/pico/blob/main/docs/demo/doc... It looks like it's spinning up three replicas of Pico.
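For reference, running three replicas of one service on a single host can be done with compose's `deploy.replicas` field; a minimal sketch (the service name and image are illustrative, not necessarily what the demo file uses):

```yaml
services:
  pico:
    image: pico:latest
    deploy:
      replicas: 3   # three containers of the same service on this one host
```

Alternatively, `docker compose up --scale pico=3` achieves the same thing from the command line — no Kubernetes or swarm involved.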


The three nodes are just three containers on the host where you're running docker compose. Docker compose only works with a single host except when deploying to docker swarm clusters. I'm not familiar with swarm though so I couldn't tell you what versions of compose support it and how good that support actually is.


Thanks for confirming! Could you maybe update https://supabase.com/docs/guides/storage/s3/compatibility to include s3-esque presigning as a feature to track?


I had a dumbphone for two years, it was fine.

For the past two years though, I've been using just an Apple Watch, which I was able to connect my old phone number to. It has maps, texting, calling (works best via bluetooth), weather, heart rate monitoring, alarms, email, sports scores, and some music apps. When attached to my wife's phone plan, it costs me $5 per month for all service.

I think the unfortunate reality of dumbphones is that of the folks searching for dumbphones, we all have fairly specific ideas on which features we want and which ones we don't, but there are only like 5 reasonable options available, and most don't hit the mark for many of us. If you want good maps, that rules out many. If you want a camera, that would rule out the watch like I use. If you want reasonable texting ergonomics that isn't speech-to-text, that rules out pretty much all of them.


I think a watch is indeed a better 'companion' for people who want to detox from screentime. I do find Apple is hesitant to make it act as a primary driver. I find myself needing my iPhone too much for my liking. Hopefully this will change in the future.

I want the connectivity, I just don't want to...
- carry 2 devices
- have a media consumption device with me all the time


I imagine the profit on an iPhone dwarfs that from a Watch, so they're leery to kill the golden goose.

And they've prioritized iPhone >> MacBook, so they're likely not interested in encouraging watch + laptop uses.


Can you tell me more about your watch use? I wanted to do something like this, but I got the impression that even if you get an Apple watch with a SIM card, you still would practically need an Apple phone paired with it.


Yeah for sure! So you sort of need a phone paired with it, but only in the sense that you need a family member (or someone on your phone plan) to have an iPhone; the watch owner themselves doesn't actually need any phone other than the watch.

It's called "Family Setup": https://support.apple.com/en-us/109036, and it lets me use the watch in what is sometimes called "Standalone mode" or "Companion mode" in different apps.

If you go down this route, know that many apps in the app store will not be downloadable, as they don't support "Companion mode", but I can still get everything I personally needed just fine.

As far as the SIM goes, I think I'm using an eSIM? I'm not positive


+1, my device is unfortunately too old to upgrade to iOS 17 and I wouldn't imagine this app would use too many new features


My understanding is that your docker image must have the lambda runtime interface client installed on the image in order to work.

It's usually not a huge step to add the RIC, but it ties you in to AWS a bit more than Cloud Run does, which can run arbitrary docker images, if I understand correctly.


That's right - you have to package awslabs/aws-lambda-web-adapter into your docker image which proxies the API-GW/ALB requests through.
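Packaging the adapter amounts to copying its binary into the image as a Lambda extension; a minimal sketch based on the awslabs/aws-lambda-web-adapter README (the base image, adapter version tag, port, and entrypoint here are assumptions for illustration):

```dockerfile
FROM node:20-slim
# Copy the adapter binary in as a Lambda extension; it translates
# API-GW/ALB events into plain HTTP requests against your server.
COPY --from=public.ecr.aws/awsguru/aws-lambda-adapter:0.8.4 /lambda-adapter /opt/extensions/lambda-adapter
ENV PORT=8080
WORKDIR /app
COPY . .
CMD ["node", "server.js"]
```

The nice property is that the same image still runs as an ordinary container outside Lambda; the extension is simply inert there.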


I'm using Pulumi in production pretty heavily for a bunch of different app types (ECS, EKS, CloudFront, CloudFlare, Vault, Datadog monitors, Lambdas of all types, EC2s with ASGs, etc.), it's reasonably mature enough.

As mentioned in the other comment, the most commonly used providers for terraform are "bridged" to pulumi, so the maturity is nearly identical to Terraform. I don't really use Pulumi's pre-built modules (Crosswalk), but I don't find I've ever missed them.

I really like both Pulumi and Terraform (which I also used in production across hundreds of modules for a few years). That doesn't seem to be a popular opinion on HN, but you absolutely can run either tool in production just fine.

My slight preference is for Pulumi because I get slightly more willing assistance from devs on our team to reach in and change something in infra-land if they need to while working on app code.

We do still use some Pulumi and some Terraform, and they play really nicely together: https://transcend.io/blog/use-terraform-pulumi-together-migr...


In general, one of the goals of microservices should be that if one of the five services goes down, the other four should be able to operate in some capacity still.

In practice, this can make the math quite a bit messier, but I don't think it necessarily has been worse overall from my perspective.

So instead of having your system be fully up 99% of the time (and fully down the other 1%) as a monolith, you'll have it fully up 95% of the time (using your numbers); but instead of the remaining 5% all being complete downtime, maybe 20% of that time one of your products will be running slowly, or 10% of that time some new feature you launched won't work for specific customers in some specific region, etc.
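To make the "messier math" concrete: if all five services were hard serial dependencies at 99% availability each, the composite availability would be 0.99^5 ≈ 95.1%, which is where a figure like 95% comes from. A quick sketch:

```typescript
// Composite availability of n hard serial dependencies,
// each independently available with probability p.
function serialAvailability(p: number, n: number): number {
  return Math.pow(p, n);
}

const fullyUp = serialAvailability(0.99, 5); // ≈ 0.951
```

The point of partial degradation is that the remaining ~4.9% isn't all complete downtime; only the requests that actually touch the failed service are affected.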

At my company it makes things like SLA/SLO guarantees for "our services" pretty complicated, in that it's hard to define what uptime truly means. But overall I think the five-microservice approach, when done well, should have less than 1% of complete downtime, at the cost of more partial downtime.


> In general, one of the goals of microservices should be that if one of the five services goes down, the other four should be able to operate in some capacity still.

This is an excellent point, but what brought this to my mind was that the microservices in the Netflix article don't seem to have this property. It looks to me like if any of the VIS, CAS, LGS, or VES go down, then the whole service is effectively down.

Indeed, in my own career what I've seen is that if one microservice goes down the user won't be seeing 500 errors or friends, but the service will be completely useless to the user. You've just gone from a hard error to a spinning load icon, which might in fact be an even worse user experience.

It could be argued that this is just "you're doing microservices wrong", but then we start getting into no true Scotsman territory.


> Indeed, in my own career what I've seen is that if one microservice goes down the user won't be seeing 500 errors or friends

Exactly. What happens is that the first few hours of the triage call go to people claiming "well, my service is up and the issue is somewhere else". So finding which service failed itself takes crucial hours, instead of that time going to fixing the failing service.

But in a world where Micro Service Incident Commanders can pinpoint a failing service among 1000 microservices within seconds on their vast 80-inch monitoring consoles, and direct resolution admirals to fix it in the next 15 minutes, it might all just work fine.


The problem comes when it's a distributed system, and it's the interaction between multiple systems that's causing the problem, not a specific microservice being down. Something got upgraded and the message size changed in an unexpected and incompatible way that worked fine in testing.


> It looks to me if any of the VIS, CAS, LGS, or VES go down,

But the whole point is that by splitting it into microservices you can efficiently and optimally scale each component individually, so it's extremely rare that, for example, VIS would entirely go down. And because Netflix has tools like Hystrix, if one instance is unavailable it will seamlessly route to another one.
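For reference, the Hystrix-style behavior is a circuit breaker: after enough consecutive failures, stop calling the bad instance and fail fast (or route elsewhere) instead of hanging on it. A minimal sketch — not Hystrix's actual implementation, and the threshold scheme is illustrative:

```typescript
// Tiny circuit breaker: opens after `threshold` consecutive failures,
// then rejects calls immediately instead of waiting on a dead instance.
class CircuitBreaker {
  private failures = 0;
  constructor(private threshold: number) {}

  get open(): boolean {
    return this.failures >= this.threshold;
  }

  async call<T>(fn: () => Promise<T>): Promise<T> {
    if (this.open) throw new Error("circuit open: failing fast");
    try {
      const result = await fn();
      this.failures = 0; // a success resets the counter
      return result;
    } catch (err) {
      this.failures++;
      throw err;
    }
  }
}
```

A real implementation (Hystrix, resilience4j) also half-opens the circuit after a cooldown to probe for recovery; that part is omitted here.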

And even if you push bad code, there are techniques like blue/green and canary releases which can be used.


Some serverless use cases work like you say, but Docker-based options such as AWS ECS, Docker-based Lambda functions, or Kubernetes would all commonly make use of compiled options.


No, you wouldn't use a compiled option in k8s or any other container-based deployment.

You would use deno as the base layer, your dependencies in another layer, and your code in the last layer.
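A sketch of that layering for a Deno app (the filenames, base tag, and permission flags are illustrative); each COPY/RUN pair below becomes its own cached image layer:

```dockerfile
# Base layer: the deno runtime
FROM denoland/deno:alpine
WORKDIR /app

# Dependency layer: rebuilt only when deps.ts changes
COPY deps.ts .
RUN deno cache deps.ts

# Code layer: edits here don't invalidate the dependency cache above
COPY . .
CMD ["deno", "run", "--allow-net", "main.ts"]
```

The payoff is build speed and registry efficiency: day-to-day code changes only push the small final layer.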

