
Was the key unlock here the ability to append data to an object?

(https://aws.amazon.com/about-aws/whats-new/2024/11/amazon-s3...)


There are a few things unlocked by Ursa:

1. It is leaderless by design. There is no single leader broker you need to route traffic through, so you can eliminate the majority of inter-zone traffic.

2. It is lakehouse-native by design. It doesn't just use object storage as the storage layer; it also uses open table formats for storing data, so streaming data can be made available in open table formats (Iceberg or Delta) after ingestion. One example is the integration with S3 Tables: https://aws.amazon.com/blogs/storage/seamless-streaming-to-a... This simplifies the Kafka-to-Iceberg integration.


They were asking about changes that enabled Ursa itself.

Having built a prototype of a system like Ursa myself, this isn't something that you need to use at all, especially because it seems like this is only available in S3 Express One Zone.

Ursa is available across all major cloud providers (GCP, Azure, AWS). It also supports pluggable write-ahead-log storage. For latency-relaxed workloads, we use object storage to get the cost down, so it works with AWS S3, GCP GCS, and Azure Blob Storage. For latency-sensitive workloads, we use Apache BookKeeper, which is a low-latency replicated log storage. This allows us to support workloads ranging from milliseconds to sub-seconds. You can tune it based on latency and cost requirements.
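A rough sketch of what a pluggable write-ahead-log abstraction might look like (purely illustrative; the interface and names here are invented, not Ursa's actual API):

```python
from typing import Protocol


class WalBackend(Protocol):
    """Interface a stream engine could program against, so that object
    storage (cheap, higher latency) and BookKeeper-style replicated log
    storage (low latency) become interchangeable backends."""

    def append(self, entry: bytes) -> int: ...
    def read(self, offset: int) -> bytes: ...


class InMemoryWal:
    """Stand-in backend used here so the sketch is runnable; a real
    deployment would plug in S3/GCS/Azure Blob or Apache BookKeeper."""

    def __init__(self) -> None:
        self._entries: list[bytes] = []

    def append(self, entry: bytes) -> int:
        self._entries.append(entry)
        return len(self._entries) - 1  # offset of the appended entry

    def read(self, offset: int) -> bytes:
        return self._entries[offset]


wal: WalBackend = InMemoryWal()
first = wal.append(b"event-1")
second = wal.append(b"event-2")
```

The point of the indirection is that the latency/cost trade-off becomes a deployment-time choice of backend rather than a change to the engine.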

No, it was S3 becoming strongly consistent in 2020: https://www.infoq.com/news/2020/12/aws-s3-strong-consistency...

That’s probably not as useful as you think. Unless things have changed more recently, you need to set the offset from which to append, which makes it near useless for most use cases where appending would actually be useful.
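A minimal local model of those semantics (a toy sketch only, not the actual S3 API): each append must name the exact current end of the object, so a writer with a stale offset is rejected and has to re-read the size and retry.

```python
class AppendOnlyObject:
    """Toy model of an append call that is conditioned on the caller
    supplying the expected write offset."""

    def __init__(self) -> None:
        self.data = b""

    def append(self, payload: bytes, write_offset: int) -> int:
        # Reject the write unless the caller's offset matches the current
        # object size -- i.e. some other writer got there first.
        if write_offset != len(self.data):
            raise ValueError(f"offset mismatch: expected {len(self.data)}")
        self.data += payload
        return len(self.data)  # the next valid write offset


obj = AppendOnlyObject()
off = obj.append(b"record-1,", 0)
off = obj.append(b"record-2,", off)

# A concurrent writer holding a stale offset must catch this and retry:
try:
    obj.append(b"record-3,", 0)
except ValueError:
    pass
```

This is why plain offset-conditioned appends are awkward for multi-writer logs: every producer has to serialize on knowing the current object size.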

I don't understand this product - I feel like tools like v0 can one-shot an analytics dashboard these days. I do think something like https://upsolve.ai/ provides real value, though.


Anecdotally - working with Azure has been hell on earth for me. Insanely unintuitive and buggy interface. Many cryptic errors preventing me from doing anything.


Microsoft doesn't really respect their users because most of the users don't decide for themselves to use Azure. Someone else makes the decision for them. And it's probably like that for many of their other products.


How does this not apply to - insert any product - in an enterprise space? With rare exceptions, users don't decide which software they use.


In the case of Azure, the users are the engineers tasked with implementing the infra.

I'm not sure I've ever heard of a shop adopting Azure on pure engineering merit, but my anecdata are hardly exhaustive. It tends to be forced for weird business reasons (retailers mistrusting Amazon, data residency requirements, a sweetheart credit deal, a CIO convinced by an Azure rep over golf).


You're right. It's the embodiment of enterprise software sales. But somehow AWS and GCP do it a bit better.


AWS and GCP started by engineers building products for engineers and later post success moved into enterprise sales (which AWS is doing well with, GCP not so much).

Azure came late and decided by decree that they needed a Cloud thing and so various business units came together and offered up a "strategy" for how they could re-brand and re-market what they had into a "unified offering".

And so you get things like Azure blob storage with fixed limits on performance per bucket. There's nothing cloud about it. Not so much leaky abstractions as a bucket of water labelled "cloud".


> AWS and GCP started by engineers building products for engineers and later post success moved into enterprise sales (which AWS is doing well with, GCP not so much).

I think that product managers are AWS' and GCP's unsung heroes. One of the best things about AWS is how everything is designed to integrate exceptionally well with everything else in the AWS ecosystem, and all services are designed to be simple, kept simple, and kept backwards compatible even when subjected to major upgrades, which are always seamless.

In contrast, can anyone explain why Azure has Table Storage but also Cosmos DB, and why Cosmos DB is actually half a dozen storage services? Why isn't Table Storage also Cosmos DB, then? Table Storage shares SDKs with Cosmos DB, too.

The same applies to messaging. You have Storage Queues, Service Bus queues, Event Hubs, and Event Grid. Even when you ask Azure experts what the difference is between, say, Storage Queues and Service Bus queues, the answer is never clear, simple, straightforward, or understandable.

It's a mess, and those who have to deal with it are left to navigate this mess.


That probably explains why all enterprise software sucks.


Yes, it sucks, and the documentation sucks even more. I think Azure and whatever you call the Google Cloud configuration site are both so complicated because they're built for giant corporations with thousands of employees and many, many roles in the organization with different privileges. However, if you're just a single developer setting up something simple, it's hellish.

It would be nice if they provided a simple configuration option for simple setups.


Quite honestly a single person working at a micro scale is not the target market for the hyperscalers. You're better served not going for managed services (and buying unmanaged services on the big clouds also doesn't make sense without needing the entire ecosystem around it).


Is AWS any better? (Genuine question)


Not from my experience. I've worked with all three of them. If one can stick with the web UI to provision permissions and the permissions required are simple/straightforward, Google Cloud (again, this is my personal opinion, so please take it with a grain of salt) is the most usable among the three.

BUT all three of them (AWS, Azure and GCP) have pros and cons, so you just have to spend a good amount of time learning their quirks.


AWS IAM is very very well designed. They clearly have some sort of internal security & architecture review process that works.


AWS IAM isn't well designed: policy attachments all over the place make it near impossible to figure out what set of permissions you might actually have,

and too much string to hang yourself with


To split hairs: its IAM can be well designed while the management tooling/observability is lacking.

In my mind, "not well designed" would mean "I cannot represent my security need in this policy language," versus the more common opinion, where "AWS IAM is a black box" means "I cannot understand why I am getting a 403."

GCP's IAM is, on this premise, not well designed, because I cannot express ReadOnlyAccess in their policy language without a constantly updating policy job: they do not have wildcards in the Action as AWS does. Contrast `Action: ["s3:Get*"]`, which would cheerfully include access to a future GetBucketV7 verb, with GCP's `storage.buckets.get`, which must always be specified in full; when storage2.buckets.get comes out, it's my job to find all references and patch them.
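To make the contrast concrete, a sketch of the AWS side (standard IAM policy JSON; the `Resource` scope here is a placeholder you would narrow in practice):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ReadOnlyViaWildcards",
      "Effect": "Allow",
      "Action": ["s3:Get*", "s3:List*"],
      "Resource": "*"
    }
  ]
}
```

The `s3:Get*` wildcard automatically covers any future Get verb; GCP's role bindings have no equivalent, so each new permission has to be enumerated.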


There is a similar issue with AWS. AWS provides a "ReadOnlyAccess" managed policy that has additional privileges you probably don't want folks to have (e.g. it can read S3 bucket content, not just see bucket names/key names). They recognized this and created a more limited "ViewOnlyAccess" that doesn't have access to content.

There's another common fix, which is to apply a permission boundary to IAM roles. This allows the use of generic policies like "ReadOnlyAccess" but can then be further downscoped to resources by tag (or other similar ABAC schemes)
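A sketch of what such a permissions boundary might look like using a tag-based (ABAC) condition; the tag key/value are placeholders, `aws:ResourceTag` support varies by service, and the role's effective permissions become the intersection of ReadOnlyAccess with this boundary:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "BoundaryTeamTaggedResourcesOnly",
      "Effect": "Allow",
      "Action": "*",
      "Resource": "*",
      "Condition": {
        "StringEquals": { "aws:ResourceTag/team": "analytics" }
      }
    }
  ]
}
```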


You should not be using any of their managed policies; you should be creating your own. Using their managed policies reflects a fundamental misunderstanding of how to use IAM.


Downvoting without discussion? That’s not critique, that’s cowardice. Tell me what is wrong about this factual statement.


(I used to work for AWS and am long Amazon stock. In no way do I speak for Amazon)

With Amazon, you are genuinely the customer. AWS may do many things in a bizarre or byzantine way, but it is constantly trying to be there for the developer in ways that many competitors in my opinion are not.


Being there for the customer on AWS means adding another half-baked config option that all future users now have to wonder whether they'll need.


> Is AWS any better? (Genuine question)

It is, without any question. Even if you work at a Microsoft shop, the benefits you get from vertical integration aren't that clear. Azure requires a far greater cognitive load to handle and, to make matters worse, ultimately experiences far more outages.


Coming from a Windows enterprise background, the UI for the most part makes sense and isn't something I find difficult to navigate (the original UI was awful). I know your sentiment is not uncommon, but I'm unable to share it.

I will agree, and this is a general Microsoft problem spanning back to the 90s: some error messages aren't useful whatsoever. Others are clear and concise. I figure this is due to the different PGs following their own sets of rules.


> Anecdotally - working with Azure has been hell on earth for me. Insanely unintuitive and buggy interface. Many cryptic errors preventing me from doing anything.

What pisses me off the most about Azure is how they designed it around the 90's view of what a cloud provider is. With Azure you don't just provision a VM or, God forbid, a web service. No no no. You need to provision an App Service plan first, where you budget what computational resources you allocate to it, and then assign web services and even gasp function-as-a-service apps to it. And even with FaaS stuff you don't just provision a handler. No, that would make too much sense. First you need to provision a function app running on your service plan, and then you provision whatever Azure Functions you need as part of the function app. How much accidental complexity is that? Can't I just deploy an app or, God forbid, a function?

The same thing applies to storage, but it's even worse. You need to provision a storage account to be able to provision one or more blob storage containers, Azure tables, or even simple queues. But wait: you need a storage account to store data in one NoSQL service, but if you opt for the other NoSQL service then that's an entirely different thing; for that you simply go ahead and create an account. You can use the same SDK for both? That's nice. Wait, why do they have two NoSQL services?
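As a rough sketch of that layering with the az CLI (resource names are placeholders, and flags may differ across CLI versions), even a single trivial function drags in a resource group, a storage account, and a hosting plan:

```shell
# Everything below must exist before a single function can run.
az group create --name demo-rg --location westeurope

# Functions require a storage account, even for a no-op handler.
az storage account create --name demostorage123 --resource-group demo-rg

# Even "serverless" consumption hosting creates a plan behind the scenes.
az functionapp create \
  --name demo-func-app \
  --resource-group demo-rg \
  --storage-account demostorage123 \
  --consumption-plan-location westeurope \
  --runtime node \
  --functions-version 4
```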

Azure, man. It exists to make every single alternative look good.


You can provision an Azure Web Service (PaaS web server running IIS or whatever the Linux version runs) which provisions the computational resource, Azure App Service, as part of the deployment steps.

You certainly can do it in the way you've specified but I only see that as useful if you're provisioning multiple Web Services to point to a single App Service.

But to answer your question, yes you can "just" provision a Function or Web Service, the wizard walks you through it. The App Service behind the scenes is just details and not something you must interact with post-Function creation.


> You can provision an Azure Web Service (...) which provisions the computational resource, Azure App Service, as part of the deployment steps.

That's not a solution, because deployment steps aren't the problem. The brain-dead aspect of Azure is how it forces users to handle the complexity of provisioning and budgeting the computational resources used to run a set of web apps. This doesn't even buy isolation. If I'm paying for cloud services, why on earth should I concern myself with how much RAM I need to share across N apps? It's absolutely brain dead.


You don't have to share anything across apps.

When I ran public sites, each received its own App Service, though they were provisioned via ARM template, because that's what you do (or Terraform, etc.) rather than getting into the UI or manual CLI in an enterprise. All of these complaints you're bringing forth are a non-issue in a practical deployment.


> You don't have to share anything across apps.

You don't. You also do not have to share the same service plan with any other app service or function app. That's beside the point. The point is that Azure requires anyone who wants to run a god damned web service or even a single event handler to provision a bunch of infrastructure resources just to be in a position to even consider deploying the thing.

I mean, you need to have both an Azure Service Plan and an Azure Storage Account to even consider deploying something serverless. Let that absurdity sink in.

In contrast, with AWS you just deploy the damned Lambda. That's it.

> (...) though they were provisioned via ARM template (...)

That is completely beside the point. It's irrelevant how any IaC offering turns all that provisioning into a one-click affair. What's relevant is the accidental complexity created by Azure for no reason at all. Go look at your ARM templates and count the number of resources you need to have there just to be able to run a single no-op event handler. It's stupid.


They have an Azure Functions serverless offering called the Consumption plan: https://learn.microsoft.com/en-gb/azure/azure-functions/func...

Quote: "Default hosting plan that provides true serverless hosting"

This one doesn’t require an app service plan.

Actually, I like that offering: depending on your requirements, you have several options for hosting your functions. That's pretty great.

If they offered just one kind of function app or one kind of storage solution, people would complain that their very important edge case is not supported. For those simple requirements you can use Cloudflare, Vercel, etc.


> This one doesn’t require an app service plan.

It requires a plan. You need to know what a plan is and which plan your Azure Functions are running on. Is it a Consumption plan? Or is it a Flex Consumption plan?

I mean, you can run multiple function apps on the same plan. As a developer, you are required to know which plan a particular function app is running on, and be aware of the implications.

You see how brain dead it is?


Yeah that, and in Azure OpenAI you have to create a separate deployment for each model you want to use.


You can just deploy a function.

You open vscode, install the Azure Functions extensions, walk through the wizard to pick your programming language and write the code. Then create and deploy it from vscode without ever leaving your IDE.


> You open vscode, install the Azure Functions extensions, walk through the wizard to pick your programming language and write the code. Then create and deploy it from vscode without ever leaving your IDE.

You are talking about something entirely different. Provisioning a function app is not the same as deploying the function app. How easy it is to upload a zip is immaterial to the discussion.


The vscode extension can both provision the resource and deploy it.

Edit: And yes, it will create every resource it needs if you want to, except for the subscription.


> The vscode extension can both provision the resource and deploy it.

On top of having to have an Azure subscription, you need to provision:

- a resource group

- a service plan

- a function app

You do not get to skip those with azure.

And by the way, the only time anyone uses VS Code, or even Visual Studio, to deploy an app is for personal projects or sandbox environments. Even so, you use the IDE to pick existing resources to deploy to.


You're really trying, aren't you :-)

All of this can easily be automated/cloned if it is something you do often. An RG is a collection of (hopefully) related resources. Plans and the App are provisioned together in the web UI wizard if that's the route you take.


> You're really trying, aren't you :-)

I'm trying to educate you on the topic, but you seem to offer resistance.

I mean, I haven't even mentioned the fact that in order to provision an Azure Function you are also forced to provision a storage account. As if the absurdity of the whole plan concept wasn't enough.

> All of this can easily be automated/cloned if it is something you do often.

Irrelevant. It's completely beside the point how you can automate deploying all those resources.

The whole point is that Azure follows an absurdly convoluted model that forces users to manage many layers of low-level infrastructure details even when using services that supposedly follow serverless computing models. I mean, why on earth would anyone have to provision a storage account to be able to deploy an Azure Function? Absurd.


I've provisioned many Azure Functions apps; there's nothing you can educate me on, here.

Why do you care about a storage account so much?

https://learn.microsoft.com/en-us/azure/azure-functions/func...

Since you didn't know about the [Flex] Consumption plan, there's your education.

And as to why they require a storage account:

https://learn.microsoft.com/en-us/azure/azure-functions/stor...

Voilà, education!


Which is exactly the opposite of how to effectively manage applications, code, and change at any scale beyond a home project.


One thing I noticed about all of the public clouds is an insistence by small-scale users to avoid the user-friendly interface and go straight to the high scale templating or provisioning APIs because of a perception that that’s “more proper”.

You won’t get any benefits until you have dozens of instances of the same(ish) thing, and maybe not even then!

Especially in the dev stage it is perfectly fine to use the wizards in VS or VS Code.

The newer tooling around Aspire.NET and “azd up” makes this into true IaC with little effort.

Don’t overthink things!

PS: As a case in point I saw an entire team get bogged down for months trying to provision something through raw API calls that had ready-to-run script snippets in the docs and a Portal wizard that would have taken that team all of five minutes to click through… If they’re very slow with a mouse.


That was not the point. Parent was complaining how complicated provisioning and deploying through the Azure portal was.

At scale you'd use IaC such as Bicep.


> That was not the point. Parent was complaining how complicated provisioning and deploying through the Azure portal was.

No, I wasn't. I was pointing out the fact that Azure follows an absurd, brain-dead model of what the cloud is, which needlessly and arbitrarily imposes layers of complexity without any reason.

Case in point: the concept of a service plan. It's straight up stupid to have a so-called cloud provider force customers to manage how many instances packing X RAM and Y vCPUs you need to have to run a function-as-a-service app, and then have to manage how that is shared with app services and other function apps.

Think about the backlash that AWS would experience if they somehow decided to force users to allocate EC2 instances to run lambda functions, and on top of that create another type of resource to group together lambdas to run on each EC2 instance.

To let the absurdity of that sink in, it's far easier, simpler, and much cheaper to just provision virtual private servers on a small cloud provider, stitch them together with a container orchestration service, and just deploy apps in there.


> Case in point: the concept of a service plan. It's straight up stupid to have a so-called cloud provider force customers to manage how many instances packing X RAM and Y vCPUs you need to have to run a function-as-a-service app, and then have to manage how that is shared with app services and other function apps.

You're not forced to, you can use a consumption plan.

https://azure.microsoft.com/en-us/pricing/details/functions/...


> You're not forced to, you can use a consumption plan.

Pray tell, what do you think is relevant in citing how many plans you can pick and choose from to just run a simple function? I mean, are you trying to argue that instead of one type of plan, you have to choose another type of plan?


The consumption plan is the default plan, so technically you don't have to choose anything, just go with the defaults.

But it disproves your point that you're "forced" to have an app service plan.

At this point you're simply arguing to argue after having been shown to be incorrect multiple times. Good luck.


> What pisses me off the most about Azure is how they designed it around the 90's view of what a cloud provider is. With Azure you don't just provision a VM or, God forbid, a web service. No no no. You need to provision an App Service plan first

What's funny is you're completely backwards here. Microsoft has a much more modern view of the cloud than AWS where everything is a thin veneer over EC2. Azure started as PaaS first and AWS started as IaaS first and that fingerprint is still all over their products. Building everything in a VM is the most expensive and naive way to adopt the cloud. It's the main reason why complexity and costs blow up. You're building in the cloud wrong and somehow seem to have missed that a consumption-based Function app is the default option and doesn't require an App Service Plan.


> What's funny is you're completely backwards here. Microsoft has a much more modern view of the cloud than AWS where everything is a thin veneer over EC2. Azure started as PaaS first and AWS started as IaaS first and that fingerprint is still all over their products.

Irrelevant. I don't care about either history or revisionism. I care about deploying apps/functions. In AWS each Lambda function is a standalone resource, whereas in Azure you need to 1) provision an App Service plan, 2) deploy a function app on said service plan, and 3) deploy the actual function. It's nuts.

Same goes for storage. While in AWS you just go ahead and create an S3 bucket, on Azure you have to provision a storage account and then provision a blob storage container.

> Building everything in a VM is the most expensive and naive way to adopt the cloud.

Azure is more expensive, harder to manage, even more impossible to estimate costs. Making claims about cost as if it makes Azure look good sounds completely crazy.


You lie about or cannot figure out basic things in Azure like creating a Function without an App Service Plan. I cannot take anything you say at this point seriously. You're just coming across jaded and spreading misinformation.


> You lie about or cannot figure out basic things in Azure like creating a Function without an App Service Plan.

I recommend you spend a few minutes going through an intro tutorial on Azure Functions. A key topic on Azure Functions 101 is the concept of a plan and how to pick a hosting option. You can start by reading this link:

https://learn.microsoft.com/en-us/azure/azure-functions/func...

Once you read this link, you'll be aware that even in their so-called serverless plan, which follows a "serverless billing model," there is still a plan tucked away that you can run multiple function apps on if you really want to.

Even if you pretend this doesn't exist, you need to ask yourself what a plan is, why it matters to you, and why you should care. Do you think that picking a plan does not factor in as a concern on Azure?


>Microsoft has a much more modern view of the cloud than AWS where everything is a thin veneer over EC2

You must be joking!

I was looking at various container registry products and looked up Azure's recently. It has the following limits (on the Premium SKU!): 50 Mbps upload, 100 Mbps download.

What sort of cloud product has limits like this? What a clown show.


The footnote points out that those are minimum limits.

https://learn.microsoft.com/en-us/azure/container-registry/c...


As noted, those are minimum limits. If there's a clown show, it's you who is hosting it.


> As noted, those are minimum limits. If there's a clown show, it's you who is hosting it.

Do they specify any SLA other than the minimums? If not, I'm sorry to tell you, but they only offer the minimum, and anything over that is a pleasant surprise.


Don't host your website on containers; that's what Workers are for.


Aren't you limited with Workers? Like, would you be able to deploy an OCaml or a Haskell application using it?


Nice - why Hono over Fastify?


Runtime agnostic - why get stuck with one JS runtime?


Interested in that too.


Thanks!

Fastify is great, I just like Hono more ¯\_(ツ)_/¯


Not to be pedantic, but the examples you're describing are _process_, not _culture_.

Examples of culture would be: do nurses feel comfortable speaking up and questioning the lead surgeon? Do surgeons feel they can say when they're overworked without fear of being perceived as a failure?


The price of an iPhone has remained constant over the past ~20 years. It's arguably already fairly cheap for a device that is so important for everything we do (especially considering that there is marginal benefit to upgrading every year these days, and an iPhone from 5 years ago still functions fine).


How close to Postgres does this need to be? Like could you host this on Aurora DSQL and have unlimited scalability?

Or how would you scale this to support thousands of events per second?


I'm not the OP, but DSQL has such limited Postgres compatibility that it is very unlikely to be compatible.


Supabase has the most schizoid brand. They should just embrace the fact that mobile devs don't want to do backend for their CRUD app, rather than trying to be the "everything backend".

Why are they releasing a web component library when React Native doesn't even have a decent UI library? Like, who are Supabase's customers again?


[supabase ceo]

Despite the title "UI library", this is more like a "component registry" for developers to bootstrap their applications, and it will work for everything from web and mobile to database scaffolding. Perhaps some poor naming/positioning on our part.

If you aren't familiar with shadcn, it works by dumping a bunch of files into your application, which you can modify at your leisure. This is different from the "Bootstrap" approach, where you could only do minimal tweaks to the theming.


The distribution person in me commends you for cashing in on the "component registry" hype in such a creative way. And that's not a backhanded use of the word creative: the moment it clicked was great, really just a smart application of something people are hyped about right now!

But the developer in me, who realizes how bad we are at design, knows shadcn/ui is terrible for 99% of people using it. They don't have a design system, don't know what that even means, and their sinful hands should not be tweaking any UI libraries, at most being limited to modifying a rich set of theme tokens that force consistency... not touching individual components or copying in random cruft with hard-coded gaps between elements.

And so for all that (and also tl;dr) I wish you'd have just shipped actual versioned, buttoned-up components that are well thought out and themeable through tokens, just like your existing auth UI.

You could have even made the default theme shadcn-like to satisfy all the people lying to themselves that one day they'll actually modify that ui folder.


> They don't have a design system, don't know what that even means, and their sinful hands should not be tweaking any UI libraries, at most being limited to modifying a rich set of theme tokens that force consistency

I feel like the majority of shadcn users don't tweak it whatsoever, though. In fact, I bet a large percentage has never even considered the ability to tweak it, just seeing it as a plug-and-play set of UI components. Think old Android and iOS apps, where almost everyone just used the default components.


I mentioned that:

> You could have even made the default theme shadcn-like to satisfy all the people lying to themselves that one day they'll actually modify that ui folder.

That's why shadcn is so terrible: you're resorting to diffs and a loose convention instead of a stable API and a package manager, for the promise that one day you can definitely probably modify it into something else... yet if you're the kind of person to start with shadcn/ui instead of Radix, you shouldn't be modding components in the first place.

Even if you get real designers later, they're not going to try and "evolve" shadcn into your brand, they're going to start from scratch and you're back at Radix again.


> satisfy all the people lying to themselves that one day they'll actually modify that ui folder.

What I'm saying - they're not lying to themselves that they'll do that. They've never even considered it as being something to potentially do! They consider it as a package to use as-is. "A re-design later on? Who knows, by that time we'll have people who know their UI stuff, they'll figure it out. Whether that will be based on shadcn? Who cares, not important." I bet the premise that all these people are using shadcn with the idea of some day modifying it just isn't the most common reality.


So you're just not familiar with shadcn then, that clears things up!

The main selling points and source of shadcn's meteoric rise...

- It's not a component library!

- It's easy to customize!

- You just Ctrl + C, Ctrl + V!

- You can just edit it in your project!

- No more fighting themes!

- It's a kickstarter for your design system! (contrary to my words that you're repeating, many people choose shadcn/ui thinking it is going to make a meaningful difference in starting their own design system, people who have no business starting design systems especially)

> I bet the premise that all these people are using shadcn with the idea of some day modifying it just isn't the most common reality.

The thing is literally distributed via copied files and updated via diffs instead of being a package. The entire cargo cult that led up to that is 100% the idea that they'll modify it. They just either don't, out of apathy (and should have just used a component library), or do, and do so terribly (and should have just used a component library).


I am. I'm saying that what you see as its main selling point:

> It's easy to customize!

May be very overstated, with lots of users not caring or even knowing about that as a selling point. They just use it for all of the other selling points.

> The thing is literally distributed via copied files and updated via diffs instead of being a package. The entire cargo cult that led up to that is 100% the idea that they'll modify it. They just either don't, out of apathy (and should have just used a component library), or do, and do so terribly (and should have just used a component library).

It definitely started out that way. Just like sports brands started out selling trainers for, you know, sports. And now 99.9% of the minutes-worn for them is during non-sports activities. But they're meant for sports! That's their selling point! Sure, most people couldn't care less though, and don't even buy them with the idea of using them for sports.


None of the other selling points are any more applicable to the average dev...

And it didn't just start out that way; the only package is still a CLI tool that will then diff your files.

Overall, it really doesn't make sense to paint this all as some now discarded origin story, especially when you're in the comment section of a major launch that's based on shadcn and selling itself on the exact same story.

But if that's how you see it, let's agree to disagree.


I am not much of a philosopher and don't have strong opinions about these things. I argue mostly from a utilitarian point of view. I think it is fair to say most people find shadcn useful, hence its popularity. As to whether it is the “right” way to build/compose UI, that will depend on the context.

As for the absence of design tokens: shadcn is remarkably themable using the main CSS file.

Also, most people using it are probably prototyping or building a school project or something - most are not businesses making big bucks.

So from a utility point of view, it is great. The rest I leave to the philosophers.


Well either you can be a philosopher for yourself, or you can be one of their subjects, consuming what they throw down on you. The way I see it, you might as well understand the muck they're leading you to under the guise of utility.

But if you're never aiming for anything more than prototypes and school projects, I agree, do whatever.


If 23andMe is bounded by the legal system - why shouldn't we trust them the same way we trust Dropbox not to sell our personal files?

No one has been able to explain this to me.


Because on average your files on Dropbox are worth way less than your DNA.


Have you seen the legal system lately?


Sure - so this isn't a unique complaint about 23andMe then? We should be deleting our data from Dropbox, Google, etc.?


Now you're getting it.


I never used Gmail or Chrome because I didn't want to become dependent on tools controlled by a search and ads company.

I definitely feel now that was the correct instinct.

