
That's pretty impressive. Anyone using minio in production? What's the backup story?



I've done Kubernetes consulting with some other people, and for their on-prem solutions I always recommend just buying a TrueNAS with its S3 API.

The other consultants I tend to end up working with always try to sell people on MinIO + rook-ceph and then offer support along the way. So basically you buy their Kubernetes deployment and then you have to pay them for the rest of your life for troubleshooting. The TrueNAS seems cheaper to me.

I don't see a good backup story, but maybe I just don't know it.


TrueNAS just packages MinIO, so it would seem to be the same result but with less flexibility and visibility into the cluster. Although if the strategy is not to hyperconverge storage, that could be desirable, I guess.


You are indeed correct, but MinIO actually consists of a lot of components. In the case of TrueNAS, they handle most of the storage-related components, with a MinIO object gateway running on top.

A pure Kubernetes deployment is more complex (although it's all part of the same binary, I think).

I could be completely wrong though.


In TrueNAS SCALE (based on Linux, not FreeBSD), they actually run most integrated and third-party "apps" (including MinIO) inside a k3s cluster running on TrueNAS itself.


Is TrueNAS Scale stable yet?


It seems to be under pretty active development but I do like their strategy of releasing early and often. I haven't run into any issues yet with it in my lab.

My biggest wishes for TrueNAS SCALE:

1. NVMe-oF support (over RDMA or TCP)

2. An aarch64 UEFI ISO

I could see TrueNAS SCALE overtaking Proxmox in the KVM space soon; their API and UI are already more enjoyable to use, IMO.


They're in RC now. In terms of data loss, I'm not concerned by what I've seen. Some features are still missing, such as SED control. Most of the things I wish were there aren't present on Core either, such as a VM VNC clipboard, a fix for the web UI logging out often when 2FA is enabled, and some mysterious directory-locking issue when pulling from Dropbox.


ZFS on TrueNAS/FreeBSD is in some weird twilight zone between great stability and horribly broken; we ended up with VM corruption after removing an L2ARC cache drive on TrueNAS.

I do agree that Ceph is much more complex, which is why I hope OpenEBS's zfs-localpv implements send/recv for a poor man's replication.
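The underlying mechanism is just piping zfs send into zfs recv on a backup host. A minimal sketch in Python (the dataset, pool, and host names are placeholders, and it assumes SSH key auth is already set up):

    import subprocess
    from datetime import datetime, timezone

    DATASET = "tank/vols"          # hypothetical source dataset
    REMOTE = "root@backup-host"    # hypothetical backup target
    REMOTE_DATASET = "backup/vols"

    # Snapshot first; send operates on snapshots, not live datasets.
    snap = f"{DATASET}@repl-{datetime.now(timezone.utc):%Y%m%d%H%M%S}"
    subprocess.run(["zfs", "snapshot", snap], check=True)

    # Full send shown here; an incremental send would add "-i <prev-snap>".
    send = subprocess.Popen(["zfs", "send", snap], stdout=subprocess.PIPE)
    subprocess.run(["ssh", REMOTE, "zfs", "recv", "-F", REMOTE_DATASET],
                   stdin=send.stdout, check=True)
    send.stdout.close()
    if send.wait() != 0:
        raise RuntimeError("zfs send failed")

Run that from cron with incremental sends and you get basic replication, though replication still isn't a backup.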


Could you elaborate more? I've been evaluating ZFS for my new home workstation and would like to understand what failure modes I need to be aware of.


ZFS is still the best FS out there, but a good rule of thumb is to look over a project's open and closed issues and see what problems come up and how they're handled: a sort of issue-driven assessment.


Ah, like the hibernate/suspend and swap file issue for mobile computers?


TrueNAS with S3 piqued my interest. Does that use MinIO under the hood to serve objects? This doc indicates it might at least use the browser component: https://www.truenas.com/docs/core/services/s3/


Pretty sure it's MinIO for everything.

Mainly use it for GitLab object storage currently: logs, Docker images, etc.


> I don't see a good backup story, but maybe I just don't know it.

Where do data-management SaaS companies like Rubrik.com and Druva.com fit in? Are they not a popular enough solution for protecting min.io deployments?


Have you ever seen people using rook-ceph and the built-in object gateway Ceph provides? AFAIK Ceph's object store powers some official cloud solutions out there, like DigitalOcean Spaces.


Yes, one of my clients uses it. It seems okay, but it's all pretty low volume and most people have no real concept of disaster recovery. I think backups and DR are things that most people take for granted and don't really think about.


Yep, reasonable -- most people these days don't have to, since the clouds do the hard work for them. The knowledge of how to run those kinds of systems becomes harder to find (in the community) by the year.

Also, by the time most people account for 2-3 copies of the data plus one completely offsite backup, I think their eyes might start to water at the cost, especially if you also want reasonable performance. I experiment with this kind of stuff a lot and am always surprised at how much more it costs than you'd expect to get drive-, node-, and region-level redundancy with backups. Assuming regular RAID1, you essentially need a minimum of 3TB of raw storage for every 1TB of usable storage: 2TB for the local mirror plus 1TB for the offsite copy.


That's the cost of privacy, at least until there's a bring-your-own-private-key (BYOPK) cloud service.


I'm curious: is the use case for Kubernetes and MinIO mainly just backups, or are there other good use cases for Kubernetes and object storage that you're seeing?


Hedging bets against an S3 outage. A cloud-agnostic solution that abstracts object storage. Reducing S3 costs by using MinIO as a cache.

Plenty of good use cases.


Devs can also spin up a local object store to test with instead of hitting some common remote bucket.
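As a sketch of that workflow (the endpoint and credentials below are MinIO's out-of-the-box defaults, and the bucket name is made up), the standard AWS SDK just needs its endpoint pointed at the local instance:

    import boto3

    # Point the AWS SDK at a local MinIO instead of real S3.
    s3 = boto3.client(
        "s3",
        endpoint_url="http://localhost:9000",
        aws_access_key_id="minioadmin",      # MinIO's default root user
        aws_secret_access_key="minioadmin",  # MinIO's default root password
    )

    s3.create_bucket(Bucket="test-bucket")
    s3.put_object(Bucket="test-bucket", Key="hello.txt", Body=b"hello")
    print(s3.get_object(Bucket="test-bucket", Key="hello.txt")["Body"].read())

Tests then run against localhost:9000 and never touch a shared remote bucket.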


Of course it's used in production. We use it heavily.

It doesn't have to use local disks. A big part of what it does is provide an S3-compatible API, which you can use with any number of backends, even other cloud providers. Say you want to be able to deploy on-prem, on AWS, and on GCP. You can put MinIO in front of all of these and your app won't care.

Of course, if you are writing to an actual disk you'll have to figure out the backup part.
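One crude way to cover that, sketched below with placeholder endpoints, credentials, and bucket names (MinIO's own mc mirror tool is the more usual answer), is to periodically sync everything to a second S3 endpoint:

    import boto3

    # Both clients speak the same S3 API; only the endpoints differ.
    src = boto3.client("s3", endpoint_url="http://minio.internal:9000",
                       aws_access_key_id="SRC_KEY",
                       aws_secret_access_key="SRC_SECRET")
    dst = boto3.client("s3", endpoint_url="https://s3.us-east-1.amazonaws.com",
                       aws_access_key_id="DST_KEY",
                       aws_secret_access_key="DST_SECRET")

    # Naive one-way copy: fine for small buckets; real tooling handles
    # deltas, deletes, and retries.
    for page in src.get_paginator("list_objects_v2").paginate(Bucket="data"):
        for obj in page.get("Contents", []):
            body = src.get_object(Bucket="data", Key=obj["Key"])["Body"]
            dst.upload_fileobj(body, "data-backup", obj["Key"])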


I've only heard horror stories from people using this on real apps. The last one was a bug where MinIO wouldn't actually assert a file had been updated or something. Maybe they've improved since then, but I wouldn't call this "production ready" for the time being.


I think you're thinking of this comment thread: https://news.ycombinator.com/item?id=28133902

I was also surprised by many of the comments. I had only played with minio a bit, and was considering using it. The durability comments were concerning, and there were other issues too.

One example is this bug: https://github.com/minio/minio/issues/8873

Basically, it was treating object names with '//' in them, like foo//bar, the same as foo/bar, except for sharding, which treated them as different.

Their fix was just to disallow '//' in an object name, even though other S3-like implementations allow it.
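To make the incompatibility concrete, here's a sketch (the bucket name is a placeholder): S3 treats keys as opaque strings, so foo/bar and foo//bar are two distinct objects:

    import boto3

    s3 = boto3.client("s3")  # real S3, or any store that keeps keys opaque

    # Two distinct objects; a store that collapses "//" into "/" would
    # silently make these overwrite each other.
    s3.put_object(Bucket="my-bucket", Key="foo/bar", Body=b"one slash")
    s3.put_object(Bucket="my-bucket", Key="foo//bar", Body=b"two slashes")

    for obj in s3.list_objects_v2(Bucket="my-bucket", Prefix="foo/")["Contents"]:
        print(obj["Key"])  # lists both keys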


Given that MinIO seems to be the canonical way to add S3 API compatibility to Azure Blob Storage[1], this is not very encouraging.

We have a lot of products that use the S3 API, and Azure was the only cloud that did not offer it out of the box. At first I could not believe this was the case, because even smaller providers like Linode or DigitalOcean offer S3 API compatibility. But then I found the post linked below.

[1] https://cloudblogs.microsoft.com/opensource/2017/11/09/s3cmd...



It's good that there are more alternatives; this seems like it might be a lighter-weight option compared to MinIO.

Still, I wish Azure just added a built-in compatibility layer, like virtually every other cloud provider - I'm not a big fan of having to spin up a container just for this reason.



