Super interesting project. But I cannot understand why you support only EC2 instances as clients. For what it is worth, it looks strange and limiting. By default I expect to be able to use Regatta Storage from everywhere: from my local machine, from my Docker containers running elsewhere, etc.
This isn't a technical limitation, per se, but a time limitation in terms of getting to the place where we feel comfortable supporting those environments for the public. I still wouldn't recommend mounting it from a local environment (because NFS behaves pretty poorly when it can't connect to the server), but we do have a CSI driver for containers running in K8s. We expect that customers will get the best experience if their instances are very close (latency-wise) to our instances, which is why we only support access from us-east-1 in AWS. We expect to launch in more regions and clouds in the coming months.
If you want early access to other clouds or the CSI driver, feel free to email hleath [at] regattastorage.com.
I used the same approach based on Rclone for a long time, and I wondered what makes Regatta Storage different from Rclone. Here is the answer: "When performing mutating operations on the file system (including writes, renames, and directory changes), Regatta first stages this data on its high-speed caching layer to provide strong consistency to other file clients." [0]
Rclone, by contrast, has no layer that would guarantee consistency among parallel clients.
This is exactly right, and something that we think is particularly important for applications that care about data consistency. Oftentimes, we see that customers want to quickly hand off tasks from one instance to another, which can be incredibly complex if you don't have guarantees that your new operations will be seen by the second instance!
Running sqlite over rclone is not a disaster as long as you run only a single instance working with that database. Rclone provides no support for locking semantics.
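SQLite's safety with multiple writers depends on advisory file locks, which FUSE-based mounts like rclone's generally don't implement. Here is a minimal illustration of advisory locking on a local POSIX filesystem (using `flock` for simplicity; SQLite's unix VFS actually uses POSIX `fcntl` locks, but the idea is the same):

```python
import fcntl
import tempfile

# Two independent opens of the same file stand in for two "clients".
path = tempfile.NamedTemporaryFile(delete=False).name
f1 = open(path, "w")
f2 = open(path, "w")

# Client 1 takes an exclusive advisory lock.
fcntl.flock(f1, fcntl.LOCK_EX | fcntl.LOCK_NB)

# Client 2's non-blocking attempt fails while the lock is held.
try:
    fcntl.flock(f2, fcntl.LOCK_EX | fcntl.LOCK_NB)
    conflict_detected = False
except BlockingIOError:
    conflict_detected = True

print("second client blocked:", conflict_detected)
```

On a local filesystem the second attempt is refused, which is what keeps two SQLite writers from stepping on each other. On an rclone mount these lock calls typically don't do anything meaningful, so a second writer would go undetected.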
As for the "misleading" part, I probably should have said "confusing," because I don't think it was intentional. What I mean is that instead of introducing your caching layer, you frame the product around S3, even though the object storage provider seems totally interchangeable. That said, it seems to work for a lot of your audience, from what I can tell from other comments here.
As for Express One Zone providing consistency, it would make more groups of operations consistent, provided that the clients could access the endpoints with low latency. It wouldn't be a guarantee, but it would be practical for some applications. It depends on what the problem is - for instance, do you need that no one ever sees noticeably stale data? I can definitely see Express One Zone getting you there if it works as described.
Yes, I think this is something that I’m actually struggling with. What’s the most exciting part for users? Is it the fact that we’re building a super fast file system or is it that we have this synchronization to S3? Ultimately, there just isn’t space for it all — but I appreciate the feedback.
I think they both go together. It might take about 10 minutes to give a good high-level explanation of it, including how the S3 syncing works: that S3 lags slightly behind the caching layer for reads, and that you can still write to S3 directly. Two-way sync. I imagine S3 would be treated sort of like another client if updates came from S3 and from file clients at the same time. Writing directly to S3 probably isn't great for an area that's being actively edited through the file system, but if you want to write to a dormant area of S3 directly, that's fine.
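Treating S3 as "just another client" implies some reconciliation rule when the cache and the bucket are both written. A toy last-writer-wins sketch of that idea (purely my guess at plausible semantics, not anything Regatta has documented; all names here are made up):

```python
# Toy last-writer-wins reconciliation between a cache layer and an
# S3-like store. Each side tracks (mtime, data) per key; the newer
# write wins. Purely illustrative -- not Regatta's actual algorithm.

def reconcile(cache: dict, s3: dict) -> dict:
    """Merge two {key: (mtime, data)} views; the newest mtime wins."""
    merged = dict(cache)
    for key, (mtime, data) in s3.items():
        if key not in merged or mtime > merged[key][0]:
            merged[key] = (mtime, data)
    return merged

cache = {"a.txt": (10, "cached edit"), "b.txt": (5, "old")}
s3    = {"b.txt": (8, "uploaded via S3"), "c.txt": (3, "dormant file")}

merged = reconcile(cache, s3)
assert merged["a.txt"] == (10, "cached edit")      # cache is newer
assert merged["b.txt"] == (8, "uploaded via S3")   # direct S3 write is newer
assert merged["c.txt"] == (3, "dormant file")      # only exists in S3
```

The "dormant area" point falls out of this: writes to keys nobody is editing through the cache merge cleanly, while concurrent writes to a hot key need a tie-breaking rule like the mtime comparison above.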
From the article: "Intel believed it was justn’t a big deal". Is it a thing in the English language to form "justn't" like that, as a shortcut for "just not"?
Despite being developed in 2010s-2020s, the codebase gives those warm 1980s vibes. Microcomputers, endless possibilities, bright future ahead. What a departure from the modern world of doom.
Yeah I think this is the biggest issue. Even if you know precisely the configuration of the target computer, you can’t know if there’s going to be a cache miss. A conventional CPU can reorder instructions at runtime to keep the pipeline full, but a VLIW chip can’t do this.
In reality, of course, you don’t even know the precise configuration of the computer, and you don’t know the exact usage pattern of the software. Even with profile-guided optimization, someone could use the software with data that causes different branch patterns than in the profile, and then it runs slowly. A branch predictor notices this at runtime and compensates automatically.
It's not a question of whether the Sufficiently Smart Compiler ever arrives. The problem is that VLIW architectures are a moving target: you can only really optimize for one specific chip. The next iteration of the same architecture brings a totally different set of performance trade-offs, rendering the previous optimization strategies ineffective.
This is the Achilles' heel of any VLIW architecture. A Sufficiently Smart Compiler becomes outdated with every new chip revision. Binaries that ran fast on a previous revision of the architecture start to run slowly on newer chips.
Magnesium L-Threonate - has the most potent therapeutic effect because it readily crosses the blood-brain barrier. The drawback is that some people are sensitive to this form of magnesium and can experience nausea, vomiting, migraines, etc. I would advise against everyday use because this form is more of a medication than a supplement. It is used for serious conditions like dementia, neurological impairment, and nutritional deficiencies.
Magnesium Taurate - a combination of magnesium and taurine. A good form for people with metabolic conditions: T1DM, T2DM, hyperlipidemia, vitamin and mineral deficiencies.
Magnesium Glycinate (aka Magnesium Bisglycinate) - a somewhat less potent form of magnesium, but it has good bioavailability and fewer side effects. This form is also a source of glycine, an important amino acid that benefits metabolism and has a mild calming, stabilizing effect on the nervous system. It helps with anxiety, panic attacks, and insomnia.
Magnesium Citrate - a cheaper but ok magnesium form for everyday use.
Magnesium Oxide - the cheapest and least bioavailable form of magnesium. Unfortunately, this is the most widespread form in many countries due to its low price. Try to avoid this form if you have a choice.
Bonus point: if you have a specific condition, you can combine several forms of magnesium to reach multiple therapeutic goals. For example, some popular combinations are presented below:
a. Magnesium Taurate + Magnesium Glycinate
b. Magnesium L-Threonate + Magnesium Taurate
c. Magnesium L-Threonate + Magnesium Taurate + Magnesium Glycinate
I have been taking 300mg of Magnesium Bisglycinate 30 mins before sleep for the past 5 months or so. I have anxiety which can lead to insomnia. It has been a great help.
I think you may be confusing it with something else. Magnesium chloride is literally a salt, so there's nothing to degrade, and it's not going to react with air (if it did, it'd release chlorine gas).
Like sodium chloride, the most it will do is grab moisture out of the air and try to recrystallize into bigger clumps.
Edit: It occurs to me you may have meant "It doesn’t lose potency over time!", in which case, true!
The story is missing a continuation. What happened after the CTO found out that the system had been silently replaced behind his back? IMHO, that's the most interesting part. I bet the CTO's ego was crushed.