The docs are apparently still under construction, but the platform has similar features to fly.io and Render. It offers more regions, support for more computing power, and a simple but powerful user experience. There's no free plan, but new users get $100 in free credits to try out the platform. All in all, though, fly.io, Render, Railway, and Klutch.sh have pretty comparable features - it's really a matter of personal preference.
The points in this post remind me of the "configuration clock"[1]. Hardcoded -> config file -> rules -> DSL -> hardcoded. Maybe it should be the "configuration ouroboros". It provides one explanation for why this config-becoming-code situation occurs over and over, and what to watch out for.
This came up in a recent podcast. They thought that the plain clock model doesn't account for real progress being made, and that a better model is the "configuration spiral": it looks like the clock when viewed from above but shifting your perspective a bit reveals that it's actually a spiral that describes how to build abstraction layers. Not that we hit the mark every time, but that a clock is too reductionist.
Precisely. In a case at work, I've found that some YAML is complicated because the abstraction level at which it's describing things is too low; the fix is to move most of the logic into code and keep some minimal flexibility in the YAML (or use SQL instead, though that choice is more about who we want to empower to edit it).
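To illustrate the kind of shift I mean (a hypothetical sketch, not our actual system; the field names and the "normalize" key are made up): instead of spelling out every transformation step in YAML, keep only the high-level intent there and let code own the logic.

    # Before: YAML at too low an abstraction level spells out every step:
    #   fields:
    #     - name: email
    #       steps: [strip, lowercase]
    #
    # After: the YAML only names the intent; the code owns the logic.
    import yaml  # pyyaml

    def normalize_email(value: str) -> str:
        return value.strip().lower()

    NORMALIZERS = {"email": normalize_email}

    config = yaml.safe_load("normalize: [email]")
    row = {"email": "  Alice@Example.COM "}
    for field in config["normalize"]:
        row[field] = NORMALIZERS[field](row[field])
    print(row)  # {'email': 'alice@example.com'}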
Can you say a bit more about "performant", or point me to some information? I haven't found any yet. I'm processing millions of protobufs per second and would love to get away from batch jobs for some incredibly basic counting -- this seems like a fit conceptually... If it's a fit, any recommendations on the best way to get those protobufs off a Kafka stream and into PipelineDB would be great, too!
Performance depends heavily on the complexity of your continuous queries, which is why we don't really publish benchmarks. PipelineDB is different from more traditional systems in that not all writes are created equal, given that continuous queries are applied to them as they're received. This makes generic benchmarking less useful, so we always encourage users to roughly benchmark their own workloads to really understand performance.
That being said, millions of events per second should absolutely be doable, especially if your continuous queries are relatively straightforward as you've suggested. If the output of your continuous queries fits in memory, then it's extremely likely you'd be able to achieve the throughput you need relatively easily.
Many of our users use our Kafka connector [0] to consume messages into PipelineDB, although given that you're using protobufs, I'm guessing your messages require a bit more processing/unpacking to get them into a format that can be written to PipelineDB (basically something you can INSERT or COPY into a stream). In that case, what most users do is write a consumer that simply transforms messages into INSERT or COPY statements. These writes can be parallelized heavily and are primarily limited by CPU capacity.
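To make that concrete, here's a rough sketch of such a consumer (all the names are placeholders I made up: the "events" stream, the event_pb2.Event message type, and the counting continuous view; I'm using confluent-kafka and psycopg2 purely for illustration):

    # One-time setup in PipelineDB for a simple counting query, e.g.:
    #   CREATE STREAM events (key text);
    #   CREATE CONTINUOUS VIEW event_counts AS
    #       SELECT key, COUNT(*) FROM events GROUP BY key;
    import io
    import psycopg2
    from confluent_kafka import Consumer
    import event_pb2  # hypothetical module generated by protoc

    consumer = Consumer({
        "bootstrap.servers": "localhost:9092",
        "group.id": "pipelinedb-writer",
        "auto.offset.reset": "earliest",
    })
    consumer.subscribe(["events"])

    conn = psycopg2.connect("dbname=pipeline")
    cur = conn.cursor()
    batch = []

    while True:
        msg = consumer.poll(1.0)
        if msg is None or msg.error():
            continue
        event = event_pb2.Event()
        event.ParseFromString(msg.value())  # unpack the protobuf
        batch.append(event.key)             # assumes values contain no tabs/newlines
        if len(batch) >= 10000:
            # COPY the whole batch into the stream in one round trip
            cur.copy_from(io.StringIO("\n".join(batch)), "events", columns=("key",))
            conn.commit()
            batch.clear()

Running one such process per Kafka partition is a natural way to parallelize the writes.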
Please feel free to reach out to me (I'm Derek) if you'd like to discuss your workload and use case further, or set up a proof-of-concept--we're always happy to help!
[Gyroscope cofounder here] We've experienced similar things and that was the motivation for this post -- ticketing systems and startups seem contradictory. What other processes/stack does your company use?
Thanks! Really an excellent tool! I'd originally written a shiny app, but this was much better!
Other than the plot (thanks, gballan), it would be nice to adjust the width of layout cells, especially the code cells. It would also be cool to adjust the spacing between cells to reduce whitespace when one cell in a row takes up a huge amount of space but the others do not.
It was not clear to me how to place markdown text between or after tables and sliders, but I might have just missed how to do that.
Hey all, I wrote this little Blab after struggling to visualize the impact of budgeting and investments in the long term. People kept telling me to save and invest, but I wanted to see empirically why this is true.
Features I'd like to add:
- Better taxation rules
- Specific asset classes and investment accounts with associated taxation, withdrawal limits, and contribution limits
- Investment allocation strategy with a changing risk profile and tax-conscious account allocation
- Social security income
- Highlighting the "important" factors (i.e., the ones that impact the plot the most)
- A probabilistic simulation to get a better result estimate (i.e., confidence intervals); a rough sketch of this follows the list
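For the probabilistic simulation, here's the kind of Monte Carlo sketch I have in mind (the 7%/15% return assumptions and the contribution amount are placeholders, not what the Blab currently uses):

    import numpy as np

    def simulate(years=30, annual_contrib=10_000, mean=0.07, stdev=0.15, runs=10_000):
        # Draw a random return for each run each year and compound the balance.
        rng = np.random.default_rng(0)
        balances = np.zeros(runs)
        for _ in range(years):
            returns = rng.normal(mean, stdev, size=runs)
            balances = (balances + annual_contrib) * (1 + returns)
        return balances

    final = simulate()
    lo, mid, hi = np.percentile(final, [5, 50, 95])
    print(f"5th/50th/95th percentile after 30 years: {lo:,.0f} / {mid:,.0f} / {hi:,.0f}")

The percentiles of the simulated final balances give the confidence intervals directly.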
If you want to contribute, that'd be awesome! You can also easily fork it. If you have questions, I'm happy to answer them here!