We provide an easy-to-use platform to build production-grade workflows for all your data: image, video, audio, and document processing.
For a bit of context: we previously built Scaleway (https://scaleway.com/), a European cloud service provider, and initially started Koyeb around multi-cloud object storage (https://news.ycombinator.com/item?id=21005524).
We are now going a step further: we also want to provide an easy way to process data and to orchestrate distributed processing from various sources.
Currently, we provide an S3-compliant API to push your data. You can implement processing workflows using ready-to-use integrations (https://www.koyeb.com/catalog) and store results on the cloud storage provider of your choice (e.g. GCP, Azure Blob, AWS S3, Vultr, DigitalOcean, Wasabi, Scaleway, or even your own MinIO servers).
We're working on adding support for Docker containers and custom functions to let our users combine catalog integrations with their own code in workflows. We will also add support for new data sources to send, ingest, and import data from different services.
We of course take care of all the infrastructure management and scaling of the platform.
The platform is in its early access phase, and I'd love to hear what you think: your impressions and feedback.
Thanks a lot!
Given that `steps:` is a list, isn't having `after: video-clipping` redundant, since it already comes after the video-clipping step?
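For context, a hypothetical sketch of the kind of spec in question (step names and fields are assumed here, not Koyeb's actual syntax). One plausible reason an explicit `after:` is not redundant is that the list could describe a DAG rather than a strict sequence, e.g. two steps fanning out from the same parent:

```yaml
# Hypothetical workflow spec, not Koyeb's actual syntax.
steps:
  - name: video-clipping
  - name: generate-thumbnails
    after: video-clipping   # runs once clipping is done
  - name: extract-audio
    after: video-clipping   # could run in parallel with thumbnails
```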
We have some tooling to develop individual catalog integrations locally and to test that an integration works as expected with object storage. We plan to release this tooling publicly so all users can test their functions locally before using them in a workflow.
Regarding workflow environments: right now, you have to create one processing Stack per environment, e.g. dev, staging, and prod. Later on, we want to spawn environments for each Git branch.
We allow you to build, deploy, and run (i.e. we operate the infrastructure) processing workflows using ready-to-use integrations, containers, or custom functions. We also provide a multi-cloud storage layer: you can push and use data stored on multiple cloud storage providers through a simple S3 interface, instead of dealing with each object storage provider's own implementation.
We also plan to be compatible with the Serverless Framework for the custom functions part.
Dependency injection frameworks already went down this path and decided against it in the end. I wish CI, data pipelines, config management, IaC tools, and all the other modern adopters of the YAML-fits-everything approach learned from the past.
What we need is ease of creating mini-languages, but apparently that is not something industry or academia is aiming at, so this skill is nonexistent among most engineers, who often come up with the ideas for new products.
How did dependency injection frameworks solve this problem then?
I think that's why most of the popular dependency injection frameworks I see support configuration through XML or JSON.
Secondly, unless you're selling compiled software for others to configure, there really is no difference between config in config files and config in code, except that the former does not let you build meaningful and useful abstractions. You're presumably delivering it all into production via CI/CD anyway, so what difference does editing a config file make vs. editing actual source code, besides the lack of real type safety and IDE assistance?
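To illustrate the point, here is a minimal sketch of "config as code", not tied to any particular tool; the `Pipeline`/`Step`/`fan_out` names are illustrative, not from a real library:

```python
# Minimal sketch: config expressed in code instead of a YAML file.
# Type annotations give you static checking; functions give you abstractions.
from dataclasses import dataclass, field

@dataclass
class Step:
    name: str
    after: list[str] = field(default_factory=list)

@dataclass
class Pipeline:
    steps: list[Step]

def fan_out(parent: str, children: list[str]) -> list[Step]:
    """A reusable abstraction: every child step runs after `parent`."""
    return [Step(name=c, after=[parent]) for c in children]

# Abstractions compose, and a type checker or IDE catches mistakes
# (e.g. passing an int as a step name) before anything runs.
pipeline = Pipeline(steps=[Step("clip")] + fan_out("clip", ["thumbs", "audio"]))
```

A plain config file can only spell out the expanded result; it cannot express the `fan_out` abstraction itself.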
Pulumi has figured this out.
(And I suspect that if your end users need to configure your software's use of Spring's Dependency Injection in XML config then you are probably doing it wrong.)
FWIU, their path is not the right one for DevOps.
Dhall and Cue have the correct starting philosophy, i.e. being Turing-incomplete.
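A rough Dhall sketch of what that buys you (the field names are my own invention): you get typed records and functions for abstraction, but no unbounded recursion, so evaluating a config is guaranteed to terminate.

```dhall
-- Rough Dhall sketch; record shape is illustrative.
let mkStep =
      \(name : Text) -> { name = name, after = [] : List Text }

in  { steps = [ mkStep "video-clipping", mkStep "extract-audio" ] }
```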
Your users still have to learn your new language. That’s the part designers are trying to avoid.
At Koyeb we also have an S3-compatible layer to let users send, process, and store data (on any cloud or your own edge MinIO). As @bdcravens said, that's where the similarities end; plus, Koyeb is entirely managed.