Show HN: Coherence API – Modern Serverless Code. Pass a func and it runs online (coherenceapi.com)
41 points by bthornbury 4 months ago | 35 comments



The entirety of the docs talks about how easy it is to set up a job, run a job, or have a recurring job, and then pricing is in compute GB-hours. I've literally no idea how those two things relate to each other in real-world use. It's a bit of a cognitive disconnect that I'm sure makes a lot of sense having built it, but not to me as a possibly interested developer.

Some examples of how many job iterations would use 1 GB-hour would help, or maybe I'm missing them?


This is great feedback, and initially I had a calculator for this. We removed it because we felt it would be intuitive, but you're right: having built it ourselves, it's easy to lose that outside perspective.

It's really just the running time of your program (you control this) plus less than a second of setup / teardown time. If your job is allocated 1 GB, then your GB-seconds are simply that running time in seconds. If it's a 128 MB job, then you've only consumed 1/8th as many GB-seconds.
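
To make the math concrete, here's a rough back-of-the-envelope sketch in Python (the ~1 second of setup/teardown overhead is an approximation for illustration, not an exact billing figure):

    # Rough GB-hours estimate for a batch of jobs.
    # Assumes billing is memory_gb * runtime_seconds, plus ~1s of setup/teardown per run.
    def gb_hours(memory_mb, runtime_seconds, runs, overhead_seconds=1):
        gb = memory_mb / 1024
        gb_seconds = gb * (runtime_seconds + overhead_seconds) * runs
        return gb_seconds / 3600

    # e.g. a 128 MB job that runs for 5 seconds, 10,000 times:
    print(gb_hours(128, 5, 10_000))  # ~2.08 GB-hours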

Let me know if this example gets it across; it's a simplified version of the pricing model other serverless platforms use.


Any plans to have pre-paid credits?


Hey, we could do this, but I'm curious why you'd prefer it? More solid payment expectations?


Perhaps. One of the things I sort of like about Twilio is that they can do "credits" and I have the option to only pay for a certain amount.


For me to evaluate something, the amount I'll be billed must have an upper bound that I control. So yes, more solid payment expectations.


Same ^

I really need a way to guarantee that costs stay bounded. I'd think most people have similar expectations.


Because I'd hate for a bug in my code to result in an accidental infinite loop at 2 AM and then wake up to a $1,000 bill. I'd rather pre-pay and have my task get killed if I run out of credit.


In Germany almost no one owns a normal credit card.


How do you usually purchase things online?


Direct debit, invoice + bank transfer, and PayPal are all fairly common options in Germany (plus a bunch of smaller services, plus the option of handing the payment to the delivery person, but that's fairly expensive, so I assume it's not used all that often despite being widely available).


Hey HN, I've been eager to share this with you. I've finally launched, and I think this is going to be very useful for a lot of devs who are like me.

Coherence API is a twist on serverless that lets you deploy and run code the same way you call a local function. It integrates with the programming language, so you can just pass a func and it gets run. No need to change your workflow or toolset.
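
Roughly, the idea looks like this in Python (an illustrative sketch only; the `coherence` module and `remote` decorator below are placeholder names for the sake of example, not necessarily the exact API):

    # Illustrative sketch only: module and decorator names are placeholders,
    # not necessarily the real Coherence API.
    import coherence  # hypothetical client library

    @coherence.remote(memory_mb=128)
    def word_count(text):
        # ordinary Python; nothing here is "serverless-aware"
        return len(text.split())

    # Called like a normal local function, but the work runs remotely:
    n = word_count("the quick brown fox")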

I'll be here to answer any questions.


If I build 92 of these things and they communicate with each other over sockets, files, streams, or messages, how does Coherence (a) help me visualize and debug that network, (b) transition the processing graph to a new state safely (blue/green deployments to one or more collaborating components)?

The problem that I have with these serverless stacks is that they all demo pretty well for the trivial case of "it sends emails / transcodes video / puts files in S3" but then jump to "run your entire business on this" without clearly showing how it's possible to do that with a complex system.

Is your solution different?


Hey, help me understand how you envision Coherence helping you debug that network.

I'm moving some of my own services over to Coherence, so it's very much intended to "run your whole business".


There is so much more to running a cloud system than running a function: monitoring the function (billing, errors, throughput, latency), scaling the function, moving data between two functions, storing data, signaling a function, upgrading and downgrading a function, authorizing the function to run, and upgrading and downgrading stored data and functions while taking customer traffic with zero downtime and no data loss.

"Backend-as-a-service" businesses need to check all of those boxes.


I'm going to decompress part of your comment here:

- scaling the function

This is where we've invested most of our effort. We're planning a load-test demo soon.

- monitoring the function (billing, errors, throughput, latency)

This is something we will need to invest more in. Great feedback here. We don't have much in the way of monitoring yet, but the data is all there; we just need to expose it to users.

- moving data between two functions, storing data

Traditional options like Redis and Postgres seem like a good fit here. You can't host them on Coherence yet, but it's something we're working on.
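
For example (a minimal sketch using plain redis-py against a Redis instance you host elsewhere; nothing here is Coherence-specific):

    # Minimal sketch: two functions sharing work through an external Redis queue.
    # Assumes a reachable Redis host you run yourself; not a Coherence API.
    import json
    import redis

    r = redis.Redis(host="my-redis.example.com", port=6379)

    def producer(urls):
        for url in urls:
            r.rpush("jobs:queue", json.dumps({"url": url}))

    def consumer():
        while (item := r.lpop("jobs:queue")) is not None:
            task = json.loads(item)
            # ... process task["url"] and persist results to Postgres, S3, etc.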

- signaling a function, upgrading a function, downgrading a function

Versioning is particularly interesting here. A lot of the time we've thought of these functions as ephemeral.

Send me an email (bryan at coherenceapi . com); I'm really interested in talking more about the use cases you're envisioning.


I would argue that in your use case, serverless in general is not a technology that will work for you, and trying to make it work is a square-peg-in-a-round-hole problem.


How do you stop someone from submitting some kind of malicious code?


We've made the assumption that users will enter malicious code and designed the security boundaries around that.

A malicious user that manages to break out of the container environment will only have access to other instances of their own applications. Stronger security policies exist between users.


I read the concern more as "what do you do to stop someone from cURLing an .iso a million times from someone's S3 account".


I'm not too sure what you mean here.

In terms of DoS'ing our services, we limit disk space usage per job and are working on bandwidth limitations (this is a weakness, but for now we'll just ban abusers and work with those who have legit needs).


Not DoS'ing your services; using your services to DoS others.

Or things like using stolen credit cards to do Bitcoin mining.


Every service provider has to deal with this.

We have plans but ultimately won't really know until we're in the thick of it.

Our plan, though, is to limit bandwidth and ban abusers. We'll work with Stripe's Radar platform to reduce stolen-credit-card fraud.


I've made this comment before (and was subject to public ridicule and mass downvoting for it): I'm reluctant to try new products and services if they require me to enter CC details. This is not a rant; I know you have a business to run and it had better sustain itself. I really want to try this service, yet I won't.

And, btw, I don't even own a CC.


I'm aware this is going to turn a significant number of people away.

The fact is that at this early stage, it will help restrict fraud and bring in more engaged customers whose expectations we can make sure are met.

It's not something we plan to require forever.


Sure, the reasoning is clear; we're in a "fraud first" world. Still, if there were some sandboxed setup I could use, I'd happily do that. And if it works, I'd happily hand over (my employer's...) billing info thereafter.


Can you describe what limitations you might be willing to live with for a frictionless sign-up?

For example, if you only had 2 compute hours, would it be enough for your evaluation?


Absolutely! 1 hour would suffice, and any computing/IO resources could be limited to near zero.


This is common feedback I'm getting from those interested in a hobbyist tier.

I'm thinking we'll put something in place for a free, frictionless small evaluation after we've thought about fraud management a little bit.

Thanks for the feedback here, everyone; this is really invaluable!


Congrats on shipping!

I see that you worked on .NET at Microsoft, and the supported languages at present are Python 2/3 and .NET Core.

So.... How different is this from Azure Functions? https://azure.microsoft.com/en-us/services/functions/

Because I can create a serverless API using Azure Functions too, right? https://docs.microsoft.com/en-us/azure/azure-functions/funct...


Hey, thanks!

This is the most common question! Other FaaS solutions require your code or your workflow to be structured in some way to fit in with their system, not to mention requiring you to use their tools.

Coherence integrates with the programming language to make things a lot simpler for the developer. Just pass a func and it gets run. Structure your code however you like and use your favorite tools!

EDIT: One example: it's going to be a lot easier to run your web crawlers on Coherence.
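
A rough idea of what that could look like (again an illustrative sketch with placeholder names, not the real API):

    # Rough sketch of fanning a crawl out across jobs.
    # `coherence.remote` is a placeholder decorator, not the actual API.
    import coherence  # hypothetical client library
    import requests

    @coherence.remote(memory_mb=128)
    def fetch(url):
        return requests.get(url, timeout=10).text

    pages = [fetch(u) for u in ["https://example.com/a", "https://example.com/b"]]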

EDIT 2: Yes, I worked on .NET Core and have been a Python enthusiast for over 9 years. This is why they are the first supported languages.


Congrats on the launch!

I'm curious what your cold-start times are like (knowing that build sizes and languages affect that too). Also, any plans to support Go?


Thanks!

Our cold-start times have a way to go; they're around 1 minute on the first invocation of a given set of binaries and less than 20 seconds on subsequent invocations.

We've done a lot more work on optimizing the subsequent-invocation time, including running a forked version of Docker (it reduces docker save time by 100x or more).

This is a primary scenario for us though and there's a lot of room for further optimization. We'll be posting figures and progress regularly post-launch.

EDIT:

I missed some of your questions: we only need to upload your project's binaries (not package manager binaries), so languages with larger binary sizes will have longer cold-start times. Overall, though, this comes down to how the project is factored.

We should be able to support Go since its binaries are statically linked, but it has less portability than other managed languages, so we've targeted those first as lower-hanging fruit.


Awesome, thanks for the detailed reply!


There was once PiCloud, which did exactly this, just for Python. I'd always dreamed of a .NET port. Well done!



