Show HN: RealtimeApp – Deploy a realtime app using serverless components (github.com)
98 points by ac360 9 months ago | 46 comments

Can we stop using YAML for configuration and instead use a real programming language with possibly a type interface and a compiler? So much time wasted looking up possible configurations and worse, learning that some configuration will conflict with another.

Done! This project can be deployed with YAML or programmatically w/ Javascript.

Here is an example of a full Chat Application that demonstrates how you can provision and extend this Serverless Realtime Application Component programmatically, very much like a React Component.
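To make the React analogy concrete, here is a minimal sketch of what "provisioning programmatically, like a React Component" can look like. This is not the actual Serverless Components API; the `Component`, `RealtimeApp`, and `deploy` names are illustrative, and the stub just echoes resolved configuration instead of calling a cloud provider:

```javascript
// Illustrative sketch only: the real Serverless Components API may differ.
// A parent component "renders" child components by deploying them and
// wiring their outputs together, much like composing React components.

class Component {
  constructor(name) { this.name = name; }
  async deploy(inputs = {}) {
    // A real component would provision cloud resources here;
    // this stub just returns the resolved configuration.
    return { name: this.name, ...inputs };
  }
}

class RealtimeApp extends Component {
  async deploy(inputs = {}) {
    const socket = await new Component('socket').deploy({
      code: inputs.backend || './backend',
    });
    const website = await new Component('website').deploy({
      code: inputs.frontend || './frontend',
      // The child's output feeds the next child's input:
      env: { socketUrl: `wss://example.execute-api.example.com/${socket.name}` },
    });
    return { socket, website };
  }
}
```

The point of the pattern is that higher-level components (a "RealtimeApp") compose lower-level ones (a socket backend, a website) and pass outputs downstream, all in plain JavaScript rather than YAML.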


The purpose of CloudFormation is not to tell AWS how to do something; it's to tell it the end state. How do you propose using a programming language where you can just change a resource and let the system figure out current state -> desired state, deciding what needs to be created, updated, deleted, or replaced?

But there are CloudFormation Linters.


And there is also the CF editor for Visual Studio.


CloudFormation YAML being terrible is well known, to the point that AWS did precisely what that person suggested.


They are in the process of releasing a kit to define your infrastructure programmatically, and let them deal with the mess of CloudFormation as a compiled IR they send to their backend when you deploy.

That’s not what CDK does:

You’re still defining your resources declaratively - just with type checking - at least if you’re using a statically typed language. If not, you’re still just getting runtime errors when you run your program. It might catch some things before you run your template - but so will a linter.

Neither will catch issues like using a subnet that doesn’t exist or using an AMI that doesn’t exist.

> Neither will catch issues like using a subnet that doesn’t exist [...]

The CDK does: you’re either explicitly bypassing the safety checks, are explicitly importing a value (which necessarily is validated at runtime), or have to provide a subnet definition to instantiate the relevant object.

You then use these programming language constructs to implement higher level ideas like applications, fully in a programming language.

Serverless Components keep track of all the resources they create/update, and their properties, locally in the .serverless directory. They also keep this state synced with the actual state of the resources on the provider (in case you've changed them in the console, for example).

So if you change the configuration, the component will be able to figure things out.
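The core of that "figuring things out" is a diff between saved state and desired configuration. Here is a hedged sketch (function and field names are illustrative, not the project's actual internals) of how a provisioning tool can turn that diff into a create/update/remove plan:

```javascript
// Illustrative sketch: diff the previously saved state (e.g. a JSON
// file under .serverless/) against the desired configuration to decide
// what to create, update, or remove. Real tools also track dependencies
// and replacement semantics; this shows only the basic idea.

function planChanges(previousState, desiredConfig) {
  const plan = { create: [], update: [], remove: [] };

  for (const [id, desired] of Object.entries(desiredConfig)) {
    const prior = previousState[id];
    if (!prior) {
      plan.create.push(id);            // new resource in config
    } else if (JSON.stringify(prior) !== JSON.stringify(desired)) {
      plan.update.push(id);            // resource exists but changed
    }
  }

  for (const id of Object.keys(previousState)) {
    if (!(id in desiredConfig)) plan.remove.push(id);  // dropped from config
  }

  return plan;
}
```

For example, renaming a DynamoDB table in the config would show up as one `create` and one `remove`, while bumping its read capacity would be an `update`.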

Does Serverless handle all of the AWS resource types that you may need for your lambda?

You can write your own component that handles whatever service you need and share it with the community. The more components that are built, the easier it'll be to write higher level components that use those.

As of today, these are the available components:


I’ll be the first to admit that most of the missing resources would be useless for lambda. There are a few missing ones that might matter.


@scarface74 Let us know what you need. We have about a dozen more coming out within the next week.

@scarface74 We have three projects. Here's how they may be helpful to you:

1) Serverless Framework

The Serverless Framework enables you to provision serverless applications in a Function + Events pattern. Using this, you can accomplish the patterns you're describing, and hundreds of thousands of people are doing this already with the Framework.


2) Serverless Framework Enterprise

SFE works with the Serverless Framework to give you more than development and deployment convenience. SFE focuses on other phases of the serverless application lifecycle, like monitoring, alerting, security, collaboration, secrets and much more. It's a powerful solution for teams and orgs investing more in serverless application development.


3) Serverless Components

This is a new take on serverless application development. We've learned the serverless community and teams are looking for ways to deploy and share composable architectural pieces (features, use-cases), more than infra. So we're building a new type of provisioning system to enable this. The Realtime Application Component is an example of this.


Most of the lambda related things I do are backend ETL and message processing.

Is Serverless just for API use cases, or does it support other event triggers like SNS and SQS? One of the patterns we use is:

S3 Event -> SNS -> SQS -> Lambda, with all of the related permissions and subscriptions.
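For what it's worth, the Serverless Framework does support non-HTTP triggers like SQS directly. A sketch of how the final SQS -> Lambda leg of such a chain might look in serverless.yml (function names and the ARN are illustrative):

```yaml
# Sketch only; names and ARN are placeholders.
functions:
  processMessages:
    handler: handler.process
    events:
      - sqs:
          arn: arn:aws:sqs:us-east-1:123456789012:etl-queue
          batchSize: 10
```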

But then again, isn't the whole purpose of using Serverless instead of SAM to be cloud-vendor neutral? Once you add AWS-specific resources, doesn't that go against the whole idea of using Serverless?

I've heard a good argument against using a programming language for configuration: it can lead to incredibly custom setups that are crazy difficult to upgrade, or for new developers to understand.

This could be used programmatically as well via Serverless Components. However, if your configuration is simple, that's what YAML is made for.

Here is a real-world example built using this project. It's a full Chat Application:


You can deploy this example. It contains a Create React App front-end and it uses DynamoDB on the back-end to keep track of who is connected to the Chat Application.
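The "who is connected" bookkeeping amounts to maintaining a set of live connection IDs. A hedged sketch of that logic (in the real app this lives in a DynamoDB table keyed by connection ID; here an in-memory Map stands in, and the function names are illustrative):

```javascript
// Illustrative sketch: track live websocket connections so messages
// can be broadcast to everyone currently in the chat. A real backend
// would persist this in DynamoDB, since Lambdas are stateless.

const connections = new Map();

function onConnect(connectionId) {
  connections.set(connectionId, Date.now());   // record when they joined
}

function onDisconnect(connectionId) {
  connections.delete(connectionId);            // drop stale connections
}

function recipients() {
  return [...connections.keys()];              // everyone to broadcast to
}
```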

Amazing. I am so happy to see serverless getting more capabilities and still remaining so clean and simple. Any plans to do anything around mobile push notifications?

Yes, we're looking at providing a great solution for this.

In the interim, check out this Component (which is used by this project) which provides a simple websockets backend and can be used for that use-case - https://github.com/serverless-components/Socket

Two points on homepage design:

- Why does it take ~4 seconds to load (blank) the page? Is it a PWA with no pre-loading / static initializers?

- The scroll-jacking in the 1-2-3 section is broken on Safari

Agreed on both points, it shouldn't take 4 seconds to load a landing page that's only displaying information. I'm also experiencing the same scroll-jacking problems with Chrome.

Kinda disappointing because the Serverless project is very useful, and while the engineering behind the framework is great, the site has way too much going on for how simple it should be.

Having seen a lot of these types of services, there is always a question about scale. In other words, these are great for quick prototyping, but at a certain level you have to move off a service like this onto a more robust infrastructure.

That being said, how hard is it to move the data off this platform and migrate it to another once you've reached scale?

There are many case studies of AWS Lambda's scaling power and its constraints are well documented. The newcomer in the serverless architecture is AWS API Gateway Websockets, and I personally have not performed any tests using this service within a variety of use-cases at significant scale (it's still a new service).

When it comes to migration, it depends on your use-case. A few suggestions which may be helpful:

* Consider your data strategy (much lock-in happens at this level). Most serverless architectures leverage DynamoDB because its design pairs well with stateless compute like AWS Lambda. You can use DynamoDB within all types of architectures.

* Don't use AWS API Gateway specific URLs in your application. Use custom domains. This will enable you to easily swap out your websockets endpoint. Fortunately, there is only 1 endpoint in this architecture, so there is little surface area to refactor.
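The "use custom domains" advice boils down to never baking the provider's URL into client code. A minimal sketch (the function name, env var, and domain are illustrative):

```javascript
// Illustrative sketch: resolve the websocket endpoint from
// configuration rather than hard-coding the API Gateway
// execute-api URL, so the backend can be swapped later without
// touching client code.

function socketEndpoint(env) {
  // Prefer a configured custom domain; fall back to a default.
  return env.SOCKET_URL || 'wss://ws.example.com/chat';
}
```

The client then connects with `new WebSocket(socketEndpoint(process.env))`, and a migration only needs a DNS change plus an updated `SOCKET_URL`.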

* Make sure you understand the cost of these serverless technologies at scale before building with them. But don't forget to factor in the reduced labor costs of using serverless technologies.

> That being said, how hard is it to move the data off this platform and migrate it to another once you've reached scale?

The difficulty really depends on how deeply integrated your application becomes with the underlying cloud provider and its services. It may be a simple application consisting of a few JS Lambdas and DynamoDB tables, or it may utilize half of AWS' offerings. In the case of the latter, may god help you.

To clarify, in case it hasn't been made clear already: This is not a new service, but rather some tools built on AWS. So AWS is the platform, and you get all the benefits of Amazon's scale when using Serverless.

What exactly is the "scale" problem here?

This is a general-purpose platform with not much in the way of built-in functionality; meaning, you're not going to build the next Flappy Bird on this platform.

My question is what is the ceiling? At what point do you need to move off the platform and how easy is the migration?

I find people are pretty gun shy to park their data into a platform that doesn't make it easy to migrate off (Parse comes to mind).

For clarity, the platform here is Amazon Web Services. We're just providing tooling to help you build and manage serverless architectures on AWS easily.

I think it's brave to have "realtime" and "serverless" in the same sentence when it can take minutes to warm up.

"Minutes" is a false statement for many runtimes and use-cases on AWS Lambda, including this one.

I've included some performance info in a comment below.

This is great! Is it possible to setup only the backend and have an iOS app as the frontend?

Absolutely. This project is built on Serverless Components, which are composable architectural pieces based on serverless cloud infrastructure.

One of the child Components powering this project is the Socket Component, which allows anyone to provision a serverless websockets backend simply. You can use it here: https://github.com/serverless-components/Socket

I don't see anything about deadlines here. What kind of "realtime" is this?

It's real-time web, i.e. WebSocket, not to be confused with real-time computing.

Huh, interesting. Looks pretty similar to some of Fanout's stuff. https://github.com/fanout/flychat

Does this method run into the typical lambda cold start problem? If not, how does it get mitigated without say, a regular cron job polling the endpoint?

In a websockets implementation w/ AWS Lambdas and AWS API Gateway maintaining connection state, your AWS Lambdas are invoked whenever someone 1) connects 2) disconnects 3) sends a message.

This project's pattern uses a single AWS Lambda function for all of those events, for the sake of simplicity as well as performance: by receiving more events, the single function is kept warmer than average.
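A hedged sketch of that single-function pattern: API Gateway's websocket route keys (`$connect`, `$disconnect`, `$default`) are real conventions, but the routing function and its `action` values are illustrative, not this project's actual handler:

```javascript
// Illustrative sketch: one Lambda handler routes all three API
// Gateway websocket events, so every connect/disconnect/message
// keeps the same function warm.

function route(event) {
  const key = event.requestContext.routeKey;
  if (key === '$connect')    return { statusCode: 200, action: 'register' };  // save connectionId
  if (key === '$disconnect') return { statusCode: 200, action: 'cleanup' };   // remove connectionId
  return { statusCode: 200, action: 'broadcast' };  // $default: an incoming chat message
}
```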

Further, when the user loads a page, if you establish the connection at that time, you warm up the function in the background by sending the connection event.

Here is full Chat Application example you can deploy to test performance: https://github.com/serverless-components/RealtimeApp/tree/ma...

When deployed to us-east-1 and using it in San Francisco, when the Chat App loads, it establishes the websockets connection immediately via React's componentWillMount(), which will warm the AWS Lambda Function. By the time I send the first chat message, the function is already warm, and it takes ~100ms on average to send and receive messages.

I believe once you're connected, your lambda will stay warm awaiting data. But if there are no active connections, the lambda might go cold.

What about costs? How many requests will occur on the Lambda side with sockets? It's an interesting concept.

Good question. As always, it depends on the use-case. But to be helpful, we've added the relevant pricing pages here: https://github.com/serverless-components/RealtimeApp/blob/ma...

Now I'm going crazy over the sockets after looking at the pricing!!! :)

How can you test this or other serverless apps? Is it possible to run a local env that replicates AWS?

We've been building serverless application dev tools for ~4 years now. In that time, we've seen incredible efforts to emulate the cloud locally. The result is usually the same: it requires tons of effort, the work is often brittle, and it still is not identical to how your app will perform in the real cloud environment (e.g. you deploy and immediately run into an API limit error). Further, the reason developers mostly want local emulation is that deploying to the cloud is simply too darn slow.

Given this experience, our hypothesis is that if we can greatly increase the speed of deployment, developers will be less interested in local emulation.

With Serverless Components, we're now achieving incredible results. The first deployment always takes a bit longer due to the initial creation of resources, some of which are global (e.g. AWS IAM), but all Serverless Components are designed to deploy as fast as possible, and we're aiming for ~5s max deployment time as our key metric. We'll go into detail later about how we've been able to optimize this.

I am a huge admirer as a "liberal arts grad" who grubs around technology like a blind truffle pig. Great use cases to learn from.

Backgrounds don't matter. Software development should be accessible to everyone. Serverless tech will greatly help enable this.

I was wondering this too a few years back when serverless started to gain traction. I remember thinking it sounded like a lot of mock infrastructure. Just did a search and found the following links interesting:

https://medium.freecodecamp.org/the-best-ways-to-test-your-s... https://github.com/lambci/docker-lambda

It seems non-trivial to set up at this point

EDIT: Still reading into it... https://serverless.com/framework/docs/providers/aws/guide/te...

Would love to hear a best-practice case study from someone who has deployed something like this.
