
I used to hate AWS for how expensive its bandwidth and storage were, until I actually started using it last year. I think their new serverless stack is about to put a lot of DevOps engineers out of a job.

You can set up a CI/CD pipeline in about half an hour with Amplify; at my previous company I remember it taking a good 3 weeks to get CircleCI up and running properly.

And then moving a microservice over to it is basically one command and a few options; you mostly just copy the config over from your old Express backend with a few changes, and you're done. It's insane.
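
For the curious, a minimal sketch of what that wrapping step can look like, assuming the community serverless-http package and a placeholder route; your actual Express app and config will differ:

    // handler.ts -- rough sketch; assumes the community serverless-http package.
    import express from "express";
    import serverless from "serverless-http";

    const app = express();
    app.use(express.json());

    // Placeholder route standing in for whatever the old Express backend exposed.
    app.get("/health", (_req, res) => {
      res.json({ ok: true });
    });

    // serverless-http adapts the Express app into a Lambda-compatible handler,
    // so the rest of the code moves over essentially unchanged.
    export const handler = serverless(app);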

Another dev I showed the Lighthouse scores of the React stack I deployed on it even said "this should be illegal". And they're right: it's pretty much automated DevOps, and the whole app now loads in 300ms. If you have server-side rendering in your app, the static content is automatically cached on their CDNs.

And if you want to save a bit of money, you can just use Google Firebase for your authentication and database. GraphQL is surprisingly a breeze too as a middle layer if you want to leave your Java or .NET backend APIs untouched.
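
A rough sketch of that middle-layer idea, assuming Apollo Server and a made-up legacy REST endpoint (the URL, schema, and fields are all hypothetical):

    // graphql-gateway.ts -- illustrative only; the legacy endpoint is made up.
    import { ApolloServer, gql } from "apollo-server";
    import fetch from "node-fetch";

    // Thin GraphQL schema in front of an untouched Java/.NET REST API.
    const typeDefs = gql`
      type Order {
        id: ID!
        total: Float!
      }
      type Query {
        order(id: ID!): Order
      }
    `;

    const resolvers = {
      Query: {
        // The resolver just proxies to the existing backend.
        order: async (_: unknown, { id }: { id: string }) => {
          const res = await fetch(`https://legacy-api.example.com/orders/${id}`);
          return res.json();
        },
      },
    };

    new ApolloServer({ typeDefs, resolvers })
      .listen({ port: 4000 })
      .then(({ url }) => console.log(`GraphQL gateway ready at ${url}`));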

At the end of the day, Node.js is completely insecure by design; your own infrastructure will never be as secure as running it on GCP or AWS. That's why you go serverless and stop messing with security and frontend scalability yourself.

If they solve the cold-start issue of databases on Aurora, they will dominate the market even more completely than they already do.




>You can set up a CI/CD pipeline in about half an hour with Amplify; at my previous company I remember it taking a good 3 weeks to get CircleCI up and running properly.

>And then moving a microservice over to it is basically one command and a few options; you mostly just copy the config over from your old Express backend with a few changes, and you're done. It's insane.

As an engineer at a decent-sized tech company, I'd say this sounds pretty normal, because our infrastructure teams have been providing it (and much more) to service authors via internal APIs/web UIs for much longer than "serverless" has been a buzzword.


Except now you don't need an infrastructure team. That's the whole point of serverless architecture: to be able to scale without having a huge team scale along with you.


You haven’t needed an infrastructure team since PHP shared hosting, and certainly not since Heroku or Elastic Beanstalk, except that people kept wanting greater complexity at lower cost. There is nothing new about “serverless” there.


There is a difference between what we had then and what we have today. The key difference is that the "serverless" term is massively overloaded here; once you dissect it, you'll see it's a mix of multiple serverlessly managed services that we can take advantage of: Kinesis, DynamoDB streams, Kinesis Firehose, SNS, Lambda, CloudWatch, and GraphQL/AppSync. Serverless computing has come a long way.
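
To make that a bit more concrete, here is a minimal sketch of the glue those managed pieces need, assuming a TypeScript Lambda fed by a DynamoDB stream (types from @types/aws-lambda; the fan-out target is left hypothetical):

    // stream-handler.ts -- illustrative sketch; downstream fan-out is just a log here.
    import { DynamoDBStreamHandler } from "aws-lambda";

    // Lambda invoked by a DynamoDB stream; each record is an INSERT/MODIFY/REMOVE event.
    export const handler: DynamoDBStreamHandler = async (event) => {
      for (const record of event.Records) {
        if (record.eventName === "INSERT") {
          const newImage = record.dynamodb?.NewImage;
          // In a real pipeline this is where you'd push to Firehose, SNS, etc.
          console.log("New item written:", JSON.stringify(newImage));
        }
      }
    };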


Agreed that serverless is a buzzword here, just as data science and machine learning have now become. It is about taming greater complexity at lower cost, and at increasingly granular levels.


Exactly. We were able to support millions of new devices without infrastructure involvement.


Can you elaborate on Amplify? Is it really that good? It didn't take me terribly long to set up GitLab CI with ECS and later Fargate; both of those feel more appropriate for web-serving apps.

I can see it for a full-JS app, but I still can't find a good fit for a JS-based backend. I've recently been exploring alternatives to Django for API backends and seriously considering a JS-based framework, but I have yet to find one that is all three of good, simple, and in TypeScript. TypeORM looks excellent for the ORM side, but there's still the matter of writing the APIs; everything I've looked at (Express, Koa…) is atrociously repetitive compared to Django REST Framework. NestJS is the best I've found, and it's still miles away.
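
For comparison, a minimal sketch of what one endpoint looks like in NestJS with TypeORM (the entity and names are made up; still more ceremony than a DRF ViewSet):

    // users.controller.ts -- rough sketch assuming NestJS + TypeORM; names are made up.
    import { Controller, Get, Param } from "@nestjs/common";
    import { InjectRepository } from "@nestjs/typeorm";
    import { Repository, Entity, PrimaryGeneratedColumn, Column } from "typeorm";

    @Entity()
    export class User {
      @PrimaryGeneratedColumn("uuid")
      id: string;

      @Column()
      email: string;
    }

    @Controller("users")
    export class UsersController {
      constructor(
        @InjectRepository(User)
        private readonly users: Repository<User>,
      ) {}

      // GET /users/:id -- roughly what a DRF RetrieveAPIView gives you for free.
      @Get(":id")
      findOne(@Param("id") id: string): Promise<User | null> {
        return this.users.findOneBy({ id });
      }
    }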


SSR is going to do wonders for page load times on the internet as it finally gets popular via React/Vue. I hope it's the future for all of these heavy-weight user-facing JS apps.


SSR is the future? What has PHP been doing for 15+ years?


I'm talking about Next.js/Nuxt.js-style JS frontends replacing exactly that, plus JS-heavy frontends like Angular and SPA React apps, which were the last decade's modus operandi.

The way SSR hooks React/Vue into these JS apps, "hydrating" them after loading prerendered, component-based views to make them interactive without losing any performance compared to static HTML, is unique and extremely powerful; most people don't understand it until they've done it. It really is the future of frontend development.

SSR combined with async-loaded, chunked bundles of components is far more than prerendering with some server-side templating library and full HTTP requests in between. You get all the power of a full-fledged SPA with none of the performance or SEO downsides, plus automatic offline support via service worker caching. It's great for the web's future.

https://ssr.vuejs.org/
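
In Next.js terms, a tiny sketch of the pattern (the API URL is hypothetical): the server renders the HTML for the first paint, then React hydrates the same component in the browser so the button becomes interactive without re-rendering the whole page:

    // pages/product.tsx -- illustrative Next.js sketch; the data source is made up.
    import { GetServerSideProps } from "next";

    type Props = { name: string; price: number };

    // Rendered to HTML on the server, then hydrated on the client so the
    // onClick below works without re-fetching or rebuilding the page.
    export default function Product({ name, price }: Props) {
      return (
        <main>
          <h1>{name}</h1>
          <p>${price}</p>
          <button onClick={() => alert("added to cart")}>Add to cart</button>
        </main>
      );
    }

    export const getServerSideProps: GetServerSideProps<Props> = async () => {
      const res = await fetch("https://api.example.com/products/1");
      const product = await res.json();
      return { props: { name: product.name, price: product.price } };
    };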


Yo dawg, I heard you like job security, so we put a program in your program, now you can render while you render.

More seriously, I do understand the difference, but disagree with the whole approach in 95% of cases


Even the Haskell people are adopting SSR'd JS-heavy frontends (Miso, PureScript, etc.) for their web apps. That's when you know it's mainstream. Good luck with PHP!


SSR has been the default for the web since it started.


Why do you say that Node.js is completely insecure by design, and how do GCP or AWS mitigate those security concerns?


Because you will inevitably have hundreds or thousands of dependencies, controlled by at least as many people, any one of whom could inject code to backdoor your server.

A supply chain attack will sooner or later be the cause of a major incident.


It's the same for any other language: Java, C++, .NET, PHP, and even Erlang. None of them force you to use governed central repositories. And that's a good thing.


The scale is on a different level, however. Your average Node project will have 10-100x as many dependencies as a project in another language; too many to conceivably check. Also, given how dynamic the language is, I think it is way easier to hide something.


The V8 runtime itself is pretty secure. However, every npm package has total access to your filesystem and network I/O. This is by design; the author of Node himself has apologized for it and admitted that nothing can be done now, because fixing it would basically break the internet. This means any package (e.g. ESLint), dependency, or anything with code from just one malicious contributor can grab all your API keys, SSH keys (if you still use those), environment variables, and your users' crypto wallets (this has actually happened a few times now, at scale).
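
To illustrate with a deliberately toy sketch (not any real package, and the "exfiltration" is left as a log): module top-level code or a postinstall script runs with your full user permissions, so this is all it takes:

    // Toy sketch of the risk -- code any transitive dependency could run on import.
    import { readFileSync } from "fs";
    import { homedir } from "os";

    // Node imposes no sandbox, so a dependency can read arbitrary files...
    const sshKey = readFileSync(`${homedir()}/.ssh/id_rsa`, "utf8");
    // ...and every environment variable the process can see (API keys, tokens)...
    const secrets = { sshKey, env: process.env };
    // ...and nothing stops it from POSTing all of this to any server it likes.
    console.log("what a malicious package could exfiltrate:", Object.keys(secrets));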

With something like AWS Amplify you just go to their console and put your environment variables there, instead of keeping them on your own machine.

Now you don't have to worry about using sketchy Docker images, or about your junior devs taking their work laptops to a malware-infested gaming cafe while still running their localhost server.

AWS and GCP can afford way better internal security and regular pentesting of their containers and infrastructure, so wrapping those protective layers around Node, Express, etc. is now their problem. You just push your code to the production or testing branch and they handle all the provisioning, builds, and deployments in 3-5 minutes.


The npm dependency issue is a serious concern, but I'm not convinced that GCP or AWS mitigate it. If the problem is unaudited code that could potentially be compromised, GCP and AWS will run that compromised code without protest.



