Go Is a Great Fit for Lambda (brandur.org)
187 points by craigkerstiens on Jan 17, 2018 | hide | past | favorite | 113 comments



I don't understand why AWS released Go support instead of binary support, and I don't understand why they chose to rely on Go's net/rpc.

The Go 1.x runtime will happily execute any binary: Go, Rust, C++. Events are sent over a TCP stream. That stream uses Go's net/rpc, which encodes objects using Go's special [gobs] binary format.

If they had chosen a more common exchange format, it would be much easier to support new runtimes. I haven't found any implementation of gobs outside of Go. If you wrote one for another language, you could use that language to write native Lambdas.

[gobs]: https://golang.org/pkg/encoding/gob/


Part of the "magic" of Lambda is making it easy to use from your programming language of choice. If they made a lower-level model, like, for example, "we start up your binary, and it needs to bind to port X, then we'll send HTTP requests to it and it should respond appropriately using the following JSON specification," it would take a lot more work to set up your Lambda function and get it working correctly than it does now. It's certainly possible for them to do it, but I don't think it fits in with their style.

Worst case, you can implement this interface yourself now easier than ever. Since golang is statically linked and has super fast startup times, you can build an interface where Lambda launches your golang process, and that process binds to a port and spawns a sub-process that implements the actual Lambda function. The golang process would implement the "socket" based interface for you.

People have been doing the above already, but all the existing runtimes, including Node.js, Java, Python, and .NET, have non-trivial startup times and memory usage compared to a golang binary, which should be able to start up in 10s of milliseconds and use less than 1 MB of memory while acting as a proxy.
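
One rough sketch of that shim, assuming you lean on the official aws-lambda-go library for the socket side and launch a fresh child process per invocation; the ./handler path is a made-up example, and a long-lived child would avoid the per-event startup cost:

    // shim.go: a Go parent process that speaks Lambda's protocol via
    // aws-lambda-go and delegates each event to a child process that can be
    // written in any language.
    package main

    import (
        "bytes"
        "context"
        "encoding/json"
        "os/exec"

        "github.com/aws/aws-lambda-go/lambda"
    )

    // forward pipes the raw JSON event to ./handler's stdin and returns
    // whatever the child prints to stdout as the function result.
    func forward(ctx context.Context, event json.RawMessage) (json.RawMessage, error) {
        cmd := exec.CommandContext(ctx, "./handler")
        cmd.Stdin = bytes.NewReader(event)
        out, err := cmd.Output()
        if err != nil {
            return nil, err
        }
        return json.RawMessage(out), nil
    }

    func main() {
        lambda.Start(forward) // lambda.Start handles the socket/RPC side
    }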


They've already implemented a lower-level protocol: "we fork/exec your binary, it needs to bind to port _LAMBDA_SERVER_PORT, and it must follow Go's net/rpc protocol". AWS provided a library, aws-lambda-go, that every Go implementer must use.

Anyone can implement their own 'aws-lambda-go' library in whatever language they want, but the choice to use net/rpc and its gobs format makes it more difficult than it could be. JSON is implemented in practically every language, and there are binary formats with libraries in most languages if you want to be efficient.

Choosing a protocol other than net/rpc would (1) make it easier for AWS to support more runtimes, (2) make it easier for the community to support custom runtimes, and (3) be just as easy to integrate with as aws-lambda-go is.
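
For reference, the raw contract is small enough to sketch without the official library. The _LAMBDA_SERVER_PORT variable, net/rpc, and gob are what's described above; the Function.Invoke method name and the []byte payloads below are illustrative assumptions (the real definitions live in aws-lambda-go's messages package):

    // A hand-rolled runtime: bind to _LAMBDA_SERVER_PORT and serve
    // gob-encoded net/rpc. Method name and payload shapes are illustrative.
    package main

    import (
        "net"
        "net/rpc"
        "os"
    )

    type Function struct{}

    // Invoke stands in for the RPC method Lambda calls with an event payload.
    func (f *Function) Invoke(payload []byte, response *[]byte) error {
        *response = []byte(`"Hello from a hand-rolled runtime"`)
        return nil
    }

    func main() {
        rpc.Register(new(Function))
        l, err := net.Listen("tcp", "localhost:"+os.Getenv("_LAMBDA_SERVER_PORT"))
        if err != nil {
            panic(err)
        }
        rpc.Accept(l) // each connection speaks Go's gob-based RPC wire format
    }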


"Parse JSON from a request that arrived over HTTP" is not a major task in any language that's seen serious use since 2002. And if they had done this they would have support for a dozen better languages literally overnight.

The industry has way too many frameworks that should have been protocols.


I’m curious which dozen languages you find better than the ones supported by Lambda.


The point is that no one would have to be debating which languages should be supported by Lambda if they made their environment language-agnostic based on a protocol rather than a bunch of individual libraries.


Common Lisp. Until just now, Go. I hear great things about Java 9. TCL. Rebol. Smalltalk.

That's half a dozen (-ish); I'll let someone else complete it.


TCL, Smalltalk, Rebol??? So they could support all 5 users of these languages? Lol


.NET Core 2.0, until _very_ recently.


> If they made a lower-level model, like, for example, "we start up your binary, and it needs to bind to port X, then we'll send HTTP requests to it and it should respond appropriately using the following JSON specification," it would take a lot more work to set up your Lambda function and get it working correctly than it does now.

Man, the JSON specification must be annoying/complicated - because the first part of what you said, binding to port X and responding to HTTP, is like.. 1 line in Go. So the first part of what you said is extremely trivial, so the difficulty must lie in the second part?

I wonder what makes the JSON specification so difficult? From your wording, I assume the HTTP response is being sent to AWS, not the actual HTTP requester. Regardless, it must be damn clumsy to justify calling a simple HTTP response too complicated for developers.

Clearly I've never used Lambda though.
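
For what it's worth, here's a sketch of that hypothetical "bind to a port, answer HTTP with JSON" contract in Go; the PORT variable and response shape are made up for illustration:

    package main

    import (
        "encoding/json"
        "net/http"
        "os"
    )

    func main() {
        http.HandleFunc("/invoke", func(w http.ResponseWriter, r *http.Request) {
            var event map[string]interface{}
            json.NewDecoder(r.Body).Decode(&event) // parse the incoming event JSON
            json.NewEncoder(w).Encode(map[string]string{"message": "hello"})
        })
        // PORT stands in for whatever port the platform would assign.
        http.ListenAndServe(":"+os.Getenv("PORT"), nil)
    }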


I'm only saying that it would take more effort and place more burden on a developer to figure out how to do all that in their programming language rather than just implement a function. I'm surprised this is a contested point.

Tell me in Java, without looking up any online resource, how to set up a FastCGI web application. Can you? Probably not. Now tell me how to write a static method in Java. If you have at least the equivalent Java experience of a freshman undergrad, you can do the latter but not the former.

If you think Lambda should use FastCGI as its common interface, then by all means create a project on github that deploys a Go-based FastCGI bridge to Lambda and makes it easy for you to bundle up your FastCGI application into a Lambda function. Problem solved! Now you just have to get people to use it.


Implementing a Go lambda is already more complicated than writing a function. It requires binding to a TCP port and receiving gob-encoded values. AWS released a library that abstracts that into:

    // main.go                                                                  
    package main                                                            

    import (                                                                
      "github.com/aws/aws-lambda-go/lambda"                                 
    )                                                                       

    func hello() (string, error) {                                          
      return "Hello ƛ!", nil                                                
    }                                                                       

    func main() {                                                           
      // Make the handler available for Remote Procedure Call by AWS Lambda 
      lambda.Start(hello)                                                   
    }

lambda.Start is the abstraction wrapper.
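
lambda.Start also accepts richer signatures, e.g. a handler that takes a context and a typed event which the library JSON-decodes for you. The MyEvent shape below is purely an illustration, not something Lambda defines:

    // Variant of the example above with a context and a JSON-decoded event.
    package main

    import (
        "context"
        "fmt"

        "github.com/aws/aws-lambda-go/lambda"
    )

    // MyEvent is a hypothetical payload; define it to match the JSON you
    // invoke the function with.
    type MyEvent struct {
        Name string `json:"name"`
    }

    func hello(ctx context.Context, event MyEvent) (string, error) {
        return fmt.Sprintf("Hello %s ƛ!", event.Name), nil
    }

    func main() {
        lambda.Start(hello)
    }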


Why is there a crossbar in the ƛ? Shouldn't it rather be λ?


I think they have to divide by 2 pi to properly normalize it for a distributed filesystem, analogous to quantum's h-bar.

[Edit: this is a joke. I think the person to whom I'm replying has a valid point.]


I'm not sure, I copied that from the aws/aws-lambda-go repo https://github.com/aws/aws-lambda-go


> I'm only saying that it would take more effort and place more burden on a developer to figure out how to do all that in their programming language rather than just implement a function

But the average developer isn't going to do that, and won't need to. Within a week there would be more than one framework up on github to do the binding in their favourite language for them. Look at how many web frameworks there are.


> there would be more than one framework up on github

That's annoying, and something I hate about Javascript. There's no standard way to do it, so everyone has a different way of doing it.

I prefer that Lambda has sane, solid standard ways of doing things so if I have to move between projects I know what the heck is going on, rather than having to figure out which of the many homegrown ways of doing things people decided to use.


I can't see that being an issue here; libraries for unofficial languages/runtimes would likely follow the same API as the official ones. There's precedent in the form of unofficial AWS SDKs...


I wasn't actually thinking about JavaScript at all. It applies to all languages that I have looked at.


I'm pretty sure anyone who programs Java for a living can start up Jetty via their favourite framework. It isn't as if lambdas are trivial to package either.


I don't understand why it can't all be done over standard in / standard out? Why bother with ports at all?


Yup. It used to be called CGI ... In the '90s.


Because then you'd need an (inefficient) fork/exec'ing webserver, or some convoluted (error-prone) way of multiplexing multiple connections over one stream. The OS file descriptor model already handles this for you with listen() and accept().

CGI only works if you’re willing to take the hit of launching an entire process (good luck using any interpreted language, and even go needs some runtime initialization at startup) for every single web request.


Lambda/FaaS is more like FastCGI

https://en.wikipedia.org/wiki/FastCGI


Replying way late here, but FastCGI doesn't work by printing to stdout, it talks to a domain socket or an open TCP port. The grandparent post was asking "why can't lambda just use stdout?"

The magic of TCP or domain sockets is that calling accept() gives you a unique file descriptor that corresponds with a single client.

Just using stdout doesn't work that way, unless your entire process was launched to handle one request, hence CGI being very inefficient.
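
A tiny Go sketch of that accept() model: one long-lived listener, a fresh descriptor per client, no new process per request:

    package main

    import (
        "io"
        "log"
        "net"
    )

    func main() {
        l, err := net.Listen("tcp", ":8080")
        if err != nil {
            log.Fatal(err)
        }
        for {
            conn, err := l.Accept() // each accepted client gets its own descriptor
            if err != nil {
                continue
            }
            go func(c net.Conn) { // handled concurrently by the same process
                defer c.Close()
                io.WriteString(c, "hello\n")
            }(conn)
        }
    }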


...or they could support FastCGI or CGI which would then have the "low barrier to entry" and "broad support" already handled..


I think there's a lot of secret sauce involved in order to make Lambdas perform well. They are definitely not just executing a binary every time a request comes in. You might give them a binary but that's not how it's executed.


that's the difference between CGI and FastCGI, though.


I agree. FastCGI or SCGI seem like they would be great solutions, instead of custom rpc formats.


Because it would make moving away from lambda much easier.


Wait, isn't one of the selling points of Go that you can just deploy single statically compiled binaries?


I'm tired of Golang proponents suggesting that benefits every compiled, native language gets are unique to go in comparison to non-compiled or VM-targeted languages.

I wish lambda would do more to support more executable contexts instead of blessing individual languages.


I have to imagine there is a business reason for the slow adoption of languages/runtimes in Lambda. They're still only supporting Node v6.10 as well, which is far outdated. They should at least be keeping up with LTS for security's sake.

Before the Go support was released, it felt like they were simply creating arbitrary friction to keep people on their platform. By expecting your code to export a handler with a specific signature, rather than listen on a port, you end up coupling yourself more to the AWS platform. It also causes a ton of friction for adoption, since testing is more difficult than it should be.

Now with the release of Go, it feels even more that they're putting up these arbitrary gates without a rhyme or reason. How come Go can listen on a port, but other runtimes need to export a handler function? Why did they choose to rely on a Go-specific protocol instead of a common one?

It's frustrating, because I really want to use it (from an ops perspective), but I think my team is going to ditch the proof of concepts we've done because of the friction we've experienced. I'm sure there are valid engineering reasons for these choices, I just don't see them.


If your career consists of Ruby->Node.js->Go then static binaries are radical new technology.


> If your career consists of Ruby->Node.js->Go then static binaries are radical new technology.

Rather than offer what amounts to nothing more than an ad hominem, perhaps you’d care to explain why whatever you’re using is superior.

Your preference may be containers or bytecode bundles, but you’d be hard pressed to justify a major advantage and add a lot of complexity overhead for the privilege of using them.


> Rather than offer what amounts to nothing more than an ad hominem

Is this ad hominem? Why?

We can both agree to this and agree that Go's compile speed makes very large code bases possible (and that its cross-compile story is very compelling when your entire world revolves around writing on OSX machines and running in a Linux Docker container).

Lots of languages can deliver big static binaries. Hell, once upon a time this was considered a disadvantage and languages like Common Lisp and Smalltalk were totally raked over the coals for having an image & executable model!

> but you’d be hard pressed to justify a major advantage and add a lot of complexity overhead for the privilege of using them.

Why then is Go given this?

Because I could name 3 languages equally deserving right off the bat and one of them would be way less work: Kotlin, Swift and Haskell.


I disagree. Go's binaries are a 'radical new concept' in making them both the default and easy to use. Sure you can do it in C, sometimes, with the right libraries. But many won't even work statically.


You can do it in any programming language that has compilers to native code; it was the default linking model in OSes until the mid-'90s and still is in the embedded space.

The special case of glibc is the exception, not the rule, among C compilers.
The special case of glibc, is the exception of majority of C compilers.


I'm well aware. But keep in mind that probably the majority of us were not writing software in the early 90s (no offense intended). We grew up in a world with Java, Python, and Perl, none of which were even capable of doing so. Sure, C and C++ were around too, but of course not as 'hot'; neither encouraged static linking and, as mentioned, they sometimes weren't even capable of it, depending on the library.


Static binaries are only popular right now because people who don't need static cross-compiled binaries are convinced they need a computer that is exactly like the computer people who make websites need.

It's a very odd state of affairs. In an age of Docker, Nix, uberjars, and ever more sophisticated user- and system-space filesystems, it's really questionable what value they bring.

People say, "Simplicity" then they bling out their docker containers with multiple layers of processes and initialization anyways.

Go's tooling would have been great in a pre-Docker era. Now... I don't see the appeal.


Well, keep in mind that Go also predates Docker by a bit. Docker really did solve the whole dependency issue of dynamic languages/runtimes, but Go still has a possible advantage here - it can be built in a much smaller container..even SCRATCH. A minor convenience at best, but something.


Exactly. Given that they are clearly already doing this with Go (as it is literally native code that can do anything in that process), they should just release an official SDK for language vendors to integrate that handles the core binding of requests into the process, and then suddenly everything can support Lambda. (I hope and expect someone just does this using Go as a shim, but that is of course weirdly unofficial.)


Which is exactly what others have done before with Node when official Go support wasn't available: Use Node as a shim to launch a Go process in Lambda.


I haven't seen a language that didn't have a cycle of "everyone should use it for X" articles. Golang is just having its moment.


Let alone being amazed by compilation speeds that Turbo Pascal was already doing in MS-DOS, with a more feature rich language.


I think the go team does deserve credit for making compilation speed a priority when other languages of the same generation have tended not to.


Sure, but let's not pretend that is something they came up with, unique to the Go toolchain.


Lots of languages have fantastic compile speed, they've just decided to have their compilers do a lot more work.

You can argue that fast compilation IS a feature and that new language options should be evaluated, but it's unfair to say it's "not a priority" even for languages like Scala.


> they've just decided to have their compilers do a lot more work

This. Compiling C++ would be a lot faster if you made the developer manually expand all the templates into tedious boilerplate before starting the compiler, but it's not a good tradeoff to make.


Why are we forgetting the amount of money spent developing fast JITs for Javascript here?


... with a richer type system, no less!!


I'm also finding Lambda is a very nice runtime for using the Golang toolchain.

There's a packaging challenge, but Go is also easier to install than most languages (unzip the linux build and set GOROOT correctly).

Then running `golint && go vet && go build && go test` inside Lambda is fast and cheap compared to other options. No more time spent in a queue, downloading Docker images, initializing containers, etc.

I made a GitHub app if anyone wants to quickly try out building their Golang project on every build in a Lambda function.

https://www.mixable.net/docs/bios/


6 years is "unheard of" for language stability??

I struggle to think of a language where that hasn't been true..


There's a pretty similar discussion happening downthread:

https://news.ycombinator.com/item?id=16169803

The short version is that I don't think it's fair to compare Go's stability to that of a "barebones" language with an infinitesimal stdlib (e.g., C), because that's where most of the change is likely to happen. Go's already proven to be far more stable than most "batteries included" languages: think Python 2 vs. 3, Ruby's fairly considerable churn over the years, PHP 4 to 5, etc.

Obviously my designation of Go here is subjective, but it seems to me that it compares favorably to any other platform that comes to mind.


I think the point people are trying to make is that even your supposed unstable examples such as python don't do big, breaking changes on small time scales. python was originally created in 1991, python 2 came out in 2000, and python 3 came out in 2008. Even if we assume that python was only stable after the release of python3, it has been pretty stable for about a decade whereas Golang has NOT EVEN EXISTED for that long.


Eh. Close enough. Go is 9yrs old. I don't think the difference in 9 and 10 years deserves all caps.


Python 3 is older than Go, and PHP 5 is even older than either, at least compared to Go version 1. C++ is fully backwards compatible, as is Java. I don't see this point. And, we'd first need to see if a hypothetical Go 2 will be backwards compatible, anyways.


C++ is only backwards compatible within the standardised versions. Before then, the library story was particularly unpleasant. Try building something written before C++98 on a modern compiler...

Of course, C++98 was 20 years ago so that's probably okay!


Good point - there we have our margin for exceptional language stability: ~20 years, set by both Java and C++. I guess Go v1 still needs a few more years to even be half that old... :-)


I would say that Java is the only similar platform in terms of API stability. Even Java had some breaking changes, but they were relatively minor and relatively early (e.g. assert). Before .NET core, C# was a bit of a mess, I remember switching an app to PCL and it broke damn near everything. That was only four years ago.


> I struggle to think of a language where that hasn't been true..

If you're ok extending that to "language + stdlib": Node.


I wonder what they'd think of ANSI C


Can someone provide good examples of why you would use a Lambda to respond to traffic instead of a regular VPS? Is it just to save transfer costs inside the AWS network? Go can serve hundreds of thousands of connections a second on a $20/m DigitalOcean/Linode/Scaleway/Vultr server.


From my perspective (working at a large SaaS provider built on AWS), we don't use Lambda for serving client requests. Rather, we use it for internal glue between AWS services.

If what you actually want to do is relatively simple, eg, read a JSON message from an AWS service, like a Cloudwatch Alert, an autoscaling action, or an API call logged in Cloudtrail, you can easily wire those events up to Lambda functions for which you only need to code the bit that parses the JSON and does something in response. So then if the event happens once a day, once a month, or 50 times a second, you don't have to actually worry about calculating scale, except for cost purposes. But if it's a rare event, then Lambda comes in way cheaper than a $20/month server you have to maintain somehow.

Of course, you can build apps directly on Lambda, but there's a lot of tools you'll miss when you get around to going full scale production. But for simple glue to wire up actions to events, it's actually a pretty slick service.


100% agree. I know some people are building full on apps with lambda, but I'm just not a fan of that use case. Lambda is PERFECT for devops groups.

Need to keep a security group in sync with the IPs some 3rd party service publishes? cloudwatch cron + lambda function.

Need to do some weird scattering in response to s3 object created events? lambda function.

want to roll some of your own automation to manage and respond to health events with services you host? some event stream + stepfunctions + lambda functions.


It avoids ops work for bursty workloads: if your app sees consistent traffic it’s probably overkill, but if you have fluctuations where the work is, say, ranging between 0.5 and 20 servers’ worth you can avoid needing to setup that infrastructure yourself.

I use it for creating image derivatives on a project which doesn’t have dedicated servers for that task. Every S3 upload triggers a conversion task which saves the result to another bucket. The long-term average number of servers is close to zero since the content is released in batches but when a new batch arrives it’s reliably done in 5 minutes no matter how large it is, and I don’t have to manage autoscaling policies, server builds, etc. – just write a couple dozen lines of Python.

For a sufficiently large amount of work the cost tradeoff might go the other way but in my experience there are a LOT of tasks which are below the threshold where the savings in server time is greater than the savings in sysadmin time.


You don't need to screw around with creating boxes, patching them, etc. If you're tiny and only have one or two boxes running one or two services, then there's not much point automating anything.

If you’ve got 100s of services or handle huge load spikes, why spend extra rev cycles on concerns like “do I have enough machines” or “are these things patched?” or even “how do I make sure my service stays up?”


For a pretty traditional CRUD web app or mostly static with a few dynamic parts website something like Lambda is probably not the right answer.

Where I think Lambda is interesting is in areas of high variability.

The sort of canonical example is image resizing. While you certainly can do that on the same machine as your web server, it makes a lot more sense to put the images in an S3 bucket (keeps them off the server, out of versioning, etc.)

You can tie a Lambda to an S3 file-create event. So when the file hits the bucket, it automatically gets resized into all the appropriate sizes needed elsewhere on the site. This setup sidesteps a ton of potential issues, makes your app much more robust, and is pretty easy to get going.
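
A skeleton of that wiring in Go, using the aws-lambda-go events package and leaving the actual resize as a stub:

    package main

    import (
        "context"
        "log"

        "github.com/aws/aws-lambda-go/events"
        "github.com/aws/aws-lambda-go/lambda"
    )

    // handleS3Create runs once per batch of S3 "object created" records.
    func handleS3Create(ctx context.Context, evt events.S3Event) error {
        for _, record := range evt.Records {
            bucket := record.S3.Bucket.Name
            key := record.S3.Object.Key
            log.Printf("resizing s3://%s/%s", bucket, key)
            // A real function would fetch the object, generate the derivative
            // sizes, and write them back to another bucket (left as a stub).
        }
        return nil
    }

    func main() {
        lambda.Start(handleS3Create)
    }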


I use Lambdas for just about everything. The API for my package tracking app is a Lambda function, and I have another function that checks a few subreddits for specific keywords and emails the posts to me. I have another Lambda that checks my cellular data usage every half hour and sends me a text when I'm close to my data limit. The signup form for a Slack community I run is powered by Lambda as well. If I wanted to run each of these in their own isolated server, it'd be at minimum $4/service/month for a t2.nano instance, running me $200/year.

Thanks to Lambda's generous free tier, I've paid literally nothing for these services. I could handle 100x the current load and still pay nothing.


Not sure how tied you are to AWS for these things, but GCP has an f1-micro compute instance in their free forever list of services.


Say you have a pet project that does 500 to 1000 API requests per day. If you stick that into a Lambda, your running costs are close to zero thanks to the AWS free tier.


I wouldn't be so sure. I've got a simple email to http service running on AWS and while I'm sending no more than 50 emails to this service I still get billed cents. If I was thinking about running something like this in production I'd definitely use a cheap VPS.


Are you talking about a project that does 1000 AWS API requests a day?


Not the parent, but I assume he means an application that itself takes in 1000 requests per day.

At the free tier your options include, on a monthly basis:

* a single EC2 t2.nano box with 0.5gb memory and 1vcpu

* 1,000,000 requests (or 40,000 gb-seconds worth of requests) to your lambdas

Assuming you're taking in less than 1m requests monthly the lambdas will handle traffic spikes much more gracefully than the small VM will; as well, since it's not an either-or proposition this also means you still have a free VM you can do something with.


I like that Lambda can scale down to 0 and up to infinity without changing your architecture. However, a $20 droplet is probably a better value as I've spent many hours banging my head against AWS Lambda docs, Serverless framework incantations, alternative state storage, etc.


The Serverless framework is one of the most baffling and over-engineered things I’ve seen in a long time. Instead of making things easy, it's just a jungle of options and weird configs. No idea why it is so popular.


Your post lacks substance so it's hard to respond to (you criticize it as 'over-engineered', which is not really something I know how to respond to).

One obvious concrete benefit of serverless is not managing infrastructure.


You can do similar with e.g. elastic beanstalk. You still need a metric for autoscalers, but overall the scaling works well. Probably worse for high-burst sorts of traffic though.


Elastic Beanstalk requires more devops-type stuff than Lambda.


A use case is taking files from S3 and then doing something with them. It's much easier and more efficient to set up a Lambda function that gets "pinged" by S3 every time a new file is uploaded than it is to poll S3 for new files.


You can execute Lambda on the CDN (CloudFront), so for me that's really the selling point. Otherwise I wouldn't bother.


You might want to compare how Go works on App Engine. You deploy your Go app to App Engine by uploading its source code, and the minor version is specified in app.yaml.

It has taken some time for Google to support new versions of Go on App Engine, though they're promising to get better at it.


Despite their claimed improvements I believe they're still only supporting every other Go version. Fingers crossed for go1.10


Can someone tell me how we are supposed to deal with things like "postgres connection pools" in a serverless world?

Do we need to open a new connection to the DB on every call?


I found this reply from Amazon helpful.

TL;DR don't use a connection pool in Lambda

> Now I see that you mentioned in one of the github posts: "The Lambda does seem to use the pool, but if I make maybe 3 / 4 quick requests to the lambda, new connections are opened up despite there being all of those connections already."

> 3/4 quick requests would mean concurrent invocations, and concurrent invocations will execute in new containers. Containers are not aware of each other and cannot share state information so if an execution occurs in new container, it will open a new connection if there are none in this new container, despite there being existing connections from different containers. This can also cause your RDS instance to have too many open connections, because another container may have all the connections saturated, and if a new container is required to handle the load it may not be able to create the new RDS connection it needs to do it's work.

- https://github.com/sequelize/sequelize/issues/4938#issuecomm...


For most uses, database connection pooling (or, more accurately, keeping a connection per container), is the right practice to follow in Lambda. For many databases, creating a connection is expensive and slow, and amortizing those costs over multiple container uses (see https://aws.amazon.com/blogs/compute/container-reuse-in-lamb...) helps reduce latency and cost. Lambda allows you to configure a per-function concurrency limit, which you can use to make sure that the number of containers doesn't exceed your maximum database connections.


Lambdas are persistent so long as there is activity (ie it's "warm"). You can maintain a connection in a lambda and reuse it for subsequent calls. This isn't exactly the same as a connection pool, as you won't have multiplexing of any sort, but in practice you'll end up with 1 connection per concurrent request, which isn't too terrible, and if you keep the lambda warm there won't be any additional latency from having to connect.
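
The usual Go pattern for this is to open the connection outside the handler so a warm container reuses it across invocations. A minimal sketch, assuming the lib/pq driver, a DATABASE_URL env var, and a hypothetical users table:

    package main

    import (
        "context"
        "database/sql"
        "os"

        "github.com/aws/aws-lambda-go/lambda"
        _ "github.com/lib/pq" // Postgres driver; any database/sql driver works
    )

    // db lives outside the handler, so a warm container reuses the connection.
    var db *sql.DB

    func init() {
        var err error
        db, err = sql.Open("postgres", os.Getenv("DATABASE_URL"))
        if err != nil {
            panic(err)
        }
        db.SetMaxOpenConns(1) // one request at a time per container
    }

    func handler(ctx context.Context) (int, error) {
        var n int
        err := db.QueryRowContext(ctx, "SELECT count(*) FROM users").Scan(&n)
        return n, err
    }

    func main() {
        lambda.Start(handler)
    }

Pairing this with a per-function concurrency limit keeps the total number of warm containers (and therefore open connections) below your database's limit.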


Think of Lambda as one server per request, and pool (or not) accordingly. Each lambda container will only ever be serving one request at a time, but it may be reused to serve subsequent requests sequentially.

In that light, it might make sense to use a pool size of 1, if your application already does pooling. If you've written code to run multiple database queries simultaneously, you may want to adjust accordingly, but only allocate the number of connections you expect to use in that single request.

If you're connection constrained, you might want to look at pgBouncer, whether or not you're using Lambda. It increases your connection usage efficiency to the max irrespective of whether the connections are being used by lambda or a regular server.


You need to put a centralized pooler, such as pgbouncer, in front of your DB and have your lambdas connect to that.


Sounds like this person explains what you're asking for: http://blog.rowanudell.com/database-connections-in-lambda/


We are currently building the infrastructure for prismagraphql.com - a service that layers a GraphQL API over your existing databases. Lambda has some characteristics that would fit very well. We want to offer a service that:

- has a very low base cost

- scales quickly to thousands of req/s

Operating this on lambda has some significant downsides:

- no request multiplexing

- warm-up overhead (lambda itself, our service, connections)

As we are operating at large scale we can invest in building infrastructure that supports this with more traditional AWS primitives.

In broad terms our approach is to:

- operate a t2.micro for each tenant

- use burst capacity to bridge scale-up periods

- switch to c5 instances when a cluster reaches higher load.

This is a lot of operational overhead, but because we operate thousands of clusters we can fully automate this.

As a user of Prisma you can consume your database via a GraphQL API over http. This is a great benefit when developing serverless functions as Prisma essentially moves db connection pooling outside your serverless functions.


Java and C++ are virtually fully backwards compatible. Java 1.6, 10 years old, is still commercially supported. C++ compiler support depends, but can go back far, too. Golang only seems to support the latest two [minor] versions, however (so, 2 years, maybe). I don't see how Go's long-term support is in any way outstanding. And, we still need to see what happens when Go moves to version 2...


Go's lack of commercial support for previous versions is a non-issue. There's generally little reason why you can't just bump to the next minor version number. Go's 1.x promise means any code written for 1.1 will work on 1.9 and so on.


See The Go 1 Compatibility Promise[1]. Something programmed in Go 1.5 still works in Go 1.10

1. https://golang.org/doc/go1compat


What part of my statement makes you think I said anything to the contrary of that?


> Golang only seems to support the latest two [minor] versions, however (so, 2 years, maybe).

Go 1.0 was released on 28 Mar 2012.

Go is on a six-month release cycle[1]. Focusing on the number of minor versions is not only the wrong thing to do (because different languages release updates at different rates) but it's also factually incorrect, because by schedule, 2 minor releases happen in a year.

1 https://github.com/golang/go/wiki/Go-Release-Cycle


The other nice thing about C++ is that you can specify which standard you want to use -std=c++11, etc. when compiling. I hope AWS Lambda supports C++ someday too.

Anyway, I like Go too, and while it's not as formal as C++ (ISO standards), it's been very stable. Give it 10 years to know for certain.


6 years stability is all but unheard of? Maybe in the webdev world. Even there, really?


This is the stack Apex uses: https://apex.sh/#products


Six years for language stability is nothing compared to, for example, C89.


But for a fair comparison you'd need to include a lot of C libraries to match Go's standard library. How much have those libraries changed over the years?


(I wrote TFA.)

> But for a fair comparison you'd need to include a lot of C libraries to match Go's standard library. How much have those libraries changed over the years?

Yes, exactly. Something like C isn't a fair comparison because there's so little included in the core language. Think about everything that you get with Go's standard library: concurrency, HTTP (even HTTP2), TLS, JSON, XML, Zip encoding, Regexp, etc., and then consider C's APIs along with all the APIs of all the libraries you'd need in C to accomplish those tasks — all of that together would be a more fair comparison.

That might seem like apples and oranges, but for most real-world programs, people generally need a considerable subset of that functionality. C's stability is nothing to scoff at, but it's nowhere near as impressive an achievement.


A fairer comparison might be python3. Even if you assume python has only been stable since python3, it is still older than Golang since Go came out in 2009 and python 3.0 was released in 2008.


Or Common Lisp (same language since 1994, and almost identical in 1984; still modern.)


[flagged]


> The "Go is one of the most actively developed projects in the world" line is also complete bullshit;

I think this is a pretty uncharitable interpretation of what I said. I didn't say "the most", but rather "one of the most". Kubernetes, Docker, and Chromium are all huge projects with similar or higher commit rates, but think about the other 99.99% of other open-source projects. In comparison to them, anything of Go's scale is huge.

Try not to focus on the individual wording though. If you like, just read it as "a very active project".


It seems the same things can be said about .NET Core as well.


Hmm, .NET Core hasn't been stable for very long, and there is a lot of work to be done tbh.


.NET Framework has been pretty stable.

You can easily compile or run any .NET 1.1 code on .NET 4.7.1; that is 16 years of stability.


.NET Core, not .NET Framework.


[flagged]


Just pretend it is Go.


Wouldn’t that suffer from most of the issues Java currently does in a serverless environment? Or do you mean .NET Core with native compilation? I thought that wasn’t really used outside of UWP stuff.


Native compilation would count as a binary, and Lambda doesn't support arbitrary binaries yet.

There are arguments on this thread that they should have supported arbitrary binaries by using a generic RPC format, instead of the Go-specific RPC format.


All Xamarin apps are natively compiled on iDevices, and Unity uses their own AOT compiler, IL2CPP.

There is ongoing work on CoreRT, the UNIX version of .NET Native.



