Cloud Functions: Go 1.11 is now a supported language (cloud.google.com)
252 points by yannikyeo 30 days ago | 104 comments

One thing that is not super clear from the blog post, and which may be a surprise to some, is that Cloud Functions will require you to upload your source code. The compilation happens not on your end but on Google servers.

This is probably not a surprise for folks who have been using Python or Node for cloud functions, but it is something to think about.

Other FaaS providers don't have this requirement, and just take a Linux x86_64 binary.

Indeed. If you want to compile the binary yourself/don't want to upload source code, we have a serverless containers product currently available as an early preview (sign up at g.co/serverlesscontainers). This would allow you to compile your binary locally, write a simple Dockerfile and then build/deploy the resulting container.
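For the container route described above, a minimal multi-stage Dockerfile is often all that's needed. This is a sketch only; the image names and build flags are common conventions, not anything prescribed by the preview:

```dockerfile
# Build stage: compile a static Linux binary.
FROM golang:1.11 AS build
WORKDIR /app
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -o server .

# Runtime stage: ship just the binary in a minimal base image.
FROM gcr.io/distroless/static
COPY --from=build /app/server /server
ENTRYPOINT ["/server"]
```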

This is great. It sounds like what people were hoping would come from Zeit Now, though in their latest major release they moved away from serverless docker deployments. Their community was quite upset at that decision I think. I will have to check this out!

I've been thinking of using this approach for a Golang MMO game SaaS, which is really an aggregation of a bunch of FaaS lambdas tied to an account. What uploading source allows you to do is a syntactic scan of all of the code, particularly the package imports and function calls. All of the above can be whitelisted, which adds security. Also, you can munge the function calls against a generated munged version of the available libraries, which also adds security. (In addition to using containers under the covers.)

Like Screeps? Even more niche? :)

That looks cool.

What I'm describing is for implementing MMOs, though.

Curious, wouldn’t you want the source code to compile on the final platform? (If not using docker) I’m not familiar with binary or lower level workings but I figured there would be certain benefits of compiling the same code on Linux vs Mac or Windows for example.

The Go compiler produces the same binary regardless of which host OS it is run on. i.e., the OS you compile a Go program on doesn't matter (assuming pure Go code, not including C etc).

I think he was talking more about the machine architecture than the OS.

As of Go 1.5 the compiler works for any supported target architecture.

Thx for the info, I don’t use Go typically.

I was wondering about this, given the examples. AWS's Golang support for Lambda involves including their library and passing your function to a Lambda initializer function. Which I'm sure is all that's happening with this, but it's weird to hand off the Go source to Google. I wonder if they are multiplexing multiple functions in single binaries or using special Go toolchains. Or just don't want to lock into specific backends yet.

No, there's no real magic. Each function runs in its own process (which is inside a gVisor sandbox).

The toolchain is completely unmodified.

Disclaimer: I work at GCP and worked on this product.

Unlikely. Go's init blocks would get executed for all of the multiplexed serverless functions bundled into the one binary, which adds side effects.
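The init side effect is easy to demonstrate. In the sketch below both init functions live in one package purely for illustration; in a real bundle each function's package would contribute its own:

```go
package main

import "fmt"

var initialized []string

// Every init runs at process start, in declaration order, whether or
// not the function it belongs to is ever invoked. That is the side
// effect of bundling multiple functions into one binary.
func init() { initialized = append(initialized, "funcA") }
func init() { initialized = append(initialized, "funcB") }

func main() {
	// Both inits have already run before main, even if only one
	// "function" would ever be triggered.
	fmt.Println(initialized)
}
```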

> AWS's Golang support for Lambda involves including their library and passing your function to a Lambda initializer function

AWS announced 'native' Golang support last year; since then there hasn't been a need to use a Python or Node 'shim' to bootstrap the Golang executable like this. You just upload your compiled binary to S3 and provide the path to AWS Lambda.

That would be my guess. Since this is all event driven code, why wouldn’t they bundle them for improved system utilization?

Because Go's namespacing / packaging model doesn't have the kinds of isolation required to prevent you from accessing the code and variables of the other packages you've been bundled with, compared to NodeJS, which does (VM feature).

This is a much better solution, I think. Uploading your source code is often easier than uploading your binaries. If, for example, you have a new contributor you don't yet trust creating a cloud function in Go and you know enough Go to review the code, but it's not convenient for you to compile it yourself, you can just upload the code yourself. You can't easily review the binary.

The fact that it's easier to do things this way just shows how immature the toolchain surrounding serverless development is. It shouldn't be difficult to simulate the serverless environment on a workstation, and yet somehow that's a huge gap, even for Lambda.

Fair enough, but I don't think it's wrong to want a PaaS/FaaS that automagically compiles things. You'll be hard pressed to convince anyone who genuinely likes this approach to switch over to having their IDE, developer tools, or CI do it.

The Cloud Functions emulator works well and helps: https://cloud.google.com/functions/docs/emulator

Then again, what's the point of controlling your own repository of binaries if their only goal in life is to be deployed somewhere else anyway? It's much easier to just cut out the middleman.

Your code review and deployment pipeline should be all automated from day zero. Setting up CI pays for itself within the first couple of days, and keeps paying off longer term, mostly by avoiding human error and the confusion that ensues.

CI is a continuous integration tool. There are inherent latencies to committing, pushing, deploying to AWS, running tests. Mature development tooling, if it's not REPL-based, will at least allow a local debugger to step through the code compiled/run locally, without these remote latencies.

Both tools are necessary for fast engineering, but in different contexts.

I didn't say that you don't need CI. I use CI regularly, and I think it's best for CI to be for testing, and to minimize the build artifacts except those used for test results.

A lot of developers have this preference for client-side code. Since its early days, Heroku has been building the client-side bundles for Rails.

The parent’s point was that contributors shouldn’t have permissions to upload binaries; they only commit source code and your CI infrastructure compiles, releases, and deploys it.

If I understand correctly, the binary is created from the source code, right? Am I missing anything?

On AWS Lambda, which has had Go support for almost a year now, you upload the binary, which you create from the source code on your own system, using the go compiler. On Google Cloud Functions, you upload the source code and Google's servers build it.

I don't really understand what you are saying here. It should not be difficult to run `go build` locally and then tell a tool to upload `foobar` instead of `main.go`.

Uploading source code is a dealbreaker for me because I do not trust Google or myself. Google probably has language in their terms that they may do whatever they want with anything you upload to them.

I also do not trust myself to properly configure security parameters around this with the risk of leaking source code.

Binaries are a great strength of Go. Too bad this platform doesn't use them. I will take a peek at the new Container platform that they are building, but that sounds more heavyweight, which probably comes with a higher price. (I have not seen any actual pricing)

can't your security params be read out of the binary in any case? i'm not sure how binaries offer more security.

His post is super unclear but I think he is concerned about someone getting their hands on his source code. In general, a compiled Go binary is not trivial to reverse engineer, so there’s some security through obscurity there.

Not a whole lot, though. A determined reverse engineer will likely be able to grab something close to your source code without much difficulty, assuming you haven't applied any wacky obfuscation.

Not 'someone'. Google.

Your secrets are not made safe by embedding them in a binary. Compilation is not security (not necessarily, and certainly not in the Go compiler's case)...

Hey all - Seth from Google here. We're really excited to bring Go to Google Cloud Functions (GCF). Please try it out, give us feedback, and let us know if you have any questions!

As we all know, it took Google a long time to get this. Can you give an overview of some of the technical challenges you faced? Just curious. Thanks.

Cloud Functions PM here. I can give some insight:

* We've been running a private early access preview/alpha since last August.

* This was our first compiled language on Cloud Functions, which came with its own set of challenges.

* It took us a while to find the right approach for supporting dependencies (both Go Modules and vendoring are supported). Unlike other providers, when you deploy your source code, Cloud Functions will automatically install dependencies listed in your go.mod file.
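For readers unfamiliar with Go Modules: the dependency manifest that gets read at deploy time is a plain go.mod file at the root of the uploaded source. A hypothetical example (the module path and the dependency are made up for illustration):

```
module example.com/hello-function

go 1.11

require github.com/google/uuid v1.1.0
```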

* Our testers gave us a ton of feedback that helped us polish the developer experience -- we identified and fixed many rough edges related to error messages during deployment/build and errors at runtime. Serverless products can be a bit opaque (since you can't just ssh into a machine), so getting this right is important.

I'd like to say that there was one big, interesting challenge that we had to tackle. But the reality is that we worked through many small details that only became apparent during testing. We wanted to address these so that we could offer a high quality experience for our public beta launch. We owe our alpha testers major credit for helping us find and solve issues.

Speaking of testers -- if you have feedback on the runtime, we'd love to hear from you in our Cloud Functions Beta Tester group [1].

[1] https://cloud.google.com/functions/docs/getting-support#beta...

Is there a reason dependency management happens like this? We currently deploy Go Lambda functions in AWS with the help of the Serverless Framework, and it just uploads the cross-compiled binary, not the whole project.

Why wouldn't the binary be the deployed unit in this case?

While it's a lot more work for them, and some may already have the infrastructure set up for deploying their own binary, I think having GCP handle the end-to-end there is more user friendly in general. I can quickly write a Cloud Function from any computer, without having to deal with setting up the toolchains. If you want to just run binaries, it sounds like Cloud Functions isn't what you're looking for.

Maybe Google Native Client support?


No, it's not related to this. GOOS=linux and GOARCH=amd64 for GCF deployments. The sandboxing technology used is based on gVisor.

We're working on arbitrary binary deployments, you can sign up for that here: g.co/serverlesscontainers

Disclaimer: I work on GCP.

Thanks for the response. Would like to see more granular triggers (event types), especially with Firebase. Also, would like to see more examples with Firestore.

Which ones would you like to see? We're always looking for new use cases to support.

These are not specific to Go. Firebase Authentication -> user account going from disabled to enabled. Firebase Authentication -> when a new phone number is associated. Firestore -> field-level triggers. Right now we have only document-level triggers.

> Firestore -> field level triggers. Right now we have only document level trigger.

For what it's worth, if you have an onUpdate trigger on a Firestore database, the event you receive has the before & after state of the change, which would include field-level changes: https://firebase.google.com/docs/functions/firestore-events#...
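Since the event carries both snapshots, a handler can approximate field-level triggering itself by diffing the two and returning early. This is a sketch only; the struct below is a simplified stand-in, not the exact Cloud Functions Firestore event schema:

```go
package main

import "fmt"

// FirestoreValue is a simplified, hypothetical document snapshot.
type FirestoreValue struct {
	Fields map[string]string
}

// FirestoreEvent is a simplified, hypothetical onUpdate event carrying
// the before (OldValue) and after (Value) state of the document.
type FirestoreEvent struct {
	OldValue FirestoreValue
	Value    FirestoreValue
}

// fieldChanged lets a handler bail out early for updates to fields it
// doesn't care about.
func fieldChanged(e FirestoreEvent, field string) bool {
	return e.OldValue.Fields[field] != e.Value.Fields[field]
}

func main() {
	e := FirestoreEvent{
		OldValue: FirestoreValue{Fields: map[string]string{"status": "pending"}},
		Value:    FirestoreValue{Fields: map[string]string{"status": "shipped"}},
	}
	// Only react when the field we watch actually changed.
	fmt.Println(fieldChanged(e, "status"), fieldChanged(e, "owner"))
}
```

The function still gets invoked for every update, so this saves work inside the handler rather than invocations, which is exactly the cost concern raised below.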

Is there a specific use-case that this doesn't handle for you?

Note: Work at GCP, not on Functions.

Well, if I have an onUpdate trigger, my function is going to be triggered for every update on the document (even for fields I don't care about). You could say that triggering functions shouldn't cost much, but that's not the right way to do it. Correct me if I am wrong.

My use-case is simple. Want to get my CF triggered if the value of a particular field in a doc changes. Thanks.

Does this effort take you closer to the (supposed) goal of running arbitrary X86/ARM Linux binaries as cloud functions, or is that a completely different direction?

At the sandboxing level, it's already possible (you can upload any binary and fork/exec it from one of the supported languages). That's made possible by gVisor, which is the underlying sandbox technology used in GAE and GCF.

As for making that an actual product, we're working on that, too. Sign up for the alpha here:


Disclaimer: I work on GCP.

Do you know of any alphas running for App Engine? I'm interested in testing.

There's a beta for Go 1.11 on App Engine...


If you didn't mean Go, specifically, then the alpha I linked above might be up your alley.


I think AWS Lambda supports Go 1.x. Does GCF plan to stick to Go 1.11.x or broaden support to other Go versions in the future? What will happen to 1.11.x support when 1.12 comes out?

We use unpatched language toolchains/runtimes for GCF and second-generation GAE. That means a drastically reduced time to release for new language versions. (Maintaining patched runtimes was a significant burden to making new versions of Go available on GAE.)

For long-term support of various languages, see the GCP deprecation policy: https://cloud.google.com/terms/deprecation

Especially this sentence: "excluding [...] support for a programming language version that is no longer publicly supported by, or receiving security updates from, the organization maintaining that programming language"

So, that means we're committed to supporting the two most recent major version releases of Go, as per the Go release policy: https://github.com/golang/go/wiki/Go-Release-Cycle#release-m...

Disclaimer: I work on GCP and used to work on Go language releases.

We will include support for more runtimes and language versions in the future.

What's next on the roadmap for Cloud Functions? Some things I'm curious about are work related to cold starts and future runtimes (specifically Ruby), but happy to hear about anything.

While I can't share details publicly, I can confirm that we're actively working on new runtimes and performance improvements.

IIRC only the Node.js runtime has the ability to be emulated locally. Is this still the case? If I wanted to test my function locally, how would I go about doing that?

Use regular "go test" for unit testing. If you're using metadata inside your function (e.g., to extract an event ID) you can use metadata.NewContext to pass into your function while testing: https://godoc.org/cloud.google.com/go/functions/metadata#New...

If you just want to execute your function, create another package main and start an HTTP server, as you'd normally do.

There are some examples on this page: https://cloud.google.com/functions/docs/bestpractices/testin...

We're thinking about ways to improve the local development/testing story, and are keen to hear any ideas/feedback.

Disclaimer: I work at Google, and worked on this product.

Just do what Azure does with Functions. I've used their VS Code local debugging and it "just worked" out of the box.

This might be a very specific question, but any plans to include packages in the Go runtime for headless Chrome? I know it is already available in Node.js.

The base set of system packages should be the same between the two runtimes.

I work at GCP and on this project.

Hey Seth, I was wondering if you guys ever discussed the previous AWS vs GCP performance post that was on here a while back.

We worked with the original authors and they updated their original benchmarks. We are still compiling data and running our own benchmarks in other areas. Things were on pause for a bit as many Googlers took time off for the holidays to spend time with friends and family. I’ll check in on the team though. Thanks for the reminder!

Do you have a link to this please?

Awesome, thank you.

As an aside, and since there are some Googlers here, the heading animation on the blog has some issues in Firefox. They are super annoying. Would be nice if Google supported all browsers.


I’ll raise it with the team, thanks!

I deleted the album because somebody kindly pointed out it might have revealed more than it should. Feel free to email me if you need any help reproducing the issue.

We've used GCF for some production tasks and it's worked pretty well for us.

Would like to see some more runtimes/languages. I'm hoping AWS' recent layers implementation has made this more of a focus at Google. I'm curious how the implementation of Go has affected the ease of integrating other languages.

> I'm curious how the implementation of Go has affected the ease of integrating other languages.

In some ways, it helps. You start to see similar issues arise and know what to look out for when you're launching a new runtime. In other ways, every language has its peculiarities and its own set of design considerations.

Launching/polishing a completely new language still takes a decent amount of work. Launching a new version of an existing language tends to be much quicker.

I've never used Lambda Layers, though. Do you think a similar approach is something that could be implemented?

Maybe even just something for compiled languages like Go, Crystal, etc that will just run the binary?

It's definitely something that we've discussed.

Would you consider running a container that could be run like Cloud Functions? This container could run the binary that you create. It's not something that we support today but I'm curious whether this would meet your needs.

> running a container that could be run like Cloud Functions

Does this mean we actually run the container ourselves on our GKE cluster or in a VM? Or do you mean a "container" runtime for Cloud Functions? Both would be interesting, but we'd prefer the latter since there would be less to manage. I'd be interested to see the performance of it.

If you're interested in something along the lines of the latter, you can sign up for an early access preview here: g.co/serverlesscontainers

> It's definitely something that we've discussed.

This is something Cloud Native Buildpacks (buildpacks.io) are intended to make easy. We hang out on buildpacks.slack.com, if you'd like to come pick our brains.

Given what we've seen with App Engine, they've been moving away from custom language toolchains and toward a more containerized deployment approach.

You're right. If you look around in the logs for the deployment output, you'll see that both Cloud Functions and App Engine go through a container build step via Cloud Build.

Nice! Go 1.11, especially modules, is a great step forward.

Actually tried to use it in production these last few months. The core of the experience is awesome, didn't run into any issues at all. The problem is that all the tooling like errcheck doesn't support go modules yet. So we ended up going back to the dep package manager. Will try again in a couple of months. Here's a ticket about the issue: https://github.com/golang/go/issues/24661

I noticed that, too. The only actual third-party tooling we use is realize [1] which currently doesn’t work outside of GOPATH. Luckily, we work with Docker anyway so our code is mounted inside GOPATH and we can use modules commands on the host.

1: https://github.com/oxequa/realize

Isn't this all about to be replaced by serverless containers? With FaaS deployments just taking docker containers or building your code into one when you upload?

Why the continued work on a one-off runtime and http response spec instead?

> Isn't this all about to be replaced by serverless containers?

Maybe soon, maybe later, maybe never. But it makes sense for a company of Google's size to keep working on the current things they have. Otherwise you wind up with an Osborne Effect and/or frustrated customers.

It's simpler to just write code, rather than have to try to set up a container.

> building your code into one when you upload

There's already a publish step, so it can be automatic.

If you need to set up some standalone functions, this is a great way to do it. I run quite a few utility functions like this; they are simple to write and deploy.

If all I want is an HTTP endpoint somewhere, I'm actually not sure why I'd want a serverless container when a standalone .js file does the trick.

Single-functionality HTTP handlers in Go can now be served on HA. Nice.

Trying to make sense of it from a business point of view. Could be useful for IoT applications. Event-driven, lightweight, scalable.

Worth the wait!

Since it's compiled (compared to Python or JS), is this faster/lighter?

Any performance difference will be similar to if you were running on any other platform.

So, if for your particular application, Go has a faster startup/cold start time than Python or Node, then yes, it'll be faster, overall.

Very difficult to speak about performance in general terms, though.

I will say that there is no kind of wrapper library or wrapper runtime like some other serverless platforms might depend on.

Better late than never.

You're not wrong. Cloud Functions Product Manager here. We've been running a private early access preview since we first showed this at GopherCon in August. I'm really happy to see this first step but we have a lot of work ahead of us.

Meta comment: why did the HN title change to "Go 1.11 is now a supported language"?

This post is specifically about Google Cloud Functions, not Go 1.11 in general (which is over 6 months old).

edit: thanks, mods!

Yeah, mods, the Cloud Functions context was useful. Can you change it back?

I assumed as much, but the title could definitely be better.

> func F(w http.ResponseWriter, r *http.Request)

And here I was thinking the days of C compilers with a 10-character limit for identifiers in the symbol table were long gone.

You can name the function anything you’d like. When you deploy the function, you specify the entry point function name. Many of our alpha testers used “F” for function because it’s quick and easy, but that’s not a requirement :)


Thank you for the feedback. We do have similar examples for the other programming languages because that’s what the community uses most frequently.

You’re still free to name the entry function whatever you prefer to match the styles and engineering practices of your organization :)

For better or worse, it's convention to use really short (as in, single character when possible) variable names in Go.


Disclaimer: I work at Google, but not on GCF or Go.

I guess I forgot about those "best practices".

Thanks for the heads up.

I don't think "Variable names in Go should be short rather than long" is an accurate description of best practice. The wiki should probably be changed.

The second paragraph there is more in line with best practice...

"The basic rule: the further from its declaration that a name is used, the more descriptive the name must be. For a method receiver, one or two letters is sufficient. Common variables such as loop indices and readers can be a single letter (i, r). More unusual things and global variables need more descriptive names."

In the case of a Cloud Function, a short name like "F" is probably fine. While it needs to be exported to get deployed properly, it isn't a library function and is very unlikely to be imported by other packages, so it's really more like "func main" than anything.

Most people will have a single Cloud Function in a package, and I don't think you'd reasonably have f/f.go with a function named "F".

Also, in this case, the “f(w, r)” pattern is pervasive in Go web code. Anyone who has written web code in Go will instantly understand what it means and how to use it.
