As a project maintainer, I like it because it gives prospective users a quick and cheap way to test out service deployment, and it required nothing more from me than the already existing Dockerfile.
I guess the "safest" workflow would be to fork the repo before you click run, but the article doesn't say how it handles repeat clicks... does it create multiple environments, or does it prompt you? Off to test!
First run from a fork:
This created the revision "cloud-run-hello-00001" of the Cloud Run service "cloud-run-hello"...
After changing the button's HTML to point to my repo on my fork:
This created the revision "cloud-run-hello-00002" of the Cloud Run service "cloud-run-hello"...
There appears to be more vendor lock-in with this one, requiring Heroku-specific files...
I'm a bit behind on best practices with Heroku, so the heroku.yml config is new to me, and it says it doesn't replace app.json. This is where I feel like Cloud Run supporting "plain" Dockerfiles or buildpacks is great. I wonder if Heroku will follow suit and make it a bit easier to deploy "just a container"?
Disclaimer: I've worked on this project.
For the life of me, I could not figure out how to go from the button-based (apparently automatically inferred by Heroku?) deployment to a local Procfile-based deployment.
It was faster to rely on the alternatively provided Dockerfile to start a local deployment and hope that it was as up-to-date as the Heroku setup.
I find the idea of one-click deployments really appealing, but Heroku's vendor-lock-in-based implementation really turned me off their service.
So you are absolutely right that you can't simply go from an app.json file to a local development system in one step, as you (very likely) don't have the required infrastructure on your system.
The Procfile describes your application given some base installation (e.g. PHP 7.3), while the app.json in this case describes nothing in particular?
Any application requires both a base environment and instructions for application deployment. And that is how the Dockerfile for rss-bridge is constructed, too.
So what besides vendor lock-in is the advantage of Heroku's approach?
The way I understood it, app.json is there to customize the environment for the application to run in – providing ENV variables, required addons (think databases, memcache, etc.) and pre/post deploy scripts. A sort of configuration-as-code if you will. It is explicitly not used to define the actual processes that should be started when the app is deployed, that's what the Procfile is for, as long as you're running in an environment that supports Procfiles.
I'm not really sure what to tell you regarding vendor lock-in. Apart from the app.json the repo itself looks completely vendor-unaware, as it is simply a PHP application. It doesn't seem to make many assumptions regarding your (local) infrastructure but rather assumes you know how to get a PHP application to run on your server/computer. The presence of the app.json file is just an affordance to those who would want to try out the app without having to configure anything themselves.
On the contrary, now that I think of it. I always found Heroku to be rather non-locking, as you can just take your code and run it somewhere else. You need to provide some additional tooling around your deployments yourself in those cases, but that's true for all PaaS providers, isn't it? Heroku Addons are nice features, but usually simply services provided by third parties that are made available using automatically generated ENV variables, which you could simply copy over to wherever else your app is running.
1. Follow the steps at https://deploy.azure.com. This one greases the wheels for linking from a GitHub repo README for code that can be deployed straight to an Azure web app - you can just link to the site and it gets the repo URL from the referer header and uses a premade template to deploy it. You can also provide your own templates with custom parameters.
2. Link to the Azure portal ARM template deployment wizard using a deeplink that preloads a template from a public URL. I've never seen any official documentation for this, but https://www.noelbundick.com/posts/deploying-arm-templates-fr... covers it.
That makes sense, of course, but if that step were gone (maybe by checking referrer?) it would be a lot slicker!
It's already on our list: https://github.com/GoogleCloudPlatform/cloud-run-button/issu...
Ideally you'd just link to a URL like deploy.cloud.run and we'll figure out the rest from Referer.
Now I'm just waiting for WebSocket support so I can move my backend to Cloud Run as well.
Can you do a conditional in the Dockerfile, e.g. download a remote file if using Docker to build with certain parameters?
ADD http://example.com ./file.txt
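ADD itself can't be made conditional, but build arguments combined with a RUN shell conditional get you close. A minimal sketch (the ARG name, URL, and use of curl are placeholders, not from the thread, and assume curl exists in the base image):

```dockerfile
# Pass --build-arg FETCH_REMOTE=true to `docker build` to enable the download.
ARG FETCH_REMOTE=false
RUN if [ "$FETCH_REMOTE" = "true" ]; then \
      curl -fsSL -o ./file.txt http://example.com; \
    fi
```

Note the usual layer-cache caveat applies: the RUN line's text doesn't change when the remote file does, so the cached layer can go stale.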
The RUN cache is invalidated when the text of the RUN line itself changes, which can be bad if you update a remote zip archive that you download with `RUN curl ...` and then expect the image to be updated after a simple `docker build`. The same goes for `RUN apk add ...`, where a package might have received critical security updates that never make it into your image because the cached layer is reused.
COPY and ADD caches are invalidated when the hash of the actual file content that's added changes, which is usually what you want.
The only good use for ADD that I've found is for invalidating git clones from GitHub:
ADD https://api.github.com/repos/<user>/<repo>/git/refs/heads/<branch> version.json
RUN git clone --depth=1 --single-branch --branch=<branch> https://github.com/<user>/<repo>.git
If you're using Java, see this writeup for how different frameworks can affect startup time:
Similarly, if you're using Node or Python, you might want to see if any of your dependencies are enormous with lots of files and slow startup time -- you can check this locally by timing how long it takes to get to the initial listen() call and just print that wall time as you adjust dependencies.
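One cheap way to measure this (a sketch, not tied to any particular framework): record the wall time from process start to the moment the server socket is listening.

```python
import time
import http.server
import socketserver

start = time.monotonic()

# Heavy framework/dependency imports would go here; each one adds
# to the cold-start time measured below.

# Bind to an ephemeral port; once TCPServer returns, listen() has been called.
httpd = socketserver.TCPServer(("127.0.0.1", 0),
                               http.server.SimpleHTTPRequestHandler)
elapsed_ms = (time.monotonic() - start) * 1000
print(f"time to listen(): {elapsed_ms:.1f} ms")
httpd.server_close()
```

Run something like this as your real entrypoint with your real imports in place, then add or remove dependencies and watch the number move. Python also ships `python -X importtime` for a per-import breakdown.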
If you're building Golang and you're seeing slow cold starts... I have no idea how you're doing that. For development, a lot of us on the open source http://knative.dev/ side are using Go http servers that take tens of milliseconds to start up, so there's probably some other initialization that's slowing you down.
Does anyone have insight into how this compares to Heroku in terms of pricing/performance?
It might require a lot of conventions (which might not be worth it in the end), but as a quick deploy-and-experiment solution, it’d be super awesome.
When there is no Dockerfile, we use CNCF Buildpacks (https://buildpacks.io), similar to Heroku buildpacks.
Some apps without a Dockerfile are already deployable this way, as buildpacks detect the language/framework.
In fact Heroku is one of the founding contributors for Cloud Native Buildpacks (along with Pivotal, where I work).
Plus, I just finished making an auto-deploy-to-App-Engine workflow with GitHub Actions last night, so I can just push and it auto-deploys if all tests pass!
The project I created it for isn't public yet, but I made a gist that provides an example workflow and the steps required to make it work.
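For anyone wanting a starting point, a workflow along these lines can be sketched roughly as follows (the secret name, `make test` step, and action versions are assumptions on my part, and may differ from what the gist uses):

```yaml
name: Deploy to App Engine
on:
  push:
    branches: [main]
jobs:
  test-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Run the test suite first; the deploy step only runs if this passes.
      - run: make test
      - uses: google-github-actions/auth@v2
        with:
          credentials_json: ${{ secrets.GCP_SA_KEY }}
      - uses: google-github-actions/deploy-appengine@v2
```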
Disclaimer: I work on both Cloud Run and App Engine at Google.
Unlike something like Fargate, it supports automatic scaling of containers based on requests, so it will run zero containers if you get no traffic, and 100 containers if you get (for example) 1500 requests per sec. The fully-managed version has a pay-per-100ms of execution model, while the GKE-hosted version uses an existing GKE cluster you provide.
Don't think you could use Django on Cloud Run without issues, though, particularly with how it handles Sessions.
If you're doing any modern architecture with microservices using containers, you're primarily doing stateless things (even if it's your web frontend) and pushing the state off to somewhere like Redis/memcached/database.
You basically implied that web frontends don't run in a load-balanced multi-replica set up, which is not true.
Similarly, from what you said, one might think people don't deploy web frontends to Kubernetes (where containers come and go all the time as they're ephemeral, due to events like crashes and autoscaling), which is also not true.
If you’re writing anything that scales (i.e. has multiple replicas), you don't actually store any significant state wrt logins/sessions in your app; you push it out to external storage. Most web frameworks offer libraries or middleware that let you persist this "state" in external storage.
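The pattern boils down to keeping only an opaque session ID on the client and putting the state itself in a shared store any replica can reach. A toy sketch (a plain dict stands in for Redis/memcached/a database; in production this would be a network client):

```python
import json
import secrets

# Stand-in for an external store such as Redis or a database.
external_store = {}

def create_session(store, user_id):
    """Persist session state externally and return only an opaque ID."""
    session_id = secrets.token_hex(16)
    store[session_id] = json.dumps({"user_id": user_id})
    return session_id

def load_session(store, session_id):
    """Resolve a session ID back to its state, or None if unknown."""
    data = store.get(session_id)
    return json.loads(data) if data else None

sid = create_session(external_store, user_id=42)
# Any replica with access to the store can now resolve the session:
print(load_session(external_store, sid))  # → {'user_id': 42}
```

Since no replica holds the state in memory, containers can be killed or autoscaled freely without logging users out.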
We use it in conjunction with Google PubSub and Cloud Storage to evaluate ML models in production and are really happy with it.
Yes, you can create a file named app.json that prompts the user for environment variables that are set on the deployed application. See https://github.com/GoogleCloudPlatform/cloud-run-button/
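If I'm reading the repo right, an app.json along these lines prompts for the listed variables at deploy time (the variable names below are made up for illustration):

```json
{
  "name": "my-service",
  "env": {
    "API_KEY": {
      "description": "API key used by the backend",
      "required": true
    },
    "GREETING": {
      "description": "Message shown on the homepage",
      "value": "Hello"
    }
  }
}
```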
Has anyone tested how long this button actually takes?
Last time I checked, it's at least a 7-minute wait to deploy a "Hello World" to GCP...
I normally see 1-2 minute deployment times depending on the application.
(I work for GCP)
We use Firebase and GCP at our company and have had a good experience aside from slow deploys with App Engine. What parts do you get to work on?
It's holding up pretty well so far.
> What parts do you get to work on?
Not sure if this answers your question, but I also use Cloud SQL, Cloud Build, GKE, GCE, Stackdriver, BigQuery, etc.