GitHub: Widespread Injection Vulnerabilities in Actions (chromium.org)
624 points by looperhacks 21 days ago | 145 comments

Interpreting text for commands has an interesting parallel to a major World of Warcraft bug. It turns out that you could run arbitrary Lua code in your chat terminal by using the "/run ..." command.

Enterprising thieves figured out that one of the things you could do was redefine functions, in particular the 'RemoveExtraSpaces' function that gets run against every chat line. If you set RemoveExtraSpaces=RunScript you've told WoW to execute every chat message displayed in your terminal.

Attackers just had to trick you into typing "/run RemoveExtraSpaces=RunScript" into your terminal; after that, all they had to do was chat code to you and it would execute. That basically gave them full control of anything Lua can do in your game -- including sending messages to your friends asking them to also run the attack command.
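The mechanics translate outside Lua too. A minimal sketch in Python (not actual WoW code; the names just mirror the Lua ones) of why a reassignable text-filter hook is so dangerous:

```python
# Sketch of the bug class: a chat pipeline whose pre-display text filter
# is a plain reassignable name, like WoW's RemoveExtraSpaces.
import re

def remove_extra_spaces(text):
    # the legitimate filter: collapse runs of spaces
    return re.sub(r" +", " ", text)

def run_script(text):
    # stand-in for WoW's RunScript: executes chat text as code
    return eval(text)

filters = {"pre_display": remove_extra_spaces}

def display_chat(message):
    return filters["pre_display"](message)

# Normal operation: whitespace is collapsed.
assert display_chat("hello    world") == "hello world"

# The "attack": the victim is tricked into swapping the hook...
filters["pre_display"] = run_script
# ...after which every incoming chat line is executed as code.
assert display_chat("1 + 1") == 2
```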

Kinda reminds me of the "don't type anything into the browser console" warnings that sites like Facebook etc. emit so that people don't type in commands strangers gave them, believing it allows them to "hack" others, while in reality it only hacks themselves.

Reminds me of when I was a 12-year-old on IRC typing "/say /con/con" into really busy channels and watching nearly everyone disconnect, and laughing like a moronic teenager with my friends

We used to do this stuff on text MUDs. People would make triggers to do certain things based upon text the MUD sent them. But a lot of people failed to properly "anchor" their triggers (you anchored them with '^' like in the regex world).

So you would send people a message with a series of commands to try to exploit commonly used triggers in order to make people give you all their items.
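For anyone who never played with MUD clients: the anchoring issue is plain regex hygiene. A small Python illustration (the patterns are invented for the example):

```python
import re

# Unanchored trigger: matches anywhere in a line, so an attacker can
# embed the trigger text inside an ordinary chat message.
unanchored = re.compile(r"(\w+) gives you a sword")

# Anchored trigger: only fires when the line *starts* with the pattern,
# i.e. when it's a real game message rather than text quoted in chat.
anchored = re.compile(r"^(\w+) gives you a sword")

game_msg = "Bob gives you a sword"
chat_msg = "Eve says 'Bob gives you a sword'"

assert unanchored.search(game_msg) and unanchored.search(chat_msg)  # both fire
assert anchored.search(game_msg) and not anchored.search(chat_msg)  # only the real one
```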

Ahh, fond memories of writing code and triggers for MUDs. Frustrating at times, but it helped me learn programming in an interactive and iterative manner.

That is amazing. Essentially neurolinguistic programming à la Snow Crash.

DCC SEND +++ATH0 had the same effect for modem users in the day.

For Hayes modems, this wouldn't (well, shouldn't) have worked, since it requires a one second pause between the last '+' and the command. It's part of Hayes' patent IIRC. I believe it was also only processed on the transmitted traffic, not the received.

What does a competing and not-wanting-to-infringe-a-patent modem company do? They just don't implement the delay, and ignore the direction of the traffic while they're at it. Problem solved! (head-to-desk, repeatedly)
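The guard-time check is easy to model. A toy Python sketch (the timings and API are invented, and real Hayes logic additionally requires silence *after* the '+++', which is simplified away here):

```python
# Toy model of Hayes "+++" escape detection, with and without the
# patented one-second guard time before the first '+'.
def detect_escape(events, require_guard):
    """events: list of (char, gap_seconds_before_char) tuples.
    Returns True if the modem would drop to command mode."""
    plusses = 0
    for ch, gap in events:
        if ch == "+" and (plusses > 0 or not require_guard or gap >= 1.0):
            plusses += 1
            if plusses == 3:
                return True
        else:
            plusses = 0
    return False

# "+++" embedded mid-stream in normal traffic, no pause before it:
stream = [("a", 0.0), ("+", 0.0), ("+", 0.0), ("+", 0.0)]
assert detect_escape(stream, require_guard=False) is True   # cheap clone: hangs up
assert detect_escape(stream, require_guard=True) is False   # guard time: ignored
```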

> I believe it was also only processed on the transmitted traffic, not the received.

I believe you're correct, although it's been ~25 years...

Which explains why the command above isn't exactly correct, IIRC. What you had to do was send a CTCP ECHO containing the "+++ATH0" to a channel -- or an individual user, but sending it to an entire channel full of people was a helluva lot more fun!

Anyways, I don't recall the exact command nowadays but it was

  /CTCP #channel ECHO Goodbye +++ATH0
or something very similar.

As long as the "victim's" IRC client wasn't ignoring those CTCP commands, it would "echo" the text and, as expected, command their modem to hang up.

Perhaps it shouldn't have worked... but I promise you that it absolutely did.

(Also, you could do the same thing by putting the "command" into the payload of an (ICMP) "ping" -- and that's a completely different thing from the "ping of death" that was discovered in that same era.)



Direction was not ignored. People would be disconnected when their client replied to the DCC request or CTCP ping.

I'm not sure the vulnerable modems would ignore the delay because of patents. Would love if you have any reference. I remember a Smart Link softmodem respected the delay, but an Intel softmodem did not. Always assumed it was an implementation bug on the part of the Intel one.

The US Robotics external modems (from 14.4k to 56k) I had certainly required the pause. You needed a pause before AND after +++, then ATH0 with a CR.

Yep, just as my Smartlink softmodem.

Ping with embedded +++ATH worked too on some modems!

Of course you needed to have the nous to do this in the first place, and an IRC server that showed users' hostnames/IP addresses -- I fear that I'm showing my age!

IIRC it worked against Microsoft WebTV set-top boxes. Some kids got in a lot of trouble by causing a bunch of them to hang up and dial 911 or something.

Some speech synthesizers for blind people, particularly old versions of Eloquence, crashed when they tried to say specific pieces of text. They often took down the screen reader with them, so the only thing you could do was a hard reset. An example of such a crashword is a mistyping of "wednesday" where the n is replaced with an h.

People were posting those crashwords in chats, on MUDs, or even as usernames. Some places where a lot of blind people hung out even contained specific filters for those.

There were also programs that didn't escape SAPI5 tags properly, so, by posting particularly clever bits of XML, you could change parameters like the pitch, speech rate, volume, or even the currently active voice, and make users, particularly younger users, scared of weird ghosts in their computers.

This is absolutely fascinating and I'd love to read more.

That is super interesting. When did this happen? What's the worst that happened through this?

It's astonishing nobody found this till 2016, given that people were stealing WoW gold with keyloggers installed through 0-day Flash vulns a few years before that.

I think it would be a little harder to make monetary gains off of this. You still needed to be in physical proximity to trade. I'm not sure if you had the ability to script over the mailbox and change recipients, that could have worked perhaps.

You'd get much more bang for the buck getting access to the whole account.

I'm not sure. I do know that you could, for instance, have your gold sent to a scammer with little to no recourse. It may be possible that this bug still exists because it's one of those "bugs that is also a feature".

Blizzard's response was to modify /run to pop up a dialog first warning you that /run is used by scammers and requires you to confirm that you know what you're doing.

I'm surprised that Project Zero waited out the last 30 days before disclosure, since GitHub themselves publicly announced on October 1 that the whole searching-for-magic-strings-in-stdout-as-an-RPC-channel approach is broken by design.

When I saw the announcement a month ago I assumed that GitHub had fully disabled the mechanism of reading commands from stdout, but looking at the current docs it seems that they disabled only the 5 most easily exploitable of the 14 commands which can be called via stdout. This seems like typical "we fixed it, your PoC doesn't work anymore" while the root cause remains. As can sometimes be seen with hardware vendors: "we fixed the hard-coded backdoor password, now there is a new hard-coded backdoor password".

I'm not sure. When using `::set-output name=spam::eggs`, you seem to have to refer to it via `steps.foo.outputs.spam`. So if you keep steps that set outputs very simple, and/or the set-output echo is the last command of the step, then it should mitigate that?

Is the general consensus that `$GITHUB_ENV` offers less attack surface, despite not being scoped to a step?

Some guidance on this would be great from Github's side. Also appreciated: a way of turning off magic stdout strings in the YAML, and a way to set outputs via a more robust mechanism, and/or to scope which steps can modify which environmental variables. I get not wanting to break existing CI runs, but giving the people who want it an option to harden the CI proactively would be great. I don't really care if this would break some existing actions, either - seems fine to do for opt-in?
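For anyone wondering what the runner-side scanning roughly looks like, here's an illustrative Python sketch -- not GitHub's actual parser, just the general "grep stdout for magic strings" shape that makes the injection possible:

```python
import re

# Rough sketch of a CI runner scanning build output for in-band
# "workflow commands" (the regex is illustrative, not GitHub's exact one).
CMD = re.compile(r"^::(set-env|set-output) name=([^:]+)::(.*)$")

env, outputs = {}, {}

def process_line(line):
    m = CMD.match(line)
    if not m:
        return
    kind, name, value = m.groups()
    (env if kind == "set-env" else outputs)[name] = value

# A legitimate step output:
process_line("::set-output name=spam::eggs")
# Untrusted tool output that happens to contain the magic string --
# the runner has no way to tell the difference:
process_line("::set-env name=NODE_OPTIONS::--require /tmp/evil.js")

assert outputs["spam"] == "eggs"
assert env["NODE_OPTIONS"] == "--require /tmp/evil.js"
```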

Most useful comment so far in this thread.

I've been using $GITHUB_ENV for setting environment variables, and though I read about the stdout commands I never got around to using them.

It's one of those "but of course" moments, realising that any CLI tool can trigger commands with its stdout.

As you mentioned, setting outputs is rather crucial, and I guess using stdout is the only way to do that.

There are definitely a lot of security things to consider when using actions. I read the code of any community actions I use but, especially after reading this alert, I feel like there are a lot of places where malicious code could cause damage.

It seems like most people here don't see what the root cause here is. System and user level processes with high order 'actions' for idempotent handling of secure channel operations (session negotiation, out-of-band data) are vulnerable to xss, side-channel attack for 'actions' that use stdout on/at execution of low order operations handling packed binary data that upon execution may contain shell code or encoded instructions.

they will patch a PoC, but how will we know the root cause is patched? the service parameters can be fuzzed by someone to check for injection (if they can).

tldr; I don't know what the root cause is.

I think we’re getting pretty close to understanding the root cause.

Tools that you run in actions can trigger the stdout commands (setting env variables and a few others) via printing to the console.

To do some damage though they need a few other things to happen. The example given is a node process that is run after the attacker has set an env variable with arbitrary code, which node reads and executes.

Additionally since some actions print the contents of PRs and Issues to the console, these can effectively do the same thing. So the attacker doesn’t need to control a cli tool stdout output, they just need to create a PR/Issue with the malicious code. If an automated workflow writes the PR/Issue to stdout, then the malicious code in the PR/Issue is executed.

Since it's possible for any action to run commands by writing to stdout, you have to trust that none of the actions you use print malicious stuff to stdout. They themselves might not be malicious, but they might print stupid stuff that contains untrusted input, like the contents of PRs/Issues.

It’s not made any easier by the fact that the workflows are in the repo for the attacker to read and study, so they can see exactly what will happen before their attack.

This whole method of triggering commands via stdout is very dodgy, and there is no way to either turn it off, or narrow the scope of what is executed, and it’s not easy to see what happened after the fact either.

At the minute you just have to be very diligent at checking that your actions aren’t printing stupid stuff to stdout that could cause execution of malicious code later in your workflow.

If it’s only the setting of env variables that causes issues, then they might be able to just turn that off because there is another safer mechanism (writing to $GITHUB_ENV) to set env variables.

There might very well be other attack surfaces though that I haven’t thought of.
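On the $GITHUB_ENV point: the file-based mechanism is safer precisely because nothing scans stdout -- only code that deliberately writes to a dedicated file can set variables. A rough Python sketch of the consumer side (illustrative, not GitHub's code):

```python
import os
import tempfile

# The runner reads key=value pairs from a dedicated file whose path it
# exported as $GITHUB_ENV; untrusted text printed to stdout never gets here.
def read_env_file(path):
    env = {}
    with open(path) as f:
        for line in f:
            if "=" in line:
                key, value = line.rstrip("\n").split("=", 1)
                env[key] = value
    return env

# A step that *intends* to set a variable writes to the file:
with tempfile.NamedTemporaryFile("w", suffix=".env", delete=False) as f:
    f.write("DEPLOY_TARGET=staging\n")
    path = f.name

assert read_env_file(path) == {"DEPLOY_TARGET": "staging"}
os.unlink(path)
```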

xdr was developed by sun, i don't think that is the issue here. I think it is the service that rpc doesn't recognize. i'm assuming in this case it is rails under ruby or something using ssh or ssl that require secure and non-blocking calls to low order operations.

The issue is in the handling of the connection. This is the issue i'm having right now with [the] wrapping [of] secure binary streams of [un]trusted data with authentic transport control.

eg tls 1_2 over ssl

Ninety days.

"Text as a cross-application data transfer format" and the Unix Philosophy of "chains of small programs that each do a single thing well" are what lead to exploits like these.

I don't know what the proper system design is, but it isn't these two things. Disagree all you like.

I disagree. This bug is about trying to embed control commands into an arbitrary non-structured data stream. This is a bad idea, no matter if it's text or binary. The proper way to do this would be like the report says: open a separate channel from control commands. This is just like any other injection attack and not related to the unix philosophy.

It's the "non-structured" part of what you said that I tried, and failed, to point out with my comment.

You don't need structure for secure control though, as long as the control channel is isolated. It's easier to make an insecure program when dealing with loosely described text streams, but it's by no means inevitable.

Doesn't have to be text.

That's absolutely not what leads to exploits like these. What leads to exploits like these is "Interpreting output from untrusted code (i.e. output from arbitrary commands) in a manner which allows execution of protected commands (i.e. setting env vars)". Which is like the primary reason for most security vulnerabilities.

good point. i think 'setting env vars' is the bigger issue here.

I think switching text for a structured format is the solution. I'd go for some encoding of s-expressions, since it's the simplest format that supports reliable splitting into atoms and nesting.
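s-expressions would do it; even simpler, netstrings-style length prefixing achieves the same injection-proof property. A Python sketch of that idea (not a proposal GitHub has made, just an illustration of framing that payload bytes can never break out of):

```python
# Netstrings-style framing: "<length>:<payload>,". Because the payload
# length is declared up front, no byte sequence inside the payload can
# ever be mistaken for framing or for a command.
def encode(payload: bytes) -> bytes:
    return str(len(payload)).encode() + b":" + payload + b","

def decode(buf: bytes):
    n, rest = buf.split(b":", 1)
    n = int(n)
    payload, tail = rest[:n], rest[n:]
    assert tail[:1] == b","          # frame must close correctly
    return payload, tail[1:]

# The magic string is inert when carried inside a frame:
msg = b"::set-env name=EVIL::x"
assert decode(encode(msg))[0] == msg
```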

I've often heard HNers state that Project Zero is unwilling/uninterested in protecting the interests of the vulnerable host, but this seems like an excellent case study where Github, the host, just doesn't think this is a big deal. I think that applies to any host who doesn't patch their vulnerabilities - if they thought it was a big deal, they'd do something about it. Actions speak.

Quoting from the post:

  2020-07-21 Report sent to Github and triaged by Github security team. Disclosure date is set for 2020-10-18.

  2020-08-13 Project Zero requests a status update.

  2020-08-21 Github say that they are still working on a fix and a deprecation plan

  2020-10-01  Github issued an advisory[1] deprecating the vulnerable commands, assigned CVE-2020-15228, and asked users to patch and update their workflows.

  2020-10-12 Project Zero reach out and proactively mentions that a grace period is available if Github wants more time to disable the vulnerable commands.

  2020-10-16 Github requested the additional 14 day grace period, with the hope of disabling the vulnerable commands after 2020-10-19.

  2020-10-16 Project Zero grants grace period, new disclosure date is 2020-11-02.

  2020-10-28 Project Zero reaches out, noting the deadline expires next week. No response is received.

  2020-10-30 Due to no response and the deadline closing in, Project Zero reaches out to other informal Github contacts. The response is that the issue is considered fixed and that we are clear to go public on 2020-11-02 as planned.

  2020-11-01 Github responds and mentions that they won't be disabling the vulnerable commands by 2020-11-02. They request an additional 48 hours, not to fix the issue, but to notify customers and determine a "hard date" at some point in the future.

  2020-11-02 Project Zero responds that there is no option to further extend the deadline as this is day 104 (90 days + 14 day grace extension) and that the disclosure will be today.

I mean, in this case what difference does the Project Zero disclosure make? I've been getting alerts about my Actions build, GitHub has released the functionality necessary to implement the change, and GitHub has had a website explaining the vulnerability for weeks now. This isn't a case where disclosure actually disclosed anything new.

Edit: For reference, this was announced on the GitHub blog a month ago: https://github.blog/changelog/2020-10-01-github-actions-depr...

I would say it seems Github thought it would make a difference, based on their request for 48 more hours.

Why do you think it won't make a difference to them in light of them specifically requesting 48 more hours?

Because of that 2nd to last point:

> Github responds and mentions that they won't be disabling the vulnerable commands by 2020-11-02. They request an additional 48 hours, not to fix the issue, but to notify customers and determine a "hard date" at some point in the future.

The issue isn't really with GitHub not making the fix available, but the issue is that if they turn it off now they will break legions of builds. I think the Project Zero disclosure issue for them then is more for pointy haired manager appearances ("Oh no! Github has a disclosed Project Zero vuln!") than for any real difference in security.

Then break the legion of builds in the name of security.

That seems along the lines of "I can just turn off the ability to log in to prevent account hacking!" level of security thinking.

If your choices are "disable all logins" or "anybody can log into my bank account and make whatever transfers they want", the correct choice is the former. (Obviously I would prefer a third option, where the company actually fixed the login bug sometime during the 104-day lead-up, but that's not the point.)

For some accounts you do exactly that if you have to.

What if we could have both, just by sending an email? Hit compose underlying!

Well, it could hurt their pride to miss the deadline. They might have an internal metric for this too.

Seems a bit inflexible from Project Zero, given that GitHub had already disclosed it anyway. Why didn't they disclose earlier, even? Seems like that part of the program could use some updates; a few things don't make sense here.

> 2020-10-30 Due to no response and the deadline closing in, Project Zero reaches out to other informal Github contacts. The response is that the issue is considered fixed and that we are clear to go public on 2020-11-02 as planned.

> 2020-11-01 Github responds and mentions that they won't be disabling the vulnerable commands by 2020-11-02. They request an additional 48 hours, not to fix the issue, but to notify customers and determine a "hard date" at some point in the future.

Suggests to me that GitHub has no clue whether it's resolved or not. Project Zero were following GitHub's lead. Asked them if the problem was resolved or not, didn't get anything, had to reach out to internal contacts, and then got a conflicting message saying "NO WAIT!"

To be fair, that's a one-sided view of the issue. It will be interesting to see GitHub's take.

I think it kinda doesn't matter. Project Zero had already given them an extra 14 days; GH should have made better use of it. If you're constantly making exceptions to your rules for people, the rules end up being more flimsy and less respected.

Regardless, GH had already disclosed this a month before, so their desire for 2 more days was unnecessary: the cat was already out of the bag, through their own actions.

Indeed, if that timeline is to be believed—and I have no reason to doubt it—it’s pretty damning. GitHub just kept putting it off. When they asked to delay the disclosure indefinitely at the very last second, of course they were told that would be unacceptable.

I dunno. I've worked in many places that ABSOLUTELY had the desire, willingness, money, time, and ability to do something, and were still unable to make that thing happen, for reasons that can't be spoken about.

While it is true that if an entire organization wants something done that it will happen, it is not true that inaction by an organization always signifies unwillingness or inability to act.

It's easy to sit on the sidelines and throw tomatoes at the parties involved. It is another matter entirely when you are hamstrung by internal process or a specific power-tripping and ignorant person in management.

>While it is true that if an entire organization wants something done that it will happen, it is not true that inaction by an organization always signifies unwillingness or inability. It's easy to sit on the sidelines and throw tomatoes at the parties involved. It is another matter entirely when you are hamstrung by internal process or a specific power-tripping and ignorant person in management.

The thing is, major releases and public shaming are pretty much the only things that have a (not guaranteed) chance of moving these processes or stubborn people. Before that, they learn to just delay indefinitely.

I agree. Just be careful when you assign motive to an entire organization when it could be just a single person inside being an ass.

Repeated shaming of this nature should weed those people out, but it won't always have that effect.

If this is being held by a single person (or small number of people), then the problem is really the organization. This should not happen with security issues.

> I agree. Just be careful when you assign motive to an entire organization when it could be just a single person inside being an ass.

Preventing motivations of a single person inside an organization from tanking its products and direction is one of the main jobs for organizational management. Good companies are resistant to fiefdom building and having managers damage their reputation for personal gain.

When people ascribe a motive to an organization nobody thinks that every single person in the organization has that motive. Rather what it means is that the organization's structure and members cause the organization to act in accordance with that motive.

Indeed. And whether it's one person, a group of people, or a departmental "culture", is meaningless. The org allows it to happen, CEO downward, through all the ranks.

It is entirely the corp/org's fault, and yes, that reputation can rub off on you.

I think you're right that this is often the case, and my heartfelt sympathies go to any engineer who struggles to do the right thing in that situation.

But in what sense are these kinds of internal politics NOT "unwillingness or inability", at least at the organizational level?

Organizations where managers are allowed to play political games at the expense of users' security are not to be trusted with any sensitive data, and need to be publicly called out and shamed if there's any hope to effect change.

> But in what sense are these kinds of internal politics NOT "unwillingness or inability", at least at the organizational level?

An example from my past should demonstrate...

A single manager in my past simultaneously:

A) told higher management that something was not possible, and

B) told the org under him that management didn't want that same thing done, and

C) convinced his peers to let him manage this personal pet project alone.

The only person that knew the truth at the time was the lone person who didn't want it done. Result: it didn't happen.

This is extraordinarily common, in my experience, though only this particular person could do it this well.

The entire organization wanted something done except one guy who convinced everyone to let him have the power to stop it without anyone understanding what they were letting him do.

You may argue that this is an organization being unwilling to act via its own structure, and I would counter that this person was very skilled at this type of deception, and succeeded at this type of thing despite the organization above, below, and around him all being pretty much standard.

Upper management's job is to manage. Being "convinced" of something, means they did their job, made a judgement and management call, but got it wrong.

That's their fault too.

It's also the fault of other employees, who did not speak up. You're painting a picture of everyone being incapable of judging things on their own, because someone talked them out of what they knew to be correct.

Whose fault is that?

The org put this guy in charge. He decided to not fix something. He represents the org on this issue. Therefore the org doesn't want it.

Ok so the guy deserves zero blame? Really? 100% org responsibility, got it.

If that were the case, I feel like Github should have responded on 10-28, when Project Zero reached out mentioning that the deadline was approaching. If they asked for another extension then rather than just not responding, maybe Project Zero would have considered being more accommodating?

Even if there exist some really well-intentioned, hamstrung individuals within the organization, I think it's desirable to highlight and potentially shame institutions as a whole that aren't able to effectively mobilize.

What's the viable alternative?

> It is another matter entirely when you are hamstrung by internal process or a specific power-tripping and ignorant person in management.

That still sounds like unwillingness or inability to me.

an organization is not a human, despite what people say.

A single human may be absolutely willing to act, but the organization won't - i.e. they will not be allowed to act by their bosses. Everything else is nitpicking and blame shifting. If someone's willing to do the job "but they have too many other things to do" then they're not willing to do the job. If someone's boss or with authority over them is unwilling to prioritize the work then they're unwilling to do the work as well.

You're assuming the goal of Project Zero is to take an interest or willingness to protect vulnerable hosts. That is not the goal of Google Project Zero. The goal is to find vulnerabilities that impact the stock price of Alphabet competitors, usually Microsoft or Apple.

There's also a more "good faith" take on that:

The software of their competitors is used by billions of people, thus finding and reporting vulns in that software is going to help billions of people by fixing that problem.

If that were actually the case then, just by the numbers, wouldn't they spend all their time fuzzing Android or G Suite or their own codebases as well? Or at least release the reports on them with the same vitriol?

It's like Toyota buying a Ford and finding out that if you rear-end it, it will explode. So Toyota publishes their finding. Then weeks later a third party finds the Prius accelerates by itself.

Maybe if Toyota wasn't so busy trying to screw Ford they would have known about their own glaring issues.

We can't tell whether or not Google lives in a glass house because they keep it proprietary. Therefore we need to assume it's no better than what they bully others for. Nobody handed Google a crown and made them ruler of everyone's code. They just kinda built themselves a castle and started cutting off heads.

> I've often heard HNers state that Project Zero is unwilling/uninterested in protecting the interests of the vulnerable host

I think that's usually people that just vastly prioritize the interests of the vulnerable host compared to the interests of all the users.

To my eyes it's obvious there's a trade-off to be made between protecting the company and protecting the people affected. Too little time is detrimental to one party (sometimes both to some degree), while too much time is definitely detrimental to the other. It's a fine line to walk, but the best thing you can do is be consistent. Nobody is served well if the companies/projects in question work under the assumption that more time will be given because it has to others in the past, and then it isn't, as the company may not have correctly prioritized the fix, and then the public is left vulnerable as well. But to not be consistent leads to abuses of the system where things just don't get fixed.

At this point, I don't think anyone can accuse Project Zero of being inconsistent, and 90 days is a long time to get something fixed if you put the resources towards it that it needs. I have little sympathy for a company that mismanages this process at this point. For an open source project, there's always groups and lists you can go to and ask for help if it seems overwhelming for the project you have. Presumably if it's an important enough project some person or company that cares about it will donate some time. If nobody is willing then my guess is that the project's not that important to people.

There's also the possibility that the problem is so large or so fundamental that fixing it is a herculean task. Maybe in that case people are better off moving to an alternative if they care about the problem. Sometimes things are so bad the best choice is just to jump ship. It sucks for that company or project, but they have no right to my usage, but I do have a right to choose what I want.

I'm impressed github replied at all. I've never received a response on a github support ticket. I don't think they are even monitored.

I’ve sent dozens of support tickets over the decade, and can’t recall ever not receiving a response, including on minor feature requests or bug reports (e.g. once there was this minor UI issue where scrolling causes certain UI elements to jump around on iOS Safari, or something like that). At the very least I would get an acknowledgement that the issue has been filed with the appropriate team.

We must live in alternate universes, or somehow my nothing special personal account (free at first, $7/mo since 2015 or so) is prioritized, or somehow your account is deprioritized.

I have plenty of times actually. I've given lots of feedback for the github android app and gotten great responses. And most of the user reports (spam users etc) get dealt with within a couple days.

I reported a minor GraphQL API issue a while ago, which was actually my fault (incorrect usage), and received a helpful response within 30 minutes.

It's pretty absurd for GitHub to suggest that you should go through multiple steps to disable commands just to log untrusted output. [1] Poor form to expect developers to understand and check for a new way that they need to sanitize their input, rather than GitHub fixing it (possibly in a backwards-incompatible way).

At a minimum they should provide a shell script (`show $XYZ`) and a js function that handles generating those tokens and enabling/disabling workflow commands for you.

[1] https://github.blog/changelog/2020-10-01-github-actions-depr...

IMO at the very least they should have an org-wide option in settings to disable command interpretation.

Yeah, the ability to disable the insecure commands is crucial. That should be the first thing they do.

Probably a lot of people aren’t even using this functionality in their workflows anyway.

I think the only one of these commands that I actually use is setting outputs, and I don’t even use that very often.

There should also be a way to see which commands have been triggered via stdout, so you can at least see what happened if something malicious happens.

Not really surprised tbh.

GitHub Actions, while very flexible, are also prone to all kinds of problems.

E.g. versions of actions on the marketplace are not pinned, so people can republish different code under the same version. So you need to pin to a git commit hash, which is much easier to get wrong and also easier to abuse for social engineering. And that's assuming you trust git commit hashes; if not, you need to fork actions just to make sure the public code you reviewed doesn't get changed without new reviews.
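For illustration, the difference between a mutable tag reference and a pinned commit (the action name and SHA below are made up):

```yaml
steps:
  # Tag refs like @v1 are mutable -- the publisher can re-point them
  # at different code after you've reviewed it:
  - uses: some-org/some-action@v1
  # Pinning to a full commit SHA is immutable, to the extent you
  # trust SHA-1:
  - uses: some-org/some-action@a1b2c3d4e5f6a7b8c9d0e1f2a3b4c5d6e7f8a9b0
```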

I've recently started using GitHub Actions and I was very disappointed to learn how few "native" actions there are. I don't want to introduce yet another avenue of third-party code via the GitHub Actions marketplace.

My GH workflows are a thin layer over the `run` action that runs shell scripts. Not only does my CI not have to rely on third-party actions, it's also extremely portable between CI providers, can be tested locally, and is infinitely more flexible.

(Apart from having run actions, the only other actions I use are the upload/download-artifact ones to share binaries between jobs, and the matrix feature for parallelizing runs.)
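A minimal workflow in that style might look like this (the script path is hypothetical; `actions/checkout` is the one official action most workflows still need):

```yaml
name: ci
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      # All real logic lives in a versioned script in the repo,
      # so it runs identically in CI and on a developer's machine:
      - run: ./ci/run-tests.sh
```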

But to be clear, the kind of vulnerability that the P0 issue is talking about is not necessarily from malicious actions. As the issue itself says, the vuln also happens from untrusted input being processed by a trusted action, like issues and PRs.

This is what every build should be. If I can't build it locally and in CI using the same process, I consider it broken. I tend to use a Makefile, but what you're describing is what I consider best practice.

I did create a couple of simple actions which abstract this a little, but I agree it is best when everything is in the repository and self-contained.

So I have a "run tests" action which just executes .github/run-tests, and it is up to the project-owner to write "make test" in that script, or whatever else they prefer.

It keeps things well-defined, and portable, but avoids the need to have project-specific workflows. I like being able to keep the same .yaml files in all my projects.


Similar thing for building and uploading artifacts, run ".github/build" and specify the glob-pattern for your artifacts:
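The workflow steps behind that pattern presumably look something like this (the commenter's actual actions aren't linked, so the names and inputs here are guesses at the shape):

```yaml
# Every project gets the same generic steps; the project-specific
# logic hides behind two conventional scripts in the repo itself
- name: Run tests
  run: .github/run-tests      # project owner writes "make test" or whatever here

- name: Build
  run: .github/build

- uses: actions/upload-artifact@v2
  with:
    name: artifacts
    path: dist/*              # the glob-pattern for your artifacts
```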


This is something I found when looking at GitHub actions. I don't want to use tons of third party stuff and be vendor locked in. We already moved builds to build a Docker container when we used Travis for this exact reason before testing with CI.

I plan to do something similar soon ;=)

I played around with some actions wrt. commit signing and originally wrote an action to verify commit and tag signatures (in the way I need ;=) ). But now I plan to port it to a more general interface I can call from a GH action.

Yeah the fact that the untrusted inputs can come from PRs and Issues is worrying. I totally had not considered that.

I’m still a bit unclear how that could happen in practice though. So the PR/Issue text would need to be written to stdout?

>I’m still a bit unclear how that could happen in practice though. So the PR/Issue text would need to be written to stdout?

Yes. The issue title (the two actions the P0 issue talks about both do this) or the text, and same for PRs.

Right, so that enables a potential attacker to set an env variable, which then has to be picked up in a later step by another CLI tool that uses the env variable in such a way that its contents get executed as code.

So interpreters like node/ruby/perl etc. could execute the malicious stuff in the env variables. Are there any other CLI tools that could do something dangerous?
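Quite a few: most interpreters honor startup env variables, and the dynamic linker does too, so a poisoned variable turns an innocuous later step into code execution. A sketch of the chain (payload paths hypothetical):

```yaml
# Suppose an earlier step let the attacker smuggle in:
#   ::set-env name=NODE_OPTIONS::--require /tmp/payload.js
# Then a completely innocent later step detonates it:
- name: Build frontend
  run: npm run build   # node starts, honors NODE_OPTIONS, loads the payload

# Analogous vectors: RUBYOPT (-r...) for ruby, PERL5OPT (-M...) for perl,
# and LD_PRELOAD for any dynamically linked binary.
```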

Is your github workflow source available?

The moment I realized how many actions in GitHub required running untrusted code I went back to CircleCI.

By "native", do you mean first-party? Surely there are other third parties besides GitHub that you'd entrust with an API token.

> versions of actions on the market place are not pinned so people can republish different code under the same version

It's a feature, not a bug. When I publish my Actions, I publish them at `v1.2.3`, `v1.2`, and `v1`. Since Action authors using the `@actions/core` API had to update them on 1 Oct, users consuming a `@v1` release tag across their hundreds of repos/Workflows don't need to make any YAML changes at all to get the updated Actions.

It's IMHO a misguided way to achieve that feature.

The way this provides semver-like behaviour is IMHO just a hack, one which requires a bunch of additional work, too.

Instead, when releasing a version to the marketplace, that code should be pinned/copied by GitHub.

Then versions should be resolved by semver to get the effect you mentioned without needing to publish additional redundant tags (which are easy to get wrong).

Then you could just specify an action as `=1.2.3` to be sure to get the code you reviewed at any point in the future, and if you trust the author you use e.g. `^1` to get any `1.x.y`.

Don't get me wrong, the current way does work, is easier for GitHub, and sidesteps the whole "what counts as semver" discussion ;=)

Still, it's IMHO suboptimal: it's really easy to e.g. forget to move one of the tags, and it's also a problem for the "review and pin the reviewed version" approach, as instead of `=1.2.3` you now need to find the commit of that tag and use that, which also means it's no longer obvious in your workflow which version you run without a comment, with all the problems that implies.

Wow. How does this pass muster?

It's the simplest way for GitHub to achieve a system which seems to work well in general.

Then they throw it at the community and hope that any rough edges/problems are overshadowed by the hype around it and by cool things people do with it. Then they can slowly, incrementally fix it with "live" feedback.

Ironically while this might seem bad long term it can potentially lead to a better end-product from both the view of github and the users.

The only problem is that you run the risk of ossification, i.e. no longer being able to fix a problem because too many people rely on it being that way.

Wow this looks quite serious. I hope that github will at least provide a way to turn these commands off entirely.

I hope they also fix the stupid authwalling of the execution logs that happens for public repos. The code is public, but the logs need an account? There is barely any security benefit in doing this for public repos, but it does make it harder for me to check PRs that I make to open source projects while not being logged in (e.g. from my phone).

At their scale I do believe there’s a benefit to authenticating to see logs: a lot of people scrape GitHub for secrets. CI logs are at high risk for user error, errors where a user unintentionally marks something as non-secret when it should’ve been secret. Putting these logs behind auth feels like an easy filter for some scraping.

Yeah, definitely. When I was at GitHub we were seeing secrets getting lifted from public pushes and tried within 7 seconds or so, if I recall correctly, and this was five years ago. This was a big reason why there’s a real scanning API now for service providers to be informed if a secret leaks.

By the time a human discovers their mistake it’s usually far too late.

That's a good point I guess. Per default, secrets are redacted by github in the logs, but some might slip through, e.g. if only a part of them is printed. Doesn't make me really happy though, I don't want to have to use an account :).

The code is public but the secrets are obviously not, and if they are ever put in the logs then that's a security issue. Even if the action isn't logging credentials directly, it may be logging privileged information (e.g. S3 bucket names or contents) that can only be accessed with privileged credentials. Therefore it makes sense to keep the logs private. Maybe there should be a setting for Action owners where they can set logs public if they think it would help contributors.

Interesting that Project Zero considers this “High” severity, and GitHub says “Moderate”.

Legitimate disagreement between security teams, or is GitHub applying spin and trying to downplay the severity, in light of there being no easy fix?

and the CVE got registered as "Low"


So what is it? High, Moderate, or Low? Super confusing

I’ve found that consistent CVSS scoring is practically impossible, both because many criteria are defined in ambiguous or confusing ways, and because rating any obvious knock-on effects is proscribed by the official scoring rules[0][1].

For example, if a software component has a vulnerability which allows an attacker to steal admin credentials by exfiltrating a password file, the fact that this would allow the attacker to then have admin access to do whatever they wanted in other software that shares credentials doesn’t matter when calculating the base score of the vulnerability. (As a user, you would need to calculate the “environmental score” in the calculator[2] to decide what your score is given whatever configuration you/your company uses.)

Additionally, if some software enables privilege escalation at the OS level by misusing APIs, this escalation is not a “scope change” unless the software tried to create a separate security authority for itself by having some form of sandbox or authentication, so a vulnerability of this kind in software that has no added security mechanisms at all will be scored lower than a vulnerability in software which has implemented some additional access control which was bypassed.

I’m not saying that the limits are nonsensical, since without them you’d end up with a whole lot of Critical base scores, but in my limited experience the CVSS base score is pretty worthless when it comes to getting a sense of how bad a vulnerability actually is, and I don’t know how security analysts cope with this system.

[0] https://www.first.org/cvss/v3.0/user-guide

[1] https://www.first.org/cvss/v3.0/examples

[2] https://www.first.org/cvss/calculator/3.0

If you look at how GitHub operates wrt. GitHub releases or GitHub Actions, it becomes clear (to me) that they have very different standards for certain security aspects than me or e.g. Project Zero.

So I would argue it's a case of having less strict standards wrt. security vulnerabilities which require you to pull in corrupted (or very careless) 3rd party code.

I’m not happy with the security on GitHub actions.

The current options I have for my orgs is to either allow all actions, allow actions hosted in the org, or whitelist specific actions by name.

It’s not clear to see actions and any testing or certification they may have. Even github built actions might be under an individual’s account. So it takes quite a bit of effort to check out an action to see if it’s safe or dangerous.

And I worry about actions a little more because they do things like copy files, deploy systems, and other things where I wouldn’t want rogue code interjecting malware.

Currently I require all actions to be hosted in the org so devs have forked actions into our org. That at least makes a developer vouch for a particular version and makes it harder for an update to that action to break something.

I think they need an equivalent of “gold actions” endorsed by GitHub. Or a vetted repo like pypi or cran or something else that actually has some controls around versions and changes.

I have the similar concerns as you. I still use actions, but I am very careful about reading through the code of any actions I use.

Your idea to use forked versions of actions is interesting.

Another security concern I have with GitHub is that when you give permission for a 3rd party app on your repo, you have to give access across all repos in your account.

I’m really surprised it’s not possible to give access to a specific repo. It’s bizarre to me that it’s setup like this.

GitHub’s security scopes are the closest that I’ve come to dropping them.

When I create a token, I have to grant access to all repos or none. That’s crazy since I admin some stuff any token I have for work means it’s a risk to everything.

Also, there’s no read-only scope for some admin functions. So to read private repo metadata for simple auditing purposes I have to grant access to edit them as well. That’s crazy.

Same for repo access. The fact that I can’t create a read only token for a repo is annoying.

I think this is a legacy from them not having a nuanced security model and it’s annoying.

The only way around this now is to create different user accounts, and that's annoying.

Yeah the way token permissions are implemented totally baffles me every time. I never know if I’m creating a token for read or write or both.

I suggest you fork any actions you depend on.

Given that actions can reference other actions, is this even feasible? You'd have to go and fork all deps, and then if you ever want to pull upstream, repatch.

I don’t think that’s the case. As far as I have read, a multi-step action that you create cannot use a community action.

I was wanting to do this so as to be able to break up some lengthy workflows into reusable pieces, so I figured I’d create some actions, but when I tried to use, for example, the standard checkout action inside my action, it failed. I emailed support and they said that actions inside actions were not supported.

Perhaps you are referring to something else? A different type of action?

I’m wrong. I naively assumed it worked a different way.

That’s what I do. So that is similar to me just scripting out admin actions myself. Not the end of the world, but hopefully will get better in time.

I created this ticket six months ago about the inability to mask secrets that are step outputs (like from Terraform) and them subsequently leaking to logs; no action taken yet...

If anyone has influence to raise visibility of this (Twitter, GitHub, whatever) that would be appreciated :)


Running CI against arbitrary code is extremely difficult to do securely and I'm not sure anyone does more than a token effort. Test infrastructure is not designed to run against malicious code.

One of the examples dumps environment variables which may contain secrets by injecting code through a github action. What's stopping code inside a PR from sending the same info over email?

I believe CI on pull requests runs without the secrets, to avoid precisely that issue.

Yes, the problem is that GitHub did not seem to consider that “malicious input” can include any content that is provided and parsed in some way. Unfortunately, all of stdout is parsed, and often includes things like issue titles, descriptions, commit messages, etc.

Is anyone who’s used Actions surprised? They launched VERY unpolished and this sort of lack of care seems par for the course, tbh.

I am in the process of migrating a client’s CI to GA and have seen rough edges.

One example: there is no native way to serialize workflow execution. This means that if you run two commit-triggered workflows in quick succession, you can have tests pass that should not. (They run against the wrong checked-out code!)

I had this happen in front of the client, where it appeared a unit test of assertTrue(False) passed! It was so undermining of GA I had to chase down the exact status and prove ability to avoid (using Turnstyle) to the client. I wrote an email to the GA team specifically telling this story and expressing my concern.
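For reference, Turnstyle is itself a third-party marketplace action that polls for other in-progress runs; a typical usage (version tag illustrative, pin a reviewed SHA in practice) is a step placed before anything order-sensitive:

```yaml
- name: Wait for other workflow runs
  uses: softprops/turnstyle@v1
  env:
    GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```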

Another is scheduled actions, where even if you specify a cron, the action has a window of several minutes when a GitHub runner will execute it, and there are cases where the window is missed and the action is not run!

This isn’t documented, though GitHub recently made the docs community-editable, possibly realizing there is too much to do.

All that said, the product is still relatively new, and even with these rough edges it is __really__ good.

Having CI definition alongside code, with free-ish VMs available at all times, (except when it goes down,) and results all inside GitHub is amazing.

Also, the self-hosted runners are amazing, as they allow some really simple matrix definition to run CI on custom HW and network environments.

If it weren’t for all the recent negative attention, I would have said GH was doing really well, inclusive of GitHub Actions.

Are you joking about the self hosted runners?

The fact that you can’t:

1) Apply any security policy for runners (e.g. require a label before running a PR)

2) Have runners quit after a single job, so you can build ephemeral runners

3) Build your own runner against an open API

... means that self-hosted runners are a non-starter for anything open source. It’s like they had to try hard to make the architecture that obtuse and closed. It’s unclear if it’s really poor design or an active attempt to somehow drive the business.

The software I’ve been working with for self hosted runners is closed source.

I’ve had a (largely) good experience with them so far.

I've been overall disappointed with GH Actions. Their YAML format lacks the ability to reuse common bits of configuration, so you just need to copy/paste it all over. Our project has minitest tests, rspec tests, and another type of tests. All the setup/boilerplate needs to be copied to each job section and kept in sync.

Each test run depends on a huge number of 3rd party services being up. It pulls in Docker images (Docker Hub) and Ubuntu packages (wherever those come from). It requires various Azure services to be running (to use their cache and artifacts actions). It depends on GitHub being up, of course. Test runs fail multiple times per month because one of these services is down at the time the tests run.

Until recently their cache action was often slower than just rebuilding the assets from scratch without the cache.

The only compelling reason to switch to GH Actions from any other CI service I've used is to reduce costs. You get what you pay for.

I agree that the YAML format can be a pain. Both because of the copy/paste stuff you mentioned but also just validating syntax can be time consuming.

There is an offline workflow tester I've tried but it is not close enough to the real thing to add that much value.

I've also struggled at times with how data moves around. There are docs and even example actions, but the overall state of education on doing cool stuff with GA is pretty weak.

I've also struggled with some of the packages and services, although some of the pain I've experienced is just software and service dependency management in general, and that's a big challenge unrelated to GA.

Speed is a real consideration, though self-hosted runners can potentially solve that problem for you.

Rather than a get what you pay for take, I feel more like it is still early for the product. The visibility is high because it is Github, but it is still getting a lot of things worked out.

For me, GA is too conveniently integrated to Github to ignore.

I can confirm that the scheduled actions definitely don’t always run on time. Sometimes up to 10 mins late. According to support this happens because the system gets overloaded sometimes.

I had some content slip through into my main site a day early because the job that rebuilds the site at the end of the day was 10 mins late, and in the interim new content was added so when the job ran it got lumped in.

I will probably have to re-architect the whole workflow at some point but it’s good enough for now.

GH keeps rolling out more features for Actions well after the initial release, adding things like Organization wide Secrets. Overall I'm quite happy with the progress.

I've run about ten thousand builds now and find the biggest issues to be the mysterious cancelling of jobs (especially on Windows/macOS hosts), the crashing of jobs across the board during GH outages, and the intermittent long pauses (e.g. 15 min) between jobs in a multi-job build when nothing appears to happen (scheduling delays). GH could have more clarity on the status board, since I'll see many jobs fail, then check the status page to see everything is green. :/

I remember thinking "yeah this isn't going to be safe..." when I saw how Github actions parsed those commands. It just seemed like a huge security vulnerability waiting to happen.

Glad to know my gut still sort-of works. Not sure who thought this was a good idea but I can think of a handful of other ways off the top of my head that this could have been implemented in a more secure way.

My employer recently decided to move from self-hosted Jenkins servers living behind a VPN to GitHub Actions. I wonder how heavily vulnerabilities like this were weighed against the reduced maintenance burden.

Octopus Deploy and TeamCity both parse output looking for commands in the same way (called Service Messages). I believe it is a common pattern in all CI servers.

The issue here doesn’t seem to be parsing the output, but what the parsed commands can do.

In Octopus there is no way to use them to run commands or set environment variables though. I can’t speak for TeamCity.

Update: TeamCity's documentation page about service messages:


The commands are quite specific though: test results, build numbers, etc. no generic commands like running commands or setting environment variables.

A small plug: Toast (https://github.com/stepchowfun/toast) has already introduced a patch for this vulnerability (CVE-2020-15228), so arbitrary user code running inside Toast will not be able to trigger this vulnerability when run via a GitHub Action. I highly recommend using Toast for your CI jobs (with or without GitHub Actions), not just for this security fix but also because it also allows you to run your CI jobs locally. It just runs everything inside Docker containers, so it works the same in CI, on your laptop, on your coworker's machine, etc.

As the runner process parses every line printed to STDOUT looking for workflow commands, every Github action that prints untrusted content as part of its execution is vulnerable.

What could possibly go wrong?

I’ve always worried about env leakage, especially in npm packages. GHA does a decent job of masking, but there is no stopping arbitrary code. One thing they could do is check log output against known GHA secrets and run an additional masking pass on the output string.

To put this into context, everything about this attack is logged in the open, I believe. If it shows up in log output, you’ll know who did it, and know that everyone can see it, so you can proceed with rotating secrets promptly and reporting the hack to GitHub.

Hasn’t this already been addressed in this post (1st Oct)?

GitHub Actions: Deprecating set-env and add-path commands


GitHub has been nagging about needing to fix this for a week or two.

Just in case anyone is interested here is what the fix to rclone's build workflow looked like:


Love rclone, thanks for all your hard work!

Does anyone actually trust their CI for running PRs on a public repo (in repo or forks)? I was taught from my first forays into commercial software never to trust CI's until after a review (even if it's another coworker).

Wonder when the security industry will come up with a better rating system for vulnerabilities.

The rating for this one is all over the place...

Every time I visit bugs.chromium.org in Firefox, something weird happens with the fonts and text selection. This doesn't happen in Chromium.

Are they publicly disclosing/warning about a bug in FF, or is it just some other problem?

How does your font issue relate to a GitHub security issue?

No this is not related to that. I just posted a comment because many people will visit the site I mentioned, probably using Firefox.

I am able to reproduce this 'bug'. It's like mouse hijacking. Not able to select text on bugs.chromium.org when using Firefox.

Great post. Thanks for the info.

https://about.gitlab.com/releases/2020/11/02/security-releas... GitLab Security Release: 13.5.2, 13.4.5, and 13.3.9

I don't see anything to indicate this is related?
