Enterprising thieves figured out that one of the things you could do was redefine functions, in particular the 'RemoveExtraSpaces' function that gets run against every chat line. If you set RemoveExtraSpaces=RunScript you've told WoW to execute every chat message displayed in your terminal.
Attackers just had to trick you into writing "/run RemoveExtraSpaces=RunScript" into your terminal, and after that all they had to do was send you code in chat and it would execute. They basically got full control of anything Lua can do in your game -- including sending messages to your friends asking them to also run the attack command.
So you would send people a message with a series of commands to try to exploit commonly used triggers in order to make people give you all their items.
What does a competing and not-wanting-to-infringe-a-patent modem company do? They just don't implement the delay, and ignore the direction of the traffic while they're at it. Problem solved! (head-to-desk, repeatedly)
I believe you're correct, although it's been ~25 years...
Which explains why the command above isn't exactly correct, IIRC. What you had to do was send a CTCP ECHO containing the "+++ATH0" to a channel -- or an individual user, but sending it to an entire channel full of people was a helluva lot more fun!
Anyways, I don't recall the exact command nowadays but it was
/CTCP #channel ECHO Goodbye +++ATH0
As long as the "victim's" IRC client wasn't ignoring those CTCP commands, it would "echo" the text and, as expected, command their modem to hang up.
Perhaps it shouldn't have worked... but I promise you that it absolutely did.
(Also, you could do the same thing by putting the "command" into the payload of an (ICMP) "ping" -- and that's a completely different thing from the "ping of death" that was discovered in that same era.)
I'm not sure the vulnerable modems would ignore the delay because of patents. Would love if you have any reference. I remember a Smart Link softmodem respected the delay, but an Intel softmodem did not. Always assumed it was an implementation bug on the part of the Intel one.
Of course you needed to have the nous to do this in the first place, and an IRC server that showed users' host names/IP addresses -- I fear that I'm showing my age!
People were posting those crashwords in chats, on MUDs, or even as usernames. Some places where a lot of blind people hung out even contained specific filters for those.
There were also programs that didn't escape SAPI5 tags properly, so, by posting particularly clever bits of XML, you could change parameters like the pitch, speech rate, volume, or even the currently active voice, and make users, particularly younger users, scared of weird ghosts in their computers.
You'd get much more bang for the buck getting access to the whole account.
When I saw the announcement a month ago I assumed that GitHub had fully disabled the mechanism of reading commands from stdout, but looking at the current docs it seems that they disabled only the 5 most easily exploitable of the 14 commands that can be invoked via stdout. This seems like the typical "we fixed it, your POC doesn't work anymore" while the root cause remains. As can sometimes be seen with hardware vendors: "we fixed the hard-coded backdoor password, now there is a new hard-coded backdoor password".
Is the general consensus that `$GITHUB_ENV` offers less attack surface, despite not being scoped to a step?
Some guidance on this from GitHub's side would be great. Also appreciated: a way of turning off magic stdout strings in the YAML, a way to set outputs via a more robust mechanism, and/or a way to scope which steps can modify which environment variables. I get not wanting to break existing CI runs, but giving the people who want it an option to harden their CI proactively would be great. I don't really care if this would break some existing actions, either -- seems fine if it's opt-in?
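For anyone wondering what the difference looks like in practice, here's a rough sketch (the variable name and value are just placeholders):

    # Deprecated: the runner parses workflow commands out of the step's stdout.
    - run: echo "::set-env name=FOO::bar"

    # Replacement: append NAME=value lines to the file GitHub points at via $GITHUB_ENV.
    - run: echo "FOO=bar" >> "$GITHUB_ENV"

    # Later steps see FOO either way, but the file-based form isn't triggered by
    # arbitrary stdout, which is why it's considered less of an attack surface.
    - run: echo "$FOO"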
I’ve been using $GITHUB_ENV for setting environment variables, and though I read about the stdout commands I never got around to using them.
It’s one of those “but of course” moments, realising that any CLI tool can trigger commands with its stdout.
As you mentioned, setting outputs is rather crucial, and I guess using stdout is the only way to do that.
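For example, something like this seems to be the only supported way to pass an output to a later step (the names are made up):

    - id: build
      run: echo "::set-output name=version::1.2.3"

    - run: echo "Building version ${{ steps.build.outputs.version }}"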
There are definitely a lot of security things to consider when using actions. I read the code of any community actions I use but, especially after reading this alert, I feel like there are a lot of places where malicious code could cause damage.
They will patch a PoC, but how will we know the root cause is patched? The service's parameters can be fuzzed by someone to check for injection (if they can).
I don't know what the root cause is.
Tools that you run in actions can trigger the stdout commands (setting env variables and a few others) by printing to the console.
To do some damage though they need a few other things to happen. The example given is a node process that is run after the attacker has set an env variable with arbitrary code, which node reads and executes.
Additionally, since some actions print the contents of PRs and issues to the console, these can effectively do the same thing. So the attacker doesn’t need to control a CLI tool’s stdout; they just need to create a PR/issue with the malicious code. If an automated workflow writes the PR/issue content to stdout, the malicious code in it gets executed.
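As a rough sketch of the mechanism (not a working exploit; the script name is made up), a workflow like this used to be enough before set-env was disabled:

    on: issues
    jobs:
      log-issue:
        runs-on: ubuntu-latest
        steps:
          # If the issue title is a line like
          #   ::set-env name=NODE_OPTIONS::...
          # the runner used to parse the echoed line as a workflow command and
          # export that variable into every subsequent step.
          - run: echo "${{ github.event.issue.title }}"
          # Any later step that starts node then picks up the attacker's
          # NODE_OPTIONS and loads whatever it points at.
          - run: node ./scripts/triage.js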
Since any action can trigger commands by writing to stdout, you have to trust that all the actions you use don’t print malicious stuff to stdout. They might not be malicious themselves, but they might carelessly print untrusted input like the contents of PRs/issues.
It’s not made any easier by the fact that the workflows are in the repo for the attacker to read and study, so they can see exactly what will happen before their attack.
This whole method of triggering commands via stdout is very dodgy: there is no way to turn it off or to narrow the scope of what gets executed, and it’s not easy to see what happened after the fact either.
At the minute you just have to be very diligent at checking that your actions aren’t printing stupid stuff to stdout that could cause execution of malicious code later in your workflow.
If it’s only the setting of env variables that causes issues, then they might be able to just turn that off because there is another safer mechanism (writing to $GITHUB_ENV) to set env variables.
There might very well be other attack surfaces though that I haven’t thought of.
The issue is in the handling of the connection.
This is the issue I'm having right now with the wrapping of secure binary streams of untrusted data with authentic transport control, e.g. TLS 1.2 over SSL.
"Text as a cross-application data transfer format" and the Unix Philosophy of "chains of small programs that each do a single thing well" are what lead to exploits like these.
I don't know what the proper system design is, but it isn't these two things. Disagree all you like.
Quoting from the post:
2020-07-21 Report sent to Github and triaged by Github security team. Disclosure date is set for 2020-10-18.
2020-08-13 Project Zero requests a status update.
2020-08-21 Github say that they are still working on a fix and a deprecation plan.
2020-10-01 Github issued an advisory deprecating the vulnerable commands, assigned CVE-2020-15228, and asked users to patch and update their workflows.
2020-10-12 Project Zero reaches out and proactively mentions that a grace period is available if Github wants more time to disable the vulnerable commands.
2020-10-16 Github requested the additional 14 day grace period, with the hope of disabling the vulnerable commands after 2020-10-19.
2020-10-16 Project Zero grants grace period, new disclosure date is 2020-11-02.
2020-10-28 Project Zero reaches out, noting the deadline expires next week. No response is received.
2020-10-30 Due to no response and the deadline closing in, Project Zero reaches out to other informal Github contacts. The response is that the issue is considered fixed and that we are clear to go public on 2020-11-02 as planned.
2020-11-01 Github responds and mentions that they won't be disabling the vulnerable commands by 2020-11-02. They request an additional 48 hours, not to fix the issue, but to notify customers and determine a "hard date" at some point in the future.
2020-11-02 Project Zero responds that there is no option to further extend the deadline as this is day 104 (90 days + 14 day grace extension) and that the disclosure will be today.
Edit: For reference, this was announced on the GitHub blog a month ago: https://github.blog/changelog/2020-10-01-github-actions-depr...
Why do you think it won't make a difference to them in light of them specifically requesting 48 more hours?
> Github responds and mentions that they won't be disabling the vulnerable commands by 2020-11-02. They request an additional 48 hours, not to fix the issue, but to notify customers and determine a "hard date" at some point in the future.
The issue isn't really that GitHub hasn't made a fix available; it's that if they turn the old commands off now they will break legions of builds. I think the Project Zero disclosure issue for them is then more about pointy-haired-manager appearances ("Oh no! Github has a disclosed Project Zero vuln!") than about any real difference in security.
> 2020-11-01 Github responds and mentions that they won't be disabling the vulnerable commands by 2020-11-02. They request an additional 48 hours, not to fix the issue, but to notify customers and determine a "hard date" at some point in the future.
Suggests to me that GitHub has no clue whether it's resolved or not. Project Zero were following GitHub's lead. Asked them if the problem was resolved or not, didn't get anything, had to reach out to internal contacts, and then got a conflicting message saying "NO WAIT!"
Regardless, GH had already disclosed this a month before, so their desire for 2 more days was unnecessary: the cat was already out of the bag, through their own actions.
While it is true that if an entire organization wants something done that it will happen, it is not true that inaction by an organization always signifies unwillingness or inability to act.
It's easy to sit on the sidelines and throw tomatoes at the parties involved. It is another matter entirely when you are hamstrung by internal process or a specific power-tripping and ignorant person in management.
The thing is, major releases and public shaming are pretty much the only things that have a (not guaranteed) chance of moving these processes or stubborn people. Before that, they learn to just delay indefinitely.
Repeated shaming of this nature should weed those people out, but it won't always have that effect.
Preventing motivations of a single person inside an organization from tanking its products and direction is one of the main jobs for organizational management. Good companies are resistant to fiefdom building and having managers damage their reputation for personal gain.
It is entirely the corp/org's fault, and yes, that reputation can rub off on you.
But in what sense are these kinds of internal politics NOT "unwillingness or inability", at least at the organizational level?
Organizations where managers are allowed to play political games at the expense of users' security are not to be trusted with any sensitive data, and need to be publicly called out and shamed if there's any hope to effect change.
An example from my past should demonstrate...
A single manager in my past simultaneously:
A) told higher management that something was not possible, and
B) told the org under him that management didn't want that same thing done, and
C) convinced his peers to let him manage this personal pet project alone.
The only person that knew the truth at the time was the lone person who didn't want it done. Result: it didn't happen.
This is extraordinarily common, in my experience, though only this particular person could do it this well.
The entire organization wanted something done except one guy who convinced everyone to let him have the power to stop it without anyone understanding what they were letting him do.
You may argue that this is an organization being unwilling to act via its own structure, and I would counter that this person was very skilled at this type of deception, and succeeded at this type of thing despite the organization above, below, and around him all being pretty much standard.
That's their fault too.
It's also the fault of other employees, who did not speak up. You're painting a picture of everyone being incapable of judging things on their own, because someone talked them out of what they knew to be correct.
Whose fault is that?
What's the viable alternative?
That still sounds like unwillingness or inability to me.
A single human may be absolutely willing to act, but the organization won't - i.e. they will not be allowed to act by their bosses. Everything else is nitpicking and blame shifting. If someone's willing to do the job "but they have too many other things to do" then they're not willing to do the job. If someone's boss or with authority over them is unwilling to prioritize the work then they're unwilling to do the work as well.
The software of their competitors is used by billions of people, thus finding and reporting vulns in that software is going to help billions of people by fixing that problem.
It's like Toyota buying a Ford and finding out that if you rear-end it, it will explode. So Toyota publishes their findings. Then weeks later a third party finds the Prius accelerates by itself.
Maybe if Toyota wasn't so busy trying to screw Ford they would have known about their own glaring issues.
We can't tell whether or not Google lives in a glass house because they keep it proprietary. Therefore we need to assume it's no better than what they bully others for. Nobody handed Google a crown and made them ruler of everyone's code. They just kinda built themselves a castle and started cutting off heads.
I think that's usually people that just vastly prioritize the interests of the vulnerable host compared to the interests of all the users.
To my eyes it's obvious there's a trade-off to be made between protecting the company and protecting the people affected. Too little time is detrimental to one party (sometimes both to some degree), while too much time is definitely detrimental to the other. It's a fine line to walk, but the best thing you can do is be consistent. Nobody is served well if the companies/projects in question work under the assumption that more time will be given because it has to others in the past, and then it isn't, as the company may not have correctly prioritized the fix, and then the public is left vulnerable as well. But to not be consistent leads to abuses of the system where things just don't get fixed.
At this point, I don't think anyone can accuse Project Zero of being inconsistent, and 90 days is a long time to get something fixed if you put the resources towards it that it needs. I have little sympathy for a company that mismanages this process at this point. For an open source project, there's always groups and lists you can go to and ask for help if it seems overwhelming for the project you have. Presumably if it's an important enough project some person or company that cares about it will donate some time. If nobody is willing then my guess is that the project's not that important to people.
There's also the possibility that the problem is so large or so fundamental that fixing it is a herculean task. Maybe in that case people are better off moving to an alternative if they care about the problem. Sometimes things are so bad the best choice is just to jump ship. It sucks for that company or project, but they have no right to my usage, but I do have a right to choose what I want.
We must live in alternate universes, or somehow my nothing special personal account (free at first, $7/mo since 2015 or so) is prioritized, or somehow your account is deprioritized.
At a minimum they should provide a shell script (`show $XYZ`) and a js function that handles generating those tokens and enabling/disabling workflow commands for you.
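The disabling part already exists in the runner as the stop-commands escape; it's just manual and easy to forget. Roughly (the token and the tool name are made-up placeholders, and the token should really be random/unguessable):

    - run: |
        echo "::stop-commands::my-random-token"
        ./run-untrusted-tool            # its stdout is no longer parsed for commands
        echo "::my-random-token::"      # resume workflow command processing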
Probably a lot of people aren’t even using this functionality in their workflows anyway.
I think the only one of these commands that I actually use is setting outputs, and I don’t even use that very often.
There should also be a way to see which commands have been triggered via stdout, so you can at least see what happened if something malicious happens.
GitHub Actions, while very flexible, are also prone to all kinds of problems.
E.g. versions of actions on the marketplace are not pinned, so people can republish different code under the same version. So you need to pin to a git commit hash, which is much easier to get wrong and also easier to abuse for social engineering. And that's assuming you trust git commit hashes; if not, you need to fork actions just to make sure the public code you reviewed doesn't get changed without a new review.
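i.e. the difference between these two (the hash is a placeholder, not a real commit):

    # Floating tag: you get whatever the tag points at tomorrow.
    - uses: actions/checkout@v2

    # Pinned: exactly the code you reviewed, but unreadable at a glance.
    - uses: actions/checkout@0123456789abcdef0123456789abcdef01234567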
(Apart from having run actions, the only other actions I use are the upload/download-artifact ones to share binaries between jobs, and the matrix feature for parallelizing runs.)
But to be clear, the kind of vulnerability that the P0 issue is talking about is not necessarily from malicious actions. As the issue itself says, the vuln also happens from untrusted input being processed by a trusted action, like issues and PRs.
So I have a "run tests" action which just executes .github/run-tests, and it is up to the project-owner to write "make test" in that script, or whatever else they prefer.
It keeps things well-defined, and portable, but avoids the need to have project-specific workflows. I like being able to keep the same .yaml files in all my projects.
Similar thing for building and uploading artifacts, run ".github/build" and specify the glob-pattern for your artifacts:
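Roughly like this (the artifact name and glob are whatever the project wants, and the upload-artifact version is just an example):

    - name: Build
      run: .github/build

    - name: Upload artifacts
      uses: actions/upload-artifact@v2
      with:
        name: binaries
        path: dist/*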
I played around with some actions wrt. commit signing and originally wrote an action to verify commit and tag signatures (in the way I need ;=) ). But now I plan to port it to a more general interface I can call from a GH action.
I’m still a bit unclear how that could happen in practice though. So the PR/Issue text would need to be written to stdout?
Yes. The issue title (the two actions the P0 issue talks about both print it) or the body text, and the same for PRs.
So interpreters like node/ruby/perl etc could execute the malicious stuff in the env variables. Are there any other cli tools that could do something dangerous?
It's a feature, not a bug. When I publish my Actions, I publish them at `v1.2.3`, `v1.2`, and `v1`. Since Action authors using the `@actions/core` API had to update them on 1 Oct, users consuming a `@v1` release tag across their hundreds of repos/Workflows don't need to make any YAML changes at all to get the updated Actions.
The way this provides semver-like behaviour is IMHO just a hack, one which requires a bunch of additional work, too.
Instead, when releasing a version to the marketplace, that code should be pinned/copied by GitHub.
Then versions should be resolved by semver to get the effect you mentioned without needing to publish additional redundant tags (which are easy to get wrong).
Then you could just specify an action as `=1.2.3` to be sure to get the code you reviewed at any point in the future, and if you trust the author you could use e.g. `^1` to get any `1.x.y`.
Don't get me wrong, the current way does work, is easier for GitHub, and sidesteps the whole "what counts as semver" discussion ;=)
Still, it's IMHO suboptimal. It's really easy to e.g. forget to move one of the tags, and it's also a problem for the "review and pin the reviewed version" approach: instead of "=1.2.3" you now need to find the commit of that tag and use that, which also means that in your workflow it's no longer obvious without comments which version you run, with all the problems that implies.
Then they throw it at the community and hope that any rough edges/problems are overshadowed by the hype around it and by cool things people do with it. Then they can slowly, incrementally fix it with "live" feedback.
Ironically, while this might seem bad, long term it can potentially lead to a better end product from the point of view of both GitHub and the users.
The only problem is that you run the risk of ossification, i.e. no longer being able to fix a problem because too many rely on it being that way.
I hope they also fix the stupid authwalling of the execution logs that happens for public repos. The code is public, but the logs need an account? There is barely any security benefit in doing this for public repos, but it does make it harder for me to check PRs that I make to open source projects while not being logged in (e.g. from my phone).
By the time a human discovers their mistake it’s usually far too late.
Legitimate disagreement between security teams, or is GitHub applying spin and trying to downplay the severity, in light of there being no easy fix?
So what is it? High, Moderate, or Low? Super confusing
For example, if a software component has a vulnerability which allows an attacker to steal admin credentials by exfiltrating a password file, the fact that this would allow the attacker to then have admin access to do whatever they wanted in other software that shares credentials doesn’t matter when calculating the base score of the vulnerability. (As a user, you would need to calculate the “environmental score” in the calculator to decide what your score is given whatever configuration you/your company uses.)
Additionally, if some software enables privilege escalation at the OS level by misusing APIs, this escalation is not a “scope change” unless the software tried to create a separate security authority for itself by having some form of sandbox or authentication, so a vulnerability of this kind in software that has no added security mechanisms at all will be scored lower than a vulnerability in software which has implemented some additional access control which was bypassed.
I’m not saying that the limits are nonsensical, since without them you’d end up with a whole lot of Critical base scores, but in my limited experience the CVSS base score is pretty worthless when it comes to getting a sense of how bad a vulnerability actually is, and I don’t know how security analysts cope with this system.
So I would argue it's a case of having less strict standards wrt. security vulnerabilities which require you to pull in corrupted (or very careless) 3rd party code.
The current options I have for my orgs are to either allow all actions, allow only actions hosted in the org, or whitelist specific actions by name.
It’s not easy to see which actions have any testing or certification. Even GitHub-built actions might be under an individual’s account. So it takes quite a bit of effort to check out an action and see whether it’s safe or dangerous.
And I worry about actions a little more because they do things like copy files, deploy systems, and other things where I wouldn’t want rogue code interjecting malware.
Currently I require all actions to be hosted in the org so devs have forked actions into our org. That at least makes a developer vouch for a particular version and makes it harder for an update to that action to break something.
I think they need an equivalent of “gold actions” endorsed by GitHub. Or a vetted repo like pypi or cran or something else that actually has some controls around versions and changes.
Your idea to use forked versions of actions is interesting.
Another security concern I have with GitHub is that when you give permission for a 3rd party app on your repo, you have to give access across all repos in your account.
I’m really surprised it’s not possible to give access to a specific repo. It’s bizarre to me that it’s setup like this.
When I create a token, I have to grant access to all repos or none. That’s crazy: since I admin some stuff, any token I have for work is a risk to everything.
Also, there’s no read-only scope for some admin functions. So to read private repo metadata for simple auditing purposes I have to grant access to edit them as well. That’s crazy.
Same for repo access. The fact that I can’t create a read only token for a repo is annoying.
I think this is a legacy from them not having a nuanced security model and it’s annoying.
The only way around this now is to create different user accounts, and that’s annoying.
I was wanting to do this so as to be able to break up some lengthy workflows into reusable pieces, so I figured I’d create some actions, but when I tried to use for example the standard checkout action, inside my action, it failed. I emailed support and they said that actions inside actions was not supported.
Perhaps you are referring to something else? A different type of action?
If anyone has influence to raise visibility of this (Twitter, GitHub, whatever) that would be appreciated :)
One of the examples dumps environment variables which may contain secrets by injecting code through a github action. What's stopping code inside a PR from sending the same info over email?
One example is there is no native way to serialize workflow execution. This means if you run two commit-based triggered actions in quick succession, you can have tests pass that should not. (They run against the wrong checked out code!)
I had this happen in front of the client, where it appeared a unit test of assertTrue(False) passed! It was so undermining of GA that I had to chase down the exact status and prove to the client that it could be avoided (using Turnstyle). I wrote an email to the GA team specifically telling this story and expressing my concern.
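(For reference, the workaround is roughly a step like this at the front of the job -- going from memory of the turnstyle README, so double-check the inputs:

    - name: Wait for other runs to finish
      uses: softprops/turnstyle@v1
      env:
        GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

It just polls until no other run of the same workflow is in progress.)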
Another is scheduled actions, where even if you specify a cron, the action has a window of several minutes when a GitHub runner will execute it, and there are cases where the window is missed and the action is not run!
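For example, a schedule like this is best-effort rather than exact (the cron itself is standard syntax):

    on:
      schedule:
        - cron: '0 0 * * *'   # "midnight", but in practice the run can start
                              # several minutes later, or occasionally not at all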
This isn’t documented, though GitHub recently made the docs community-editable, possibly realizing there is too much to do.
All that said, the product is still relatively new, and even with these rough edges it is __really__ good.
Having CI definition alongside code, with free-ish VMs available at all times, (except when it goes down,) and results all inside GitHub is amazing.
Also, the self-hosted runners are amazing, as they allow some really simple matrix definition to run CI on custom HW and network environments.
If it weren’t for all the recent negative attention, I would have said GH was doing really well, inclusive of GitHub Actions.
The fact that you can’t:
1) Apply any security policy for runners (e.g. require a label before running a PR)
2) Have runners quit after a single job so you can build ephemeral runners
3) Build your own runner against an open API
... means that self-hosted runners are a non-starter for anything open source. It’s like they had to try hard to make the architecture that obtuse and closed. It’s unclear if it’s really poor design or an active attempt to somehow drive the business.
I’ve had a (largely) good experience with them so far.
Each test run depends on a huge number of 3rd party services being up. It pulls in docker images (DockerHub) and ubuntu packages (wherever the Ubuntu packages come from). It requires various azure services to be running (to use their cache and artifacts actions). It depends on github being up of course. Test runs fail multiple times per month because one of these services is down at the time the tests run.
Until recently their cache action was often slower than just rebuilding the assets from scratch without the cache.
The only compelling reason to switch to GH Actions from any other CI service I've used is to reduce costs. You get what you pay for.
There is an offline workflow tester I've tried but it is not close enough to the real thing to add that much value.
I've also struggled at times with how data moves around. There are docs and even example actions, but the overall state of education on doing cool stuff with GA is pretty weak.
I've also struggled with some of packages, and services--although some of the pain I've experienced is just software and service dependency management in general and that's a big challenge unrelated to GA.
Speed is a real consideration, though self-hosted runners can potentially solve that problem for you.
Rather than a get what you pay for take, I feel more like it is still early for the product. The visibility is high because it is Github, but it is still getting a lot of things worked out.
For me, GA is too conveniently integrated to Github to ignore.
I had some content slip through into my main site a day early because the job that rebuilds the site at the end of the day was 10 mins late, and in the interim new content was added so when the job ran it got lumped in.
I will probably have to re-architect the whole workflow at some point but it’s good enough for now.
I've run about ten thousand builds now and find the biggest issues to be the mysterious cancelling of jobs (especially on Win/macOS hosts), jobs crashing across the board during GH outages, and the intermittent long pauses (e.g. 15 min) between jobs in a multi-job build when nothing appears to happen (scheduling delays). GH could have more clarity on the status board, since I'll see many jobs fail, then check the status page and see everything is green. :/
Glad to know my gut still sort-of works. Not sure who thought this was a good idea but I can think of a handful of other ways off the top of my head that this could have been implemented in a more secure way.
The issue here doesn’t seem to be parsing the output, but what the parsed commands can do.
In Octopus there is no way to use them to run commands or set environment variables though. I can’t speak for TeamCity.
Update: TeamCity's documentation page about service messages:
The commands are quite specific though: test results, build numbers, etc. There's nothing generic like running arbitrary commands or setting environment variables.
What could possibly go wrong?
To put this into context, everything about this attack is logged in the open, I believe. If that shows up in log output, you’ll know who did it and that everyone can see it -- and you can proceed with rotating secrets promptly and reporting the hack to GitHub.
GitHub Actions: Deprecating set-env and add-path commands
Just in case anyone is interested here is what the fix to rclone's build workflow looked like:
The rating for this one is all over the place...
Are they publicly disclosing/warning about a bug in FF, or is it just some other problem?
I am able to reproduce this 'bug'. It's like mouse hijacking. Not able to select text on bugs.chromium.org when using Firefox.