… and you've installed this on GitLab's servers? If not: the fact that people write malicious code is nothing new, and the problem isn't running it via curl, it's not vetting the source and the code before you run it. Focusing on the use of curl distracts attention from the part that matters.
I understand what you are saying, but respectfully disagree.
Conditioning people to use `curl | bash` trains them not to review anything and to blindly run untrusted code. I have proven this across a very large group of people, including many who are supposed to be security-minded. This technique works on nearly all organizations, as most companies and government agencies do not force outbound traffic through a MITM proxy. I have mixed feelings about those devices.
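For contrast, a less blind alternative to piping curl straight into bash is to fetch the script, check it against a digest published somewhere other than the download server, and only then run it. A minimal sketch; `run_verified` and `fetch_and_run` are hypothetical helper names, not part of any real tool:

```python
# Sketch: verify a fetched install script against an out-of-band checksum
# before handing it to a shell, instead of `curl URL | bash`.
import hashlib
import subprocess
import urllib.request


def run_verified(script_bytes: bytes, expected_sha256: str) -> None:
    """Run a shell script only if it matches a checksum obtained elsewhere."""
    actual = hashlib.sha256(script_bytes).hexdigest()
    if actual != expected_sha256:
        # Refuse to execute anything that doesn't match the published digest
        raise RuntimeError(f"checksum mismatch: got {actual}")
    subprocess.run(["bash", "-c", script_bytes.decode()], check=True)


def fetch_and_run(url: str, expected_sha256: str) -> None:
    """Download, then gate execution on the checksum (URL is illustrative)."""
    with urllib.request.urlopen(url) as resp:
        run_verified(resp.read(), expected_sha256)
```

This only helps, of course, if the expected digest really comes from a channel the attacker doesn't also control, and it is no substitute for actually reading the script.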
For what it's worth, where the code is served from doesn't really matter. The only advantage of pulling something from GitLab or GitHub is that I have to commit changes anyone could see, assuming I don't recreate the repo, which I can automate via their API. From any of my own VMs or servers, I can certainly make something look like a .txt that isn't, and have it change dynamically based on user-agent, remote address, latency, TTL, etc. I could even change the response based on timing if someone used curl with a fake user-agent, but that is another blog post for another day.
Fair enough, but I would ask whether this is really that much worse than, say, running "npm install" or "pip install" for a package whose author you don't implicitly trust. From my perspective, that time would be better spent educating developers and working to make their tools safer to use; for example, using aws-vault on a Mac means that a drive-by script cannot harvest your AWS credentials (the keychain requires prompting per-binary and the user cannot bypass it).
Tools like pip, pear, gem, etc. are quite bad as well. Unless they validate GPG signatures of files or packages against a trusted source, you could easily be installing a package from a mirror that has been compromised. In fact, this has happened to Python repositories several times.
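pip does offer a mitigation here: in hash-checking mode (`pip install --require-hashes -r requirements.txt`), every requirement must be pinned to a digest, and pip refuses any download that doesn't match, so a compromised mirror can't silently swap the artifact. A sketch of the requirements file format; the version and digest below are placeholders, not real values:

```
# requirements.txt -- in --require-hashes mode every entry needs a digest
requests==2.31.0 \
    --hash=sha256:<digest of the wheel or sdist, taken from a trusted source>
```

Tools like `pip-compile --generate-hashes` (from pip-tools) can produce these files automatically; the scheme is only as strong as wherever the pinned digests originally came from.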
Even GPG checks, done the way Ubuntu and Red Hat do them, are also bad. I see people install the GPG keys from the mirror itself all the time. If I pop the mirror, I can simply put my own GPG keys in it, and a percentage of people will happily install them.
GPG adds usability problems and doesn't help much in the case where people have no idea whether a remote author they've never met is trustworthy. In most cases something like a Linux distribution is what you want, where changes are at least highly visible and a trusted third party is looking at the update history.
Modern package managers do at least store hashes, so your npm, Python, Rust, etc. packages can depend on other packages by hash in addition to just a version. That at least forces an attacker to make the exploit covert enough that it can be deployed to everyone, but there are many ways to make something subtly vulnerable. Ultimately, I really think this comes back to securing the environment so a successful attack gets less. Apple has led the way on protecting things like passwords and mail from other processes running as the same user, but it'd be really interesting to see how far you could get running your entire toolchain under the OS's sandboxing.
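On macOS, one way to experiment with this today is `sandbox-exec` (deprecated but still shipped, and used internally by the OS). A hypothetical deny-by-default profile for a build step, purely illustrative rather than a vetted policy:

```
;; build.sb -- hypothetical deny-by-default sandbox profile for a build
(version 1)
(deny default)
(allow process-exec)
(allow process-fork)
(allow file-read* (subpath "/usr") (subpath "/bin") (subpath "/System"))
(allow file-read* file-write* (subpath (param "BUILD_DIR")))
;; note: no (allow network*) rule, so a malicious dependency can't phone home
```

Run as `sandbox-exec -D BUILD_DIR="$PWD" -f build.sb make`; anything the profile doesn't allow fails at the syscall level instead of relying on every dependency behaving well.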