This matters a lot, because otherwise CI/CD is a serious weak point in your overall security model. Especially if you will run PRs that folks send you -- i.e., run sight-unseen third-party code in a privileged container.
In terms of the OpenShift vs local development experience, we heard something very similar about Cloud Foundry too, so now there's `cf local`.
Essentially, Stephen Levine extracted buildpack staging into standalone container images (which now use Kaniko) along with a little CLI tool to help you make the local development experience more like the `cf push` one.
As it happens there is a lot of cooperation happening in this space right now -- folks from Google, Pivotal and Red Hat have been comparing notes about container building in the past few months. It's something we share a common interest in improving.
At Pivotal we have also begun closer cooperation with Heroku on buildpacks, which is really exciting. The buildpacks model lends itself to some really cool optimisations around image rebasing.
Folks heading to Kubecon this week who are interested in container builds should make sure to see the joint presentations by Ben Parees (Red Hat), Steve Speicher (Red Hat) and Matt Moore (Google).
Disclosure: I work for Pivotal.
There is so much container-related work coming out of Google right now that I am struggling to keep up.
Now I work for a much smaller company and they gave me full admin access to the AWS console from day one and I am "just" a senior developer.
They install bitcoin miners.
They install malware.
They install all sorts of shit.
It's unfortunate but it's true, I've seen it 100x.
We let our devs have local admin at my company but please don't misunderstand the serious risks involved in that decision. We take great measures to ensure that we can do that while still keeping our customers safe.
We would never allow admin AWS access, that's absurd. An attacker on your box would be able to own prod. Sorry that it's an inconvenience! When you manage to solve infosec, hit your ops/sec team up, they'll be happy to hand you the keys after that point.
Anyone should be able to provision new resources without harming production. Most of the time you don't see this, because management has still not gotten it through their thick heads that in order for the business to move fast, devs need the agency to implement solutions fast.
Cost-cutting of AWS leads to one (or several!) groups that are the gatekeepers to AWS, purportedly in order to make it more "efficient" by everyone doing everything one way, or having one group of experts handle it. But that's just one group of plumbers handling repairs for an entire city. Leaks spring up and nobody is there to fix them, and meanwhile, the developer is sitting there staring at the leak, going "will you please just let me duct tape this leak so I can get back to work!?"
I still ended up doing stuff suboptimally, using all of the stuff I would use on prem like Consul (instead of Parameter Store), Fabio (instead of load balancers), Nomad (instead of ECS), and VSTS for CI/CD (when I should have used CodePipeline, CodeBuild, and CodeDeploy), and even a local copy of Memcached and RabbitMQ, just so I wouldn't have to go through all of the red tape to provision resources so we could get our work done. Our projects ended up costing more because it took so long to get anything approved that I would overprovision "just in case".
Anytime that I wanted to stand up VMs for testing, I would send a Skype message to our local net ops person for an on prem VM and have it within an hour.
I was the "architect" for the company - I couldn't afford to make excuses and it's never helpful at my level of responsibility to blame management when things don't go right. I had to get it done.
So, I learned what I could, studied AWS services on a high level inside and out, got a certification and left for greener pastures.
And people wonder why I never do large organizations.
Separation of access is important and _required_. Developers don't need access to prod, admins maintaining the infrastructure don't need access to the directory, IDM doesn't need access to either QA or prod.
Developers do need full access in an environment to properly test - but that environment should be basically hermetically sealed from the rest of the company's infrastructure. So even if they do screw up, the whole business won't be affected.
The games of phone tag and "try typing this" that happens during prod issues is a waste of everybody's time, and I fully believe that the people who write the code should be the ones with both the responsibility of the pagers and the ability to fix the code they've deployed. Everybody is happier, and the job gets done more quickly, when the job gets done by the people most qualified to do it (because they wrote it), and when they bear the consequences of writing bad code.
The environment needs to be set up to be forgiving of mistakes, yes, but that's easily done these days and should never result in loss of data if the infrastructure is properly automated. If giving production access means your developers can screw something up, then your admins can just as easily screw something up. Create environments that forgive these failures because they'll happen one way or another.
Removing root is not a trust issue - it’s a security surface area issue. You increase the number of audit points and attack options by at least an order of magnitude (1 admin : 10 devs).
In a small shop this might be acceptable, however in a large org it’s plain old insane.
If you believe that devs require root then that’s an indicator that your build/test/deploy/monitor pipeline is not operating correctly.
For one, I never said anything about root. I'm not sure anybody should have root in production, depending on the threat model. What I am saying is that the people who wrote the proprietary software being operated should be the ones on the hook for supporting it, and should be given the tools to do so, since they're the most aware of its quirks, design trade-offs, etc.
That means not just CI/CD and monitoring output, but machine access, network access, anything that would be necessary to diagnose and rapidly respond to incidents. That almost never requires root.
> If you believe that devs require root then that’s an indicator that your build/test/deploy/monitor pipeline is not operating correctly.
Or it might be an indicator that you are not relying on archaic and ineffective methods to protect your system.
We are talking about root in prod being granted and yet you seem to be intentionally misrepresenting this.
> Not getting root on your own machine as a developer?
was the origin of this thread, and there are tons of places where developers are not permitted root access to their own dev machines. We are not all talking about prod instances.
I have this conversation with my own counterparts in network / platform / infosec / application teams (I am an app dev), and in some cases the issue is conflated because dev environments are based on a copy of prod, and the compromise of such prod-esque data sources would be almost equally as catastrophic as an actual prod compromise.
If this is your environment, then don't be that guy and make it worse by changing the subject from dev to prod. Don't conflate the issue. Dev is not prod, and it should not have a copy of sensitive prod data in it. If your environment won't permit you to have a (structure-only) copy of prod that you can use to do your development work unfettered, with full access, then you should complain about it, or tell your devs to complain if it affects them in their work and isn't such a big deal for yours.
Developers write factories, mocks, and stubs all the time to isolate tests from confounding variables such as a shared dev instance that is temporarily out of commission for some reason, and so they don't have to put prod data samples into their test cases, and in general for portability of the build. Then someone comes along and says "it would be too expensive to make a proper dev environment with realistic fake data in it, just give them a copy of Prod" and they're all stuck with it forever henceforth.
It's absolute madness, sure, but it's not misrepresented. This is a real problem for plenty of folks.
Top level comment in this thread:
Maybe I missed the part where this thread transitioned from dev to prod. I have no reason to misrepresent a stranger on the internet.
Engineers whose specific job it is to admin infra, yes.
Logging is great, but it's a 'we are owned, how do we get them out' measure - we want to avoid getting to that point.
edit: Of course, this all depends on your threat model, posture, maturity, etc. I'm just saying - we have lots of very good reasons to lock devs out.
Yes, I have an AWS certification, and on paper I am qualified to be an "AWS Architect". But I would be twiddling my thumbs all day with not enough work to do, and would die a thousand deaths if I didn't do hands-on coding.
And then add, "otherwise I'll leave for a company that gives me that access".
See how that sounds?
But as the team lead, I already had the final say on what code went into production and could do all kinds of nefarious acts if I desired. Yes, we had a CI/CD process in place with sign-offs. But there was nothing stopping me from only doing certain actions based on which environment the program was running in.
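As a purely illustrative sketch (the variable and function names are hypothetical), the kind of thing a sign-off process never catches is a deploy-time branch like this:

```
# Hypothetical illustration only: a script that behaves differently per
# environment. Reviewers and sign-offs only ever see the dev/QA path exercised.
if [ "$DEPLOY_ENV" = "production" ]; then
    run_something_nefarious   # placeholder for whatever a malicious lead could slip in
fi
```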
I've seen what happens to people who are "just developers" that spend all their life working in large companies where they never learn anything about database administration, Dev ops, Net ops, or in the modern era - cloud administration. They aren't as highly valued as someone who really is full stack - from the browser all the way down to architecting the infrastructure.
Why wouldn't I choose a company if given that option that lets me increase my marketability, and gives me hands on experience in an enterprise environment instead of just being a "paper tiger" who has certifications but no experience at scale?
Not all developers are irresponsible, and not all ops people are impossible to work with. But both certainly exist.
But then they decided to "go to the cloud" and, instead of training their internal network ops people and having them work with the vendor who was creating the AWS infrastructure, the vendor took everything over and even our internal folks couldn't get anything done without layers of approvals.
So I ended up setting up my own AWS VPC at home, doing proofs of concept just so I could learn how to talk the talk, studied for the system administrator cert (even though I was a developer), and then got so frustrated that it was easier to change my environment than to try to change theirs.
So now they are spending more money on AWS than they would have in their colo because no developer wants to go through the hassle of going through the red tape of trying to get AWS services and are just throwing things on EC2 instances.
In today's world, an EC2 instance for custom-developed code is almost always suboptimal when you have things like AWS Lambda for serverless functions, Fargate for serverless Docker containers, and dozens of other services that allow you to use AWS to do the "undifferentiated heavy lifting".
You can do that without root.
> They install malware.
What developers are you talking about? I want to know what developer would risk their career and prison time. And if a developer has no problem with going to prison, surely they have no problem finding some 0day privilege escalation exploit.
> It's unfortunate but it's true, I've seen it 100x.
That's not really an argument, how many developers did you have and how many of them risked prison time to install malware on dev machines?
> We take great measures to ensure that we can do that while still keeping our customers safe.
> We would never allow admin AWS access, that's absurd. An attacker on your box would be able to own prod.
You aren't doing a great job then, because your production stuff should be on a separate AWS account altogether.
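For what it's worth, that separation usually starts with a dedicated member account per environment under AWS Organizations, so dev credentials simply can't reach prod resources. A rough sketch (the email and account name are placeholders):

```
# Not a full landing-zone setup, just the first step: a separate member
# account for production.
aws organizations create-account \
    --email aws-prod-root@example.com \
    --account-name "production"
```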
I think of bitcoin mining attacks as pentesting with a contingent fee. Probably the cheapest and least destructive way to learn you have security weaknesses.
I even found a blog post this morning that went into some of it.
Lots of things are 'more full-featured' and none of them work well in an HPC context, where individual user jobs may need to be staged carefully.
After long discussions with the Singularity folks I've come to the conclusion that the only special features that Singularity has are:
* Their distribution format is a binary containing a loopback block device, allowing you to have "a single thing to distribute" without concern for having to extract archives (because in theory archive extraction can be lossy or buggy). The downside is it requires (real) root to mount or a setuid helper, because mounting loopback is privileged because it opens the door to very worrying root escalation exploits. When running without setuid helpers I'm not sure how they have worked around this problem, but it probably involves extraction of some sort (invalidating the point of having a more complicated loopback device over just a tar archive).
* It has integration with common tools that people use in HPC. I can't quite remember the names of those tools at the moment, but some folks I know wrote some simple wrappers around runc to make it work with those tools -- so it's just a matter of implementing wrappers and not anything more fundamental than that.
Aside from those two things, there's nothing particularly different about Singularity from any other low-level container runtime (aside from the way they market themselves).
A lot of people quote that Singularity lets you use your host user inside the container, but this is just a fairly simple trick where the /etc/passwd of your host is bind-mounted into the container and then everything "looks" like you're still on your host. People think this is a security feature, it isn't (in fact you could potentially argue it's slightly less secure than just running as an unprivileged user without providing a view into the host's configuration). If you really wanted this feature, you could implement it with runc's or LXC's hooks in a few minutes.
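To make that concrete, here is roughly the same trick done with plain Docker (a sketch of the effect, not how Singularity implements it): bind-mount the host's account databases read-only and run as your own uid, and everything "looks" like the host without granting anything extra.

```
# Approximation of the "your host user inside the container" trick.
docker run --rm -it \
    -v /etc/passwd:/etc/passwd:ro \
    -v /etc/group:/etc/group:ro \
    --user "$(id -u):$(id -g)" \
    ubuntu bash
```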
There's still lots of stuff left to do (like unprivileged bridge networking with TAP and similarly fun things like that).
I thought it was good practice to have strong separation between Dev and Production, and I'm pretty sure you're meant to create AWS keys+accounts with less-than-root access for day-to-day work.
With AWS databases - except for DynamoDB - you still use traditional user names/passwords most of the time. Those are stored in Parameter Store and encrypted with keys that not every service has access to. Of course, key access is logged.
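Roughly what that looks like in practice (the parameter name and KMS key alias are made up):

```
# Store the database password as a SecureString encrypted with a specific KMS key.
aws ssm put-parameter \
    --name /prod/orders-db/password \
    --type SecureString \
    --key-id alias/prod-app-secrets \
    --value 'example-password'

# A service that has decrypt access to that key reads it back; the KMS call
# shows up in the logs.
aws ssm get-parameter --name /prod/orders-db/password --with-decryption
```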
There is a difference between the root account and an administrator account.
Day to day work on the console is configuring resources.
Even if you do have strong separation - in our case, separate VPCs - someone has to have access to administer it. We don't have a separate "network operations" department.
I certainly don't want to run big, rapidly changing code as root. Not the continuous integration pipeline, not the build pipeline.
It's a misfeature of Docker that it needs more privileges than a traditional chroot/fakeroot build (probably a good reason to build images on top of those tools rather than relying on Dockerfile/docker build -- build the images that way and reserve Dockerfiles for pulling in ready-made images and setting some configuration parameters).
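A rough sketch of that approach (not a complete build system; the paths and image name are placeholders): stage the root filesystem as an unprivileged user, let fakeroot fake the root-owned metadata, and only hand Docker a finished tarball.

```
# Assemble the image contents without any privileged build step.
mkdir -p rootfs/opt/app
cp -a ./build-output/. rootfs/opt/app/       # whatever your build produced

fakeroot sh -c '
    chown -R 0:0 rootfs                      # ownership is faked, no real root needed
    tar -C rootfs -cf rootfs.tar .           # tar records the faked uids/gids
'

docker import rootfs.tar myapp:latest        # Docker only wraps the tarball into an image
```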
There is also the security benefit of there being no privileged codepath that can be exploited. So the only thing you need to worry about is kernel security (which, to be fair, has had issues in the past when it comes to user namespaces -- but you can restrict the damage with seccomp and similar tools).
Take EC2 as another example. If you have volume attach/detach permissions and EC2 start/stop permissions, you can stop an arbitrary instance, detach its root volume, reattach it to an instance of your choosing where you have login access, log in, add whatever you want to that volume (including rootkits, etc.), and reattach it back to its initial instance.
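In CLI terms, that path needs nothing beyond EC2 permissions (all IDs below are placeholders):

```
# Sketch of the attack path: stop the target, take its root volume, modify it,
# then put everything back.
aws ec2 stop-instances --instance-ids i-0targetinstance
aws ec2 detach-volume  --volume-id vol-0targetroot
aws ec2 attach-volume  --volume-id vol-0targetroot \
    --instance-id i-0attackercontrolled --device /dev/sdf
# ...mount /dev/sdf on the instance you control, plant whatever you like,
# then detach, reattach to the original instance, and start it again.
```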
Giving someone AWS admin should be considered analogous to giving someone the keys to your racks in a datacenter. There really are many surprising ways that an AWS admin can take control of your infrastructure. Can you put countermeasures in place? Sure, but it's a huge attack surface.
That way leads to the classic "works on my machine!" response to any bug report. Developers' machines should be locked down to resemble Prod.
This is an exaggeration, but only slightly. :)
Security costs convenience. But people love to be too lax. And it's so fun to point it out and see the look on their faces, or pop up an XSS in their favorite stack of choice.
My best one was getting remote access on a server thanks to an unsanitized PDF filename. They were calling into the shell like `pdf-creator <company name.pdf>` (or whatever the utility was called). They were a B2B service, so they never thought to set anyone's company name to something like "; <reverse shell here> #"
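Reconstructed as a hedged sketch (the utility name is as loose as in the story): the difference is whether the name gets re-parsed by a shell or passed as a single argument.

```
# Vulnerable: the company name is spliced into a command string that a shell
# re-parses, so metacharacters in the name become commands.
COMPANY_NAME='Acme"; <reverse shell here> #'
sh -c "pdf-creator \"$COMPANY_NAME.pdf\""

# Safer: pass the name as one argument so it is never re-parsed.
pdf-creator "${COMPANY_NAME}.pdf"
```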
I just thought it might be fun to contrast the two worlds. Those big, stodgy companies that we love to make fun of... Those guys were some of the hardest targets. I once spent a week trying to get anything on one, and just barely got an XSS. And I was lucky to find it.
If management understands that security costs time in feature development, fine. But given the role software development plays in companies these days, if security and ops don't succeed in getting management on board, please don't hold the developers hostage! Work with them and try to find the least bad ways of working quickly enough. Many of them will support calls for better security practices, as long as it doesn't mean more sleepless nights because the goals haven't changed, only the speed at which the work can be done.
For any substantial deployment, I really don't want to have any access as a developer, but often I have to have root access to tons of machines simply to have any chance at actually doing my work.
Also, again it may just be me, but if you are running a hypervisor-limited VM image that is just a stream of bits, and you can modify those bits from outside that state anyway, then restricting the VM so you don't have root at runtime is slightly odd.
This reads like a proscriptive "no root" rule has been metasploited into "we will wake you at 3am to check whether, in your dreams, you are running as root" type extremism.
Nothing stops anyone from writing code to interpret Dockerfiles or to directly fiddle with image layers. But taking the cost:value ratio proportional to everything else you need to be doing, it's probably a poor investment of time.
Google has economies of scale around this exact problem, which is why they've been pumping out work in this area -- Kaniko, Skaffold, image-rebase, FTL, rules_docker, Jib etc.
That's a good story, btw. I have people working near me who probably want the same thing for a different reason, but there is nonetheless interest in builds that don't require root.
There are many reasons why those restrictions exist; it's mainly related to what types of files you can create and how trivially you could exploit the host if things like mknod(2) were allowed for unprivileged users. There are also some more subtle things, like distributions having certain directories be "chmod 000" (which root can access because of CAP_DAC_OVERRIDE but ordinary users cannot, so you need to emulate CAP_DAC_OVERRIDE to make it work).
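To make the mknod(2) point concrete (a sketch; major/minor 8:0 is the host's first SCSI disk):

```
# If an unprivileged build could create device nodes, an image could hand
# itself the host's block device. This fails without CAP_MKNOD:
mknod rootfs/dev/sda b 8 0
```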
In short, yes you would think it's trivial (I definitely did when I implemented umoci's rootless support) but it's actually quite difficult in some places.
Also unprivileged FUSE is still not available in the upstream kernel, so you couldn't just write your own filesystem that generates the archives (and even if FUSE was unprivileged it would still be suboptimal over just being more clever about how you create the image).
With one command you can build your container in the cloud and have it stored in your registry. Either push the source code from local or have it use a remote git repository.
> $ az acr build --registry $ACR_NAME --image $IMAGE_NAME:$TAG_NAME --context .
For example, EnvKey makes it easy to only give a developer access to development/staging config so that if someone only needs to deal with code and not servers, they'll never see production-level config.
Could that get in the way sometimes? Sure. If for whatever reason someone who's normally a pure dev needs to step into ops for a bit, they'll have to ask for upgraded permissions to do so, which could certainly be seen as annoying, and could make someone feel less trusted than he or she would like to be. On the other hand, giving production secrets to every dev undeniably increases the surface area for all kinds of attacks, and I think that even small startups would be well-served by moving on from this as soon as they can.
I think the key distinction to make is between real security and security theater. As a developer, I'm willing to give up a little trust and a little efficiency if the argument for why I'm doing it seems valid, but if I'm being asked to jump through extra hoops without any clear benefit attached, I'll probably resent it. So for me, the most relevant question to ask the OP (or a company that wants to implement this) is what's the threat model? What exact attack scenarios is this protecting against? Are those realistic enough to justify the extra hoops?
1 - https://www.envkey.com
If you are one of the sysadmins who really need root -- to manage iptables or system updates, for instance -- you would have root.
In general, I've found it's pretty workable to run without root/sudo for development work - there's not _that_ much stuff that you can't just install to ~/bin and run from there.
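A minimal sketch of that workflow (the tool name and URL are placeholders):

```
# Keep per-user tools in ~/bin and put it at the front of PATH.
mkdir -p ~/bin
curl -fsSL -o ~/bin/some-tool https://example.com/releases/some-tool   # hypothetical binary
chmod +x ~/bin/some-tool
echo 'export PATH="$HOME/bin:$PATH"' >> ~/.bashrc
```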
> First of all, only trusted users should be allowed to control your Docker daemon. This is a direct consequence of some powerful Docker features. Specifically, Docker allows you to share a directory between the Docker host and a guest container; and it allows you to do so without limiting the access rights of the container. This means that you can start a container where the /host directory is the / directory on your host; and the container can alter your host filesystem without any restriction.
This does not seem like a necessary consequence of setting up a daemon for building disk images. What am I missing here? Is this an engineering oversight on the part of Docker or is there some technical reason that forces it to be like this?
It's not mandatory to use the docker group though. That's completely optional and you could definitely just 'sudo' whenever you do a 'docker' command. The docker _daemon_ needs to run as root because it needs to be able to do all kinds of privileged system calls to actually set up the containers. But if everyone who interacts with the docker daemon is a sudoer, that's not a problem.
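It's also why membership in the docker group is effectively root on the host; the standard demonstration is a one-liner:

```
# Anyone who can talk to the daemon can bind-mount the host's / into a
# container and chroot into it, i.e. become root on the host filesystem.
docker run --rm -it -v /:/host alpine chroot /host /bin/sh
```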
Also, I have no admin access to uat or production envs or codebase. It's a challenge.
It should be noted though that by default LXC does require some privileged (suid) helpers (for networking and cgroups) -- though you can disable them as well. runc by default doesn't, though that's just a matter of defaults and what use-cases we were targeting.
Rootless builds work without kernel patches (the "rawproc" stuff mentioned in issues is not going to be merged into the kernel and there are other ways of fixing the issue -- like mounting an empty /proc into the container). I can do builds right now on my machine with orca-build.
The main reason it's for the faint of heart is that we don't really have nice wrappers around all of these technologies (runc works perfectly fine, as does umoci, as does orca-build, as does proot, as does ...). Jess's project is quite nice because it takes advantage of the cache stuff BuildKit has, though I personally prefer umoci's way of doing storage and unprivileged building (though I am quite biased ofc). I'm going to port orca-build to Go and probably will take some inspiration from "img" as to how to use Docker's internal Dockerfile parsing library.
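For the curious, a rough sketch of the unprivileged unpack/modify/repack loop with umoci (flags from memory -- check the docs for the exact invocation):

```
# Unpack an OCI image without root, edit the rootfs as a normal user, and
# repack the changes as a new layer.
umoci unpack --rootless --image myimage:latest bundle
echo 'built without root' >> bundle/rootfs/etc/motd
umoci repack --image myimage:new bundle
```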
For more information you can take a look at https://rootlesscontaine.rs/.
Unfortunately, for some obscure reason, it's only now that we might have to conform to the regulatory compliance issues (no one has been able to answer me "Why now? Why not when we first got bought years ago?") We're trying to fight it but the fix seems to be in.
 Enhanced caller ID. Part of our software lives on the cell phone. Part lives on the phone network.
I get the security risks and the fear of malicious insiders at a company such as this. But having expensive, fairly high level employees work around not having root access on their own machines strikes me as odd. The guy does docker for a living and can't run docker on his own machine.
Also, recall that he has physical access to said machine (I hope) so if he really was a malicious insider, he could already pretty much own it. Then again, maybe he doesn't.
Speaking as someone in a Fortune 500, sometimes the bureaucratic hoops one must jump through to develop in a VM or with Docker on one's own machine aren't even worth it.
Umm, you can do all of those things with Linux... I can't remember the last time I had any issues with printing on Linux at all.
The management headaches for this kind of distributed computing are off the scale, and banning Linux and locking down senior devs' workstations is just table stakes. Everyone is heading to a thin client running Citrix into a data centre for pretty much this reason.
Let's first talk about removing root privileges on personal workstations or laptops. This is pointless. Anything that might be bad for root to do on a single-user system is going to be just as bad running as a user. The second you allow any custom code to run as a user on any system, you should treat it as potentially compromised -- adding root into the mix really does not change things on a single-user system. Worried about the root user getting access to some customer data on the system? Too bad: if the data was on the system, it is more than likely the user-level account (Windows or Linux, for that matter) had access to it, therefore any intruder will also have access, just at the user level. The same goes for just about any other issue you can run into.

Am I suggesting running things as root? No, because there really is no need for most things -- at the same time, if your developers need root-level access so they can test or work with technologies that require it when deployed to production, then it really should be a non-issue for sysadmins. The problem is that sysadmins are mostly scared of being outed for doing nothing in most organizations these days. These sorts of power sweeps are often used to justify big budgets and teams of people who tell you to "reboot" when it doesn't work right. There is also a bit of a power-hungry attitude associated with it too.
You state that Linux makes it harder; I can't see how, and you did not show me anything convincing. Bold statements without any supporting facts can just be tossed into the trash can as far as I am concerned.
Now let's talk about Citrix. How does that help? All that does is move any real or perceived problem to a different system. If any of these VMs get accessed by bad actors, they will still be able to own any of the information on them that the user had access to.
In any case I did not really come here to argue any of this, your comment is just sort of out of place with relation to what I said.
If you can't trust your employees don't hire them, or just pay them and tell them to sit in a dark room so they can't hurt anything.
The last poster has zero argument and is just ignoring the problem. Go set up a thousand printers for ten thousand employees in a hundred locations. They all have to work flawlessly and on every OS.
In the place I'm currently working at, we have Windows behind Cisco VDI, and we have to have a Cntlm proxy running (all traffic goes through it).
Last week I got an email from Security team because I installed Decentraleyes addon for Firefox. Apparently, it's not allowed and is a security breach.