It turns out there’s a lot of fear in the security community around adopting methodologies and concepts like DevOps. This fear is largely driven by a number of myths about how organizations using a DevOps model do business, and what the implications are for those organizations’ security. I’ll address one of those myths in today’s post, and some others in later posts.

Today’s myth is that by adopting the DevOps practice of allowing developers to deploy code into production, you are violating separation of duties, which not only breaks a variety of corporate policies but also affects regulatory compliance requirements. And that would be true if allowing developers to deploy to production actually violated separation of duties – but it doesn’t.

Let’s take a step back and look at what the goals of separation of duties are when it comes to deploying software. Ostensibly, there are two main goals. The first is to ensure that no back doors are installed when code is updated. In theory, if someone else is deploying the code, they can independently validate that it is secure and contains no back doors. In practice, however, nobody actually does this. That person might run a variety of tests to make sure the code doesn’t crash when deployed into production, but that isn’t a separation-of-duties control – and a properly tooled development team should be doing that testing anyway.
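To make the "properly tooled team" point concrete, here’s a minimal sketch of a test-gated deploy: the deployment step simply refuses to run unless the automated test suite passes first. The function names (`gated_deploy`, `run_tests`, `deploy_fn`) are illustrative, not any particular tool’s API.

```python
def gated_deploy(run_tests, deploy_fn) -> bool:
    """Run deploy_fn only if run_tests() reports success.

    run_tests: callable returning True when the automated suite passes.
    deploy_fn: callable that performs the actual deployment.
    Returns True if a deploy happened, False if the gate blocked it.
    """
    if not run_tests():
        # Failing tests stop the pipeline before anything touches production.
        return False
    deploy_fn()
    return True


# Illustrative usage: a passing suite deploys, a failing one does not.
deployed = []
gated_deploy(lambda: True, lambda: deployed.append("v2"))   # deploys
gated_deploy(lambda: False, lambda: deployed.append("v3"))  # blocked
```

In a real pipeline, `run_tests` would invoke the project’s actual test runner, and `deploy_fn` would hand off to the deployment system, but the gating logic is this simple.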

The second goal comes down to change management. If the developers are also deploying the code, there’s no audit trail, and you don’t know what else they might change while they’re logged in. Here’s where things get really interesting. When you hear about developers deploying code straight to production, it’s actually a lot more complicated than that. What isn’t generally talked about is that the developers aren’t logging directly into production – they’re using a software deployment system that has only enough access to production to safely deploy the code. Furthermore, that deployment system tracks not only who made the changes, but also what changes were made, and when. Invariably, the code has also been through a rigorous, automated battery of tests, so it’s not as if code is being written and blindly pushed live. The code changes are relatively small, so the chances of causing a problem are also quite low. But that’s the topic of my next post - why continuous deployment is actually a recipe for improving security…
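The audit trail that deployment system keeps can be sketched in a few lines: every deploy is performed by the system on the developer’s behalf, and the system records who triggered it, which change went out, and when. The class and field names here are purely illustrative, not a real deployment tool’s API.

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class DeployRecord:
    who: str         # developer who triggered the deploy
    change: str      # identifier of the change, e.g. a commit hash
    when: datetime   # when the deploy happened (UTC)


class DeploymentSystem:
    """Deploys code on a developer's behalf and logs every action.

    The developer never gets an interactive login to production;
    the system is the only thing with deploy access, and every use
    of that access leaves an audit record.
    """

    def __init__(self):
        self.audit_log: list[DeployRecord] = []

    def deploy(self, who: str, change: str) -> DeployRecord:
        record = DeployRecord(who, change, datetime.now(timezone.utc))
        self.audit_log.append(record)
        # ...actual rollout of `change` to production would happen here...
        return record
```

The point is that the audit trail falls out of the architecture for free: because all deploys flow through one system, "who changed what, and when" is always answerable.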