62

I have an enterprise application running that uses both MySQL and MongoDB datastores. My development team all have SSH access to the machine in order to perform application releases, maintenance, etc.

I recently raised a risk within the business, when users started storing highly sensitive data in the application, that the developers have indirect access to this data. This caused a bit of a storm, so I have now been mandated to secure the data so that it is not accessible to them.

To me this does not seem possible because if the application has access to the database then a developer with access to the machine and application source will always be able to access the data.

bjb568
Clinton Bosch
    Users should be storing sensitive data only in encrypted form. There shouldn't be a huge problem if developers can access the encrypted form, as long as the matching keys are properly shielded from them. – MSalters Dec 04 '14 at 13:13
  • To back up the comment by MSalters, keeping a production-stage environment with a production data snapshot is a good way for developers to get the benefits of playing in production without seeing anything sensitive. You just mask or de-identify the confidential and sensitive data from them. – maple_shaft Dec 04 '14 at 15:07
  • @Clinton Do you have separate admin and developer teams? The server admin can always read the data, and encryption doesn't help since they can easily get the key. – CodesInChaos Dec 04 '14 at 15:09
  • To be completely honest, this is a complicated matter and doing it right requires a lot of expertise in data security. Even if you knew exactly what to do, you will face business opposition, political and technical roadblocks. I highly suggest you bring in a data security consultant. They not only know what to do here; upper management typically gives more credence to a third party telling them to change. Upper management generally doesn't put as much stock in what their internal experts are telling them. – maple_shaft Dec 04 '14 at 15:11
  • Might be worth asking on Information Security Stack Exchange. There's some related info on [this question](http://security.stackexchange.com/questions/66687/juggling-bus-factor-separation-of-duties-on-a-small-team/66701#66701) – paj28 Dec 04 '14 at 15:41
  • Why are humans touching the server and deploying code? – Wyatt Barnett Dec 04 '14 at 15:58
  • Is this data that you require, or data that the users decided to put there themselves? In the latter case you should really tell the users, "Wait, we don't encrypt your data, so avoid putting sensitive data here or encrypt it yourself"; in the former case you are probably violating some laws by not signing an agreement with the users. Depending on the kind of data, different minimal security measures can be required. – Bakuriu Dec 04 '14 at 20:11
  • Depending on what the sensitive information is, external means may assist in protecting it. For example, HIPAA is a big reason for me to not do anything naughty with the information I have access to. (Also, keeping my job.) – Brian S Dec 04 '14 at 21:11
  • Is this [PCI](https://www.pcisecuritystandards.org/security_standards/) sensitive? Banking sensitive? Healthcare sensitive? Law enforcement sensitive? Check to see if there are guidelines (example: [PCI DSS](https://www.pcisecuritystandards.org/documents/PCI_DSS_v3.pdf)) that can help you set the proper policies and procedures to protect the data. –  Dec 05 '14 at 03:56
  • How sensitive is your data? If it is not overly sensitive personal data (name, address), you can probably have your developers/sysadmins sign a non-disclosure agreement. There is no such thing as perfect security. There is always adequate security, though. – sampathsris Dec 05 '14 at 07:07
  • The standard solution is to have two systems. The developers develop on one system, which has "dummy" data on it. The production system, with "real" data on it, is only accessible to the production system administrators (and the users, of course). But that, of course, is an oversimplification. – Daniel R Hicks Dec 05 '14 at 20:42
  • @WyattBarnett Automation is hard. And that doesn't solve this problem, anyway. Even if you automate everything, someone has to provide the keys/passwords/whatever to access the production machines, meaning someone has access to those machines. It could just as easily be developers. (To be honest I'm not entirely clear on how you normally go about providing those credentials to an automated push system, other than just leaving them lying around on the box that does the push. The alternative is human involvement, one way or another.) – jpmc26 Dec 06 '14 at 04:22
  • The data is not PCI sensitive, but it is an HR system, so it contains performance review scores and may also contain sensitive binary documents like CVs, which may contain salary expectations, etc. – Clinton Bosch Dec 06 '14 at 16:28
  • Again, data encryption does not really help me since if a developer has access to the source code and the data then it is trivial to decrypt – Clinton Bosch Dec 06 '14 at 16:35
  • Curious why you didn't ask on [security.se]? Seems to be much more of a security question than a programmer one.... – AviD Dec 06 '14 at 20:45
  • @jpmc26 -- For secrets you can either have someone make one visit to the host machines to store the secrets in specified ways or leverage various deployment tools that have concepts of secured data stores. – Wyatt Barnett Dec 07 '14 at 01:43
  • Be careful about the idea of having data encrypted and the key only known to the user. You have to evaluate the probability and consequences of your users losing their key (and hence their data on the server) and balance it against the possibility of having an indiscreet developer on your team. – Tony Dec 07 '14 at 09:03
  • "users started storing highly sensitive data" .. this sounds like it was unplanned ... ? – Michael M Dec 07 '14 at 10:17
  • Depending on the nature of the data, I would not accept responsibility for securing the data from within the company without first consulting an attorney and then adding a rider to my employment contract. While generally protected from liability as an employee, in this situation you may be exposing yourself to special legal situations such as [HIPAA](https://en.wikipedia.org/wiki/Health_Insurance_Portability_and_Accountability_Act). – Basil Bourque Dec 08 '14 at 07:00

8 Answers

89

Security is not a magic wand you can wave at the end of a project; it needs to be considered and built in from day 1. It is not a bolt-on: it is the consistent application of a range of solutions, applied iteratively and reviewed regularly as part of a whole system, which is only as strong as the weakest link.

As it stands, you have flagged a security concern, which is a good first step. Now, as a minimum, you have to define:

  • What data are you trying to protect?
  • Who are you trying to protect that data from?
  • Who actually needs access to what (and when)?
  • What is the legal/financial/business impact of that data being compromised?
  • What is the legal/financial/business need for a person/group having access to the data?
  • What budget is the business willing to assign to a "get secure, stay secure" strategy when it was not a business requirement previously?
  • What access does the system need to the data?
  • What other processes and systems does this application rely on?
  • What is done to secure those environments?
  • Who is going to be responsible for implementing it and reviewing the whole process?

Until you know all those in detail you really don't have anything to work with. That information will define what mitigations to those threats you can (and cannot) apply and why.

It may be that the best thing to do is recognise that you don't have the necessary experience and that it would be best to bring in someone new with that experience. I quite often hear the response that there's no budget - if it is considered genuinely important then the budget will be found.

James Snell
  • Whoa...that makes security sound...non-trivial. (Sorry for the sarcasm; I've seen a lot of people surprised by this.) – Paul Draper Dec 05 '14 at 05:46
  • I do believe that a number of people think there is a magic `make-application-secure` command they just need to run. – TMH Dec 08 '14 at 11:13
27

You are right. If an application is capable of accessing content stored on corporate machines without the user passing extra information every time, then the programmers, maintainers, sysadmins etc. of the service provider can access that content. This is fundamentally unavoidable, and a major source of insecurity (Edward Snowden was a sysadmin, and had special privileges above "Top Secret" because there simply isn't a way of not granting them.)

The only way of avoiding this is to require the user to provide information that never makes it to the corporate machines. This is tedious and error-prone, since no one can possibly remember enough information to form a secure access barrier, and therefore people will immediately begin to store their credentials electronically in some place, which will then become the new attack target (and probably a weaker target than the one you are maintaining). But if you want to truthfully assert "Our employees are not physically capable of accessing our users' content", it's the only way.
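
To make the idea concrete, here is a minimal Python sketch of that approach, assuming the third-party `cryptography` package is acceptable; every name and value in it is illustrative, not something from the original application:

```python
# Sketch only: the encryption key is derived from a passphrase the user
# keeps to themselves, so the server (and its admins) only ever sees
# ciphertext plus a random salt.
import base64
import os

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC


def key_from_passphrase(passphrase: str, salt: bytes) -> bytes:
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                     salt=salt, iterations=600_000)
    return base64.urlsafe_b64encode(kdf.derive(passphrase.encode()))


# Client side: encrypt before anything is uploaded.
salt = os.urandom(16)  # the salt itself may safely be stored server-side
key = key_from_passphrase("user passphrase, never sent to the server", salt)
ciphertext = Fernet(key).encrypt(b"salary expectation: ...")
# Only `ciphertext` and `salt` ever reach the corporate machines.
```

The obvious cost is exactly the one described above: lose the passphrase and the data is gone, and users will be tempted to store the passphrase electronically anyway.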

(Note also that such a business model seems to be becoming politically untenable. Businesses have been forced out of operation by security services for trying to do exactly this. I understand that giving privacy guarantees has business value, but it can't possibly have more business value than the fundamental goal of staying in business.)

Kilian Foth
  • It is possible to design hardware in such a way that it is physically impossible to access certain data without creating a permanent record of such access which could not be destroyed without the collaboration of multiple independent people, and which *even with such collaboration* could not be destroyed without leaving clear evidence of deliberate destruction. Updating such systems to handle changing requirements, however, is apt to be very expensive compared with updating software-based security systems. – supercat Dec 04 '14 at 17:29
  • You're right, I completely forgot to mention *auditability* as a possible alternative to zero-knowledge hosting. It's somewhat easier to achieve and often enough for the business case. – Kilian Foth Dec 04 '14 at 18:26
  • Your last paragraph. Are you referring to LavaBit-type stories? I'm confused. – jpmc26 Dec 06 '14 at 04:13
  • @supercat You also have to trust that the creators of the hardware made it do what they said it does. – user253751 Dec 08 '14 at 02:37
  • @immibis: True, but I would think the design and manufacture of secure hardware could be audited by multiple independent people. Further, in a conventional system it would be possible for a "sneaky" piece of code to do something and then delete itself without a trace, but if a piece of secure hardware isn't supposed to have a writable control store, such a thing would be impossible. Either sneaky code would have to be permanently in the control store, or the control store would have to have a permanently-wired means of modification, either of which would be detectable after-the-fact. – supercat Dec 08 '14 at 14:48
15

You're quite right; some developers will always need access to the Live data, if only to diagnose production problems. The best you can do is to limit the potential damage by reducing the number of people involved.

With great power comes great ... opportunity to really, *really* foul things up. 

Many developers won't want that responsibility, and others just won't be "ready" to hold it, and so shouldn't have it.

Question: Why is your Development team performing Live releases?
I would suggest you need a Release Management "team" (even if that's just a subset of your team, plus Business representation to make any on-the-day "decisions", like "Go/No-Go"). This would remove much of the "need" for developers to touch anything Live.

Do you have any sort of non-disclosure/ confidentiality agreement between developers and company? It's heavy-handed, but it might have some merit.

Phill W.
  • Which still won't stop a determined wrongdoer from hiding a backdoor in the application, but it does reduce the opportunity that makes the thief. – Jan Hudec Dec 04 '14 at 12:55
  • Yes, it is not the entire development team but rather a subset/release management team. We certainly have a clause in the employment contract about snooping around data you shouldn't be; it is a dismissible offence. – Clinton Bosch Dec 04 '14 at 13:06
  • @JanHudec Especially since adding code to the application leaves traces in version control. – CodesInChaos Dec 04 '14 at 15:13
  • @CodesInChaos: A good programmer can make a backdoor look like an honest mistake. You'll suspect them, but you'll never make a case against them. But yes, it's another line of defence. – Jan Hudec Dec 04 '14 at 15:32
  • @Jan: Which is why all code changes should be reviewed and signed off before being allowed into the release branch. – SilverlightFox Dec 06 '14 at 11:51
9

The problem is your developers having access to the systems. If they need production data for debugging then give them a database dump where all that sensitive information is removed. Then the developers can work with that dump.
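
As a rough illustration of how such a sanitised dump could be produced, here is a hedged Python sketch; the column names (`salary`, `performance_score`, `cv_text`) and the CSV export are assumptions made for the example, not details from the question:

```python
# Sketch: pseudonymise sensitive columns in a CSV export of production
# data before developers ever see it.
import csv
import hashlib

SENSITIVE_COLUMNS = {"salary", "performance_score", "cv_text"}  # assumed names


def pseudonymise(value: str) -> str:
    # A stable but meaningless token, so joins across tables still line up
    # while the real content is gone.
    return hashlib.sha256(value.encode()).hexdigest()[:12]


with open("employees_prod.csv", newline="") as src, \
        open("employees_dev.csv", "w", newline="") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        for column in SENSITIVE_COLUMNS & set(row):
            row[column] = pseudonymise(row[column])
        writer.writerow(row)
```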

Deploying a new version should not involve any developer - that is a pure admin task, or even better, a fully automated one. Also be aware that releasing and deploying are two very different tasks. If your process isn't aware of that, then change it accordingly.

SpaceTrucker
  • We do not need production data for debugging, we have a sanitised data dump for that, but sometimes a deployment requires various data migrations etc. which are run by some developers in the release management team (but they are still developers) – Clinton Bosch Dec 04 '14 at 13:24
  • @ClintonBosch Then you haven't clearly separated the roles of admins and developers. One more question you should also ask yourself: how do we make sure that the software that is released also gets actually deployed? You would need to sign packages on release and only allow deployment of signed packages on production. Also, again, automation is your friend. Migrations shouldn't require any manual steps. – SpaceTrucker Dec 04 '14 at 13:47
  • @ClintonBosch Identify what data fields are highly confidential and encrypt them. Make sure that you put production OS security in place so that you can audit which user ids are reading the key file, to make sure nobody but the application user is doing this. Don't give the developers the app user password. Make them sudo to get rights on production and log what they are doing. This is probably the safest way to make sure that you are babysitting the few people that would have access and so that they can't casually or accidentally see data they aren't supposed to. – maple_shaft Dec 04 '14 at 15:17
6

Rule #1 of security: If someone has access to information, they have access to that information

That tautology is annoying, but it is true. If you give access to an individual, they have access to the data. For users, this usually means access control, but for developers... well... they're the ones that have to write the access control.

If this is a major issue for you (and it sounds like it is), consider building security into your software. A common pattern is to design secure software in layers. At the lowest layer, a trusted development team designs software which manages the most naked of access control. That software is validated and verified by as many people as possible. Anyone designing that code has access to everything, so trust is essential.

After that, developers can build more flexible access control on top of that core layer. This code still has to be V&Vd, but it isn't quite as stringent because you can always rely on the core layer to cover the essentials.

The pattern extends outwards.
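
A very small Python sketch of that layering idea follows; the class and method names are purely illustrative and not from any real framework:

```python
# Sketch: a tiny, heavily reviewed core layer that is the only code
# allowed to touch raw records, with a more flexible policy layer on top.


class CoreAccessLayer:
    """Trusted core: enforces explicit grants, nothing else."""

    def __init__(self, grants):
        # grants: {user_id: set of record ids that user may read}
        self._grants = grants

    def read(self, user_id, record_id, store):
        if record_id not in self._grants.get(user_id, set()):
            raise PermissionError(f"{user_id} may not read {record_id}")
        return store[record_id]


class SupportPolicyLayer:
    """Flexible rules built on top; it can only go through the core."""

    def __init__(self, core):
        self._core = core

    def read_for_case(self, user_id, record_id, store, case_is_open):
        # Example policy: support staff may read a record only while a
        # support case for it is open.
        if not case_is_open:
            raise PermissionError("no open support case for this record")
        return self._core.read(user_id, record_id, store)
```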

The hard part, indeed the art of designing these systems, is how to build each layer so that developers can continue to develop and debug while still providing your company with the security you expect. In particular, you will need to accept that debugging demands more privileges than you think it should, and attempting to lock that down will result in some very angry developers.

As a side solution, consider making "safe" databases for testing purposes where developers can rip out all of the safety mechanisms and do serious debugging.

In the end, both you and your developers need to understand a key tenet of security: All security is a balance between security and usability. You must strike your own balance as a company. The system will not be perfectly secure, and it will not be perfectly usable. That balance will probably even move as your company grows and/or demands on developers change. If you are open to this reality, you can address it.

Cort Ammon
3

Set up two deployments of the application which also use separated database deployments. One is the production deployment and one is the test deployment.

The test deployment should only have test data. This can either be fantasy data created for that purpose, or a copy of the production data that has been anonymized to prevent the developers from finding out the real people and entities behind the data.
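
For the fantasy-data variant, a short hedged sketch in Python; it assumes the third-party `faker` package and invents the field names purely for illustration:

```python
# Sketch: populate the test deployment with purely fictional records.
from faker import Faker

fake = Faker()


def fake_employee():
    # All fields are invented examples, not the real schema.
    return {
        "name": fake.name(),
        "email": fake.email(),
        "address": fake.address(),
        "performance_score": fake.random_int(min=1, max=5),
    }


test_rows = [fake_employee() for _ in range(1000)]
```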

Philipp
  • Yes, this is exactly the scenario that we have. But at some point somebody needs to work on the production environment to facilitate a deployment/data migration – Clinton Bosch Dec 06 '14 at 16:32
3

In two financial firms, developers did not have access to production machines. All requests to modify production machines had to go through an approval process, with a script, and be approved by managers. The dev-ops team completed the actual deployments. I assume this team was employees only, and passed background checks. They also did not have developer knowledge so probably couldn't snoop if they wanted to.

In addition to this, you would encrypt all database entries using a secret key stored in the environment variables. Even if the databases leaked publicly, no one could read them. This key can be further password-protected (PBKDF) so only an executive can unlock it. Your system could require the executive password upon boot (or more likely delegated to dev-ops or a dev-ops manager).

Basically the strategy is to disperse the knowledge so that a critical mass of required knowledge does not exist in any one person, and there are checks and balances. This is how Coca-Cola protects its formula. Honestly, some of these answers are cop-outs.
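
A hedged sketch of the "key in the environment, further locked behind a passphrase" part, using the third-party `cryptography` package; the environment variable names and the prompt are invented for the example:

```python
# Sketch: an executive's passphrase unwraps the data key at boot; after
# that, only the running process holds a usable key.
import base64
import os

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC


def unwrap_data_key(passphrase: str, salt: bytes, wrapped_key: bytes) -> bytes:
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                     salt=salt, iterations=600_000)
    key_encryption_key = base64.urlsafe_b64encode(kdf.derive(passphrase.encode()))
    return Fernet(key_encryption_key).decrypt(wrapped_key)


# Ops store only the salt and the wrapped data key in the environment
# (invented variable names); the data key was originally produced with
# Fernet.generate_key() and then wrapped under the executive passphrase.
salt = base64.b64decode(os.environ["DATA_KEY_SALT"])
wrapped = os.environ["WRAPPED_DATA_KEY"].encode()
data_key = unwrap_data_key(input("Unlock passphrase: "), salt, wrapped)

cipher = Fernet(data_key)
ciphertext = cipher.encrypt(b"performance review: exceeds expectations")
```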

Chloe
-1

MongoDB has limited security controls and depends on a secure environment. Binding to a specific IP and port (and SSL since 2.2), plus a crude authentication mechanism, is what it offers. MySQL adds privileges of the form GRANT ... ON db.t TO .... Data at rest is not encrypted, and SSL is not used by default. Create a fence. Read-only access for developers to application-related log files should be enough for debugging. Automate the application lifecycle.

Ansible helped us automate standard operations (deploy, upgrade, restore) over many single-tenant environments while using distinct encrypted vaults to store sensitive environment variables such as hosts, ports, and credentials. If each vault can only be decrypted by different roles, and only on a bastion host, for logged operations, then auditability provides acceptable security. If you grant SSH access, then please use SELinux to avoid key tampering, use a bastion host with LDAP/Kerberos authentication for administration, and use sudo wisely.

bbaassssiiee