
I have been tasked with designing a class library that I am loath to actually build. It is basically a huge backdoor to our software security. The idea was that it would only be accessible from one terminal on a closed system. I plan to lock my code with a password, but is there a way to make sure that my library binary files cannot be added to a project without a password (or something to that effect)?

I am not looking for a full explanation, as this is probably not the correct medium for such a thing. I am more looking for the names of the topics I should be researching. My search-fu has not really turned up much yet, so I am looking for keywords that can lead me in the right direction. A specific book or website would be even better.

Obviously, obfuscating my code as much as possible will also be a security practice I intend to employ. If that is my "best-case" practice for securing my code, so be it. I was hoping for something a little more, though.

  • Can someone just install another copy of your app which includes the "back door" on that terminal? – Fabio Aug 15 '18 at 21:53
  • `but is there a way to make sure that my library binary files cannot be added to a project without a password` - no. https://www.jetbrains.com/decompiler/ that's a decompiler, that'll show any attacker the security measures you're attempting to put in place. Obfuscating it will serve as just a bit of a nuisance for an attacker, nothing else. – JᴀʏMᴇᴇ Aug 16 '18 at 13:23
  • @JᴀʏMᴇᴇ that's a fair assessment. I am trying to learn about how to throw up roadblocks to attackers. Anyone with enough time and energy to expend will be able to circumvent any security I throw up. I am just looking for resources for how to lock down a class library as much as possible. Based on the comments I have already received, the certificate, which is something that I was already planning to implement, is likely to be my best option. – InterstellarProbe Aug 16 '18 at 13:34
  • Reminder for those who still don't get it: downvotes are for bad questions, **not for bad ideas**. – Doc Brown Aug 16 '18 at 16:49
  • So is the security bypass going to be implemented in a back-end process that checks for a particular terminal? Or in a client program running on the terminal? – DaveG Aug 16 '18 at 17:49
  • Basically, I have been poking around the system, looking for vulnerabilities. I found a group of them, all of the same type. I came up with a possible solution for how to "plug the holes". I sent my proposal up the chain, and it was approved with very little conversation. Once we started having conversations about it, it became apparent that it is likely a good approach for plugging the holes, but if the process we are designing is ever compromised, it becomes a massive vulnerability itself. Everything is running server-side. The biggest concern is... – InterstellarProbe Aug 16 '18 at 18:25
  • ...software development consultants who work in our development environment, and also security audits. – InterstellarProbe Aug 16 '18 at 18:26

4 Answers


As a solution, you can add code to your class initialization routine that checks for certain conditions and fails if they are not met. For example, check whether a certain certificate is installed on the machine where the code is running:

https://stackoverflow.com/questions/6451658/check-if-end-user-certificate-installed-in-windows-keystore
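
As a rough illustration only (the linked question covers the .NET side; this sketch uses Java's access to the Windows certificate store via the SunMSCAPI provider, and the certificate subject is a made-up placeholder):

```java
import java.security.KeyStore;
import java.security.cert.Certificate;
import java.security.cert.X509Certificate;
import java.util.Enumeration;

final class LibraryGate {
    // Hypothetical subject expected in the certificate installed on the one authorized terminal.
    private static final String EXPECTED_SUBJECT = "CN=Authorized-Terminal-01";

    static {
        // Fail class initialization if the expected certificate is not present.
        if (!authorizedMachine()) {
            throw new IllegalStateException("This library is not usable on this machine.");
        }
    }

    private static boolean authorizedMachine() {
        try {
            // "Windows-MY" is the current user's Windows certificate store (SunMSCAPI provider).
            KeyStore store = KeyStore.getInstance("Windows-MY");
            store.load(null, null);
            for (Enumeration<String> aliases = store.aliases(); aliases.hasMoreElements(); ) {
                Certificate cert = store.getCertificate(aliases.nextElement());
                if (cert instanceof X509Certificate
                        && ((X509Certificate) cert).getSubjectX500Principal().getName()
                                                   .contains(EXPECTED_SUBJECT)) {
                    return true;
                }
            }
        } catch (Exception e) {
            // Any failure to read the store is treated as "not authorized".
        }
        return false;
    }
}
```

As the comment below points out, anyone who can decompile the library can strip this check out again, so treat it as a speed bump rather than real protection.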

Vlad
  • Of course, the hackers will decompile the code and change those conditions. Or they may do it even without decompiling. – Frank Hileman Aug 17 '18 at 01:15

IMHO you are approaching this from entirely the wrong side.

The idea was that it would only be accessible from one terminal on a closed system

Then make sure all software installed on that system can only be accessed by authorized personnel. Make sure no one can "steal" the lib from that terminal by closing any holes like USB ports, unrestricted file system access, and unrestricted file upload.

Also make sure the source code of that lib is only available to you and other authorized developers.

When you do this right, you won't need such an insecure approach as password protection with hard-coded credentials.

Doc Brown
  • I take your point. But, where the code will ultimately reside and how it is accessed is only tangentially within my control. Hence my desire to learn as much as I can about ways to secure access programmatically as well as physically. – InterstellarProbe Aug 16 '18 at 11:19
  • @InterstellarProbe If you cannot control who can access the source code then trying to restrict the binaries is a waste of time and effort. Anything you secure programmatically can be unsecured by changing the source code. – Stop harming Monica Aug 16 '18 at 11:49
  • @Goyo When I say it is only tangentially within my control, I mean that other people in my department are responsible for code deploys. I can suggest where to put it. I can suggest how to secure it. But, ultimately, I will not be the one who actually deploys the code. So, I am trying not to worry about the things I cannot control and focus on the things I can. I want to learn as much about code security as I can before development begins. In my experience, if you do not begin a project with specific security goals in mind, they are extremely difficult to achieve later. – InterstellarProbe Aug 16 '18 at 12:33

.. a class library ... basically a huge backdoor to our software security.

Have you raised these concerns with those Responsible for Security?

The idea was that it would only be accessible from one terminal on a closed system.

A library can be used in any application, from anywhere.

I plan to lock my code with a password

... which puts that password into Source Control, reducing it to just another String variable.
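
Purely as an illustration (the class name and value are placeholders), a hard-coded password check boils down to something like this, and any decompiler, or even a `strings` pass over the compiled binary, exposes it immediately:

```java
final class PasswordGate {
    // This constant lives in source control and in the compiled library;
    // it provides no real protection against anyone who can read either.
    private static final String LIBRARY_PASSWORD = "correct-horse-battery-staple"; // placeholder

    static boolean unlock(String attempt) {
        return LIBRARY_PASSWORD.equals(attempt);
    }
}
```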

... is there a way to make sure that my library binary files cannot be added to a project without a password (or something to that effect)?

Not easily. If anything, you're getting into an area very much like licensing the use of that library.

If this functionality must only be used in one application from one machine, and assuming that those Responsible for Security don't shoot the whole idea down in flames (which they probably ought to) and you do have to write it, then embed the functionality directly into the target application. Don't create a library at all, thereby preventing it from being reused.

Phill W.
  • I'm likely taking an alarmist view as my mind reels at the possibilities of how the code could be misused. I have voiced my concerns and been told by the security consultant that it is an acceptable risk (the code will provide several additional security measures that will actually make the system more secure, so long as no other coders gain access to it). Basically, this code will be responsible for closing multiple points of vulnerability, but in the process become a single point of vulnerability. – InterstellarProbe Aug 16 '18 at 11:24

It sounds like you are attempting "security through obscurity". You are right to be highly skeptical of this idea, because it doesn't work. Instead you should be thinking about authentication and authorization. How do I authenticate users so that I know who is attempting an operation? How do I authorize users to perform some operations and not others?

This is a very old problem, and there are well-known solutions in use today. If the protected resources are highly valuable, then look into two-factor authentication using one-time key generators (e.g. Yubikey). There are multiple open-source solutions for maintaining authorization data.
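
As a very rough sketch of the authorization half (all names here are hypothetical placeholders), the idea is that every sensitive operation is checked against an authenticated identity rather than against which machine the code happens to be running on:

```java
import java.util.Map;
import java.util.Set;

// Sketch only: authentication establishes *who* is acting; authorization decides
// *what* that identity may do. In a real system the grants would come from a
// directory or identity provider, not an in-memory map.
final class OperationGuard {
    private final Map<String, Set<String>> grants; // user -> allowed operations

    OperationGuard(Map<String, Set<String>> grants) {
        this.grants = grants;
    }

    void require(String authenticatedUser, String operation) {
        if (!grants.getOrDefault(authenticatedUser, Set.of()).contains(operation)) {
            throw new SecurityException(authenticatedUser + " may not perform " + operation);
        }
    }
}

// Usage: guard.require(currentUser, "deploy-production");
```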

kevin cline
  • I looked into multi-factor authentication, but I am more worried about subversion than I am about impersonation attacks. I do appreciate the advice, and we certainly use multi-factor authentication elsewhere. – InterstellarProbe Aug 16 '18 at 18:32
  • So what is your threat model? – kevin cline Aug 17 '18 at 07:05
  • I am not a security expert. I was tasked by the security team to look for vulnerabilities, but I have no idea what threat model they are using. I have heard the term VAST thrown around, but I am not sure what that means for the larger security context. I am learning as I go. Internally, I am the only software developer. Most of the software development happens with contractors. So, I see little pieces of everything, and as time goes by, I start to understand how everything fits together into the whole. I have not been at my position long enough to be an expert yet. – InterstellarProbe Aug 17 '18 at 13:12
  • So what resources are you trying to protect, and who are you trying to protect against? Malicious actors outside your organization? Inside? What attacks are you trying to prevent? – kevin cline Aug 22 '18 at 05:19
  • I'll answer the second question first. Prior to my arrival, all development was being maintained by a single contractor. As quality declined, my company hired an internal IT Director. She realized that the contractor was not following industry standards, so we are in the process of gaining control of our code. That means limiting our vendor's access to our systems. Prior to my arrival, their access to production data was finally taken away from them. However, we have discovered they are still accessing our production server and making changes when something breaks. – InterstellarProbe Aug 22 '18 at 13:08
  • So, we asked them not to do that, and so far they seem to be complying. But, I was asked to lock it down because the last time they accessed our production server, they made a mistake and deployed the wrong code. I am the only in house developer, and I am not a security expert. So, as I find areas that can be exploited, I try to come up with solutions to lock them down as much as possible. The solution that this post is based on closes a bunch of security vulnerabilities to our production environment. However, it is also a single point of weakness where if anyone ever finds it, it becomes a – InterstellarProbe Aug 22 '18 at 13:12
  • roadmap to how to circumvent our security. – InterstellarProbe Aug 22 '18 at 13:12
  • I don't understand how any of your problems controlling contractor access to your production systems could possibly be managed by controlling a software library. How are they accessing the system? All common entry points require authorization. Just revoke it. – kevin cline Aug 23 '18 at 08:07
  • You are approaching this from the point of view that we have a full IT team. We do not. We still need them to be in our systems. We need to limit what specifically they can access in our systems and what they can do. – InterstellarProbe Aug 23 '18 at 13:41