10

At my local university, there is a small student computing club of about 20 students. The club has several small teams with specific areas of focus, such as mobile development, robotics, game development, and hacking / security.

I am introducing some basic agile development concepts to a couple of the teams, such as user stories, estimating the complexity of tasks, and continuous integration with automated builds and testing on top of version control.
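
As a concrete illustration of the continuous-integration part, here is a minimal sketch of the kind of automated test a CI server would run on every push. The `slugify` helper and the pytest-style tests are hypothetical examples for the club, not code from any real project.

```python
# test_slugify.py: a minimal example of the kind of automated test a CI
# server (Jenkins, GitHub Actions, etc.) could run on every push.
# The slugify() helper is purely hypothetical, for illustration only.

import re


def slugify(title: str) -> str:
    """Turn a user-story title into a URL-friendly slug."""
    slug = title.strip().lower()
    slug = re.sub(r"[^a-z0-9]+", "-", slug)  # collapse runs of other characters
    return slug.strip("-")


def test_slugify_basic():
    assert slugify("Add Login Page!") == "add-login-page"


def test_slugify_collapses_whitespace():
    assert slugify("  Fix   build   errors ") == "fix-build-errors"
```

Running this locally is just `pytest`; the point of CI is simply that every commit triggers the same automated checks.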

I am familiar with some basic development life-cycles, such as waterfall, spiral, RUP, and agile, but I am wondering whether there is such a thing as a software development life-cycle for hacking / breaching security. Hackers are certainly writing computer code, but what is the life-cycle of that code? I don't think they would be too concerned with maintenance, since once the vulnerability has been found and patched, the code that exploited it is useless.

I imagine the life-cycle would be something like:

  1. Find gap in security
  2. Exploit gap in security
  3. Procure payload
  4. Utilize payload

What kind of differences (if any) are there for the development life-cycle of software when the purpose of the product is to breach security?

StuperUser
David Kaczynski
  • Who says there is any formality in hacking whatsoever? – ratchet freak Oct 15 '12 at 13:42
  • Dang, four good answers already. It's going to be hard to pick just one. – David Kaczynski Oct 15 '12 at 13:55
  • @DavidKaczynski you could also consider asking this on [security.se], to get the viewpoint of those actually designing the various types of software. And there are big differences, depending on the security requirements... – AviD Oct 15 '12 at 16:21
  • @AviD thanks, I think I got some excellent replies here regarding the fact that development life-cycles for invasive software are not inherently different. I would like to learn more about the goals or options of invasive software once security is breached, like infecting the computer with a virus, creating a backdoor, or imitating a user to obtain data. – David Kaczynski Oct 15 '12 at 16:26
  • @DavidKaczynski but my point is that it *is* inherently different - or rather, developing one type is different from developing another type. See e.g. Terry's answer as an example, and compare those further to viruses, and again to zero-days, and again to Stuxnet, and... Some would be properly engineered, some are thrown out overnight, depending on the different contexts and requirements. – AviD Oct 15 '12 at 16:32
  • @AviD I think I see. It seems to me that the life-cycle for developing a thorough, well-engineered piece of software does not depend on whether it is a virus. However, if the exploit is time-sensitive (such as a zero-day attack), then life-cycles be damned, it gets put into production as soon as it's working. – David Kaczynski Oct 15 '12 at 16:38
  • Exactly. There is of course the other aspect, which you mentioned, regarding maintenance - if it is not intended to have a long lifetime, that's a lot different from something that is still a product to be reused, versioned, tested, etc. Of course weaponized exploits (e.g. Stuxnet) would be different from both, crimeware different yet, and so on. – AviD Oct 15 '12 at 16:50
  • @ratchetfreak I'm sure in general there isn't, but some place that does it professionally, like the NSA, may have a method to their madness. – Rig Oct 15 '12 at 17:17

5 Answers

7

What type of code are you talking about?

There are many security tools used in the process of hacking, including scanners like nmap, sqlmap, Nessus and many others. I would imagine they follow the same kind of software life-cycle as any other application.

On the other hand, there is exploit code: code written to take advantage of a very specific vulnerability and situation. I very much doubt such code needs any life-cycle at all. However, a lot of exploit code is also integrated into a larger exploitation framework like Metasploit.
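
To make that contrast concrete: a one-off proof of concept is usually a throwaway script, while framework integration forces a module to honour a stable interface, metadata, and tests, which is exactly where a life-cycle creeps back in. The sketch below is a generic, hypothetical plugin contract in Python; it is not Metasploit's actual API (Metasploit modules are Ruby classes with their own conventions) and it deliberately contains no exploit logic.

```python
# A hypothetical, deliberately empty plugin contract. It sketches why code
# that lives inside a framework needs versioning, metadata and tests in a
# way that a one-off proof-of-concept script does not.

from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass(frozen=True)
class ModuleInfo:
    name: str           # human-readable module name
    version: str        # the framework tracks and upgrades modules over time
    references: tuple   # advisory / CVE identifiers the module documents


class FrameworkModule(ABC):
    """Contract every module must honour to be loaded by the framework."""

    info: ModuleInfo  # concrete modules set this as a class attribute

    @abstractmethod
    def check(self, target: str) -> bool:
        """Report, non-destructively, whether this module applies to a target."""

    @abstractmethod
    def run(self, target: str) -> str:
        """Perform the module's action and return a report for the operator."""


# A registry like this is what turns one-off scripts into reusable modules,
# and reusable modules are the ones that end up versioned, tested and maintained.
REGISTRY: dict[str, type[FrameworkModule]] = {}


def register(cls: type[FrameworkModule]) -> type[FrameworkModule]:
    """Class decorator that makes a module discoverable by name."""
    REGISTRY[cls.info.name] = cls
    return cls
```

The design point, not the code, is what matters: once something has to plug into a registry and a stable interface, it acquires the maintenance, versioning and testing concerns of ordinary software.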


After a discussion with @AviD, I would like to add a few points.

It will be very different depending on the specific situation.

Some exploit code might be rushed out to take advantage of the window before the zero-day is patched. Code might be rushed out for other reasons as well. See CRIME - How to beat the BEAST successor? for a great example of this: a person wrote a piece of PoC code to quickly prove his point. No software life-cycle methodology is applied to code like that.

Weaponized malware like Stuxnet or Flame probably does follow one. Packaged software like Metasploit does.

So the right answer is... it depends.

Ayrx
  • We have not had a formal meeting yet to discuss goals or possible avenues of breaching security, so I cannot say what type of code we would be developing (or if we would use existing software/technology to meet our goals). I am still interested to learn what types of formal techniques there are to take advantage of a compromised system, like creating backdoors, imitating users, infecting the computer with a virus, etc. I suppose that type of question may be more suited for [IT Security](http://security.stackexchange.com/) – David Kaczynski Oct 15 '12 at 16:30
3

I don't see why there should be any specifically different development life-cycle depending on the purpose of the product.

Software that is developed to breach security can have as long a life as any other type of software and will require the same amount of maintenance and work.

Different creators of such software will adopt different life-cycles depending on their needs.

Oded
3

The development models that you specify are just that - development models. They are extremely useful when you are doing engineering development - when you have requirements, when you have to create or modify system architectures or component designs, when you need to build or modify a product and associated tests, and when you release to a customer.

I'm not sure that these models can be directly applied to more research-oriented projects, where you are trying to answer questions or learn more about a system (or the system's security weaknesses, in your particular case).

I would suspect that the iterative/incremental models, such as the agile methods and the Spiral model, would be the most useful to form a basis. In each iteration, you can work toward answering questions or defining more parameters to work with, which might or might not include writing any code. Various scientific research methods might also provide an interesting foundation.

Thomas Owens
1

A life-cycle is never code-dependent; it depends instead on other factors like:

  1. Time
  2. Budget
  3. Nature of Customer
  4. Nature of Product

In your scenario, an agile life-cycle methodology would be most useful, because you need to involve your customer during development and verify the acceptable quality parameters of your product. An agile methodology would help you immensely to improve your hacking software by gathering your customer's feedback and then working on an incremental basis.

Maxood
  • This seems a little subjective. Are you suggesting that other Lifecycle methods DON'T involve the customer during development or verify acceptable quality parameters? Of course that isn't unique to Agile. – Jay Stevens Oct 15 '12 at 17:33
1

Hacking has recently seen a strong professionalization, moving away from single hackers doing it "for the lulz" or to gain fame, and towards collaboration between specialists with the goal of making money. The result has been fully-fledged commercial "hacking kits" like the Blackhole exploit kit, where specific software weaknesses can be easily integrated like plugins. I'd assume that such products are developed pretty much exactly like any other software product.

There is also apparently a developing market for zero-day exploits.

Michael Borgwardt