
I have been given the task of adding QA testing to a massive existing system. We're going to start out with system-level tests and might add unit tests if that is deemed necessary.

I don't really know where to start. I'm thinking of creating a new project in the main solution dedicated entirely to QA testing. From there, I'll just add system-level NUnit tests that can be run automatically from that project.
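
To make that concrete, here is roughly what I picture a first system-level test looking like (NUnit in a dedicated test project; OrderProcessor, Order, and the rest are just placeholder names, not classes from our actual system):

    // QaTests project (hypothetical) -- added to the main solution and
    // referencing the production assembly. All type names are placeholders.
    using NUnit.Framework;
    using OrderSystem;   // stands in for the real production assembly

    namespace QaTests
    {
        [TestFixture]
        public class OrderProcessingSmokeTests
        {
            [Test]
            public void Processing_a_valid_order_succeeds()
            {
                // Drive the system end to end through its public entry point.
                var processor = new OrderProcessor();

                var result = processor.Process(new Order { CustomerId = 42, Quantity = 1 });

                Assert.IsTrue(result.Succeeded, "Expected the happy path to succeed.");
            }
        }
    }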

How does that sound? I want to make sure that I make the right decisions now so that the QA system doesn't get messy and overly complicated in the future. Since it will be running alongside a big system, I anticipate that it too will grow to be a big system, which makes it vulnerable to becoming a cumbersome mess of code.

Any suggestions would be great!

sooprise

3 Answers


A good starting point is the IEEE standard on Software Quality Assurance Plans.

There are many factors to consider --

  1. How do you manage the QA process?

  2. What standards and conventions will your team follow?

  3. What are your metrics? (That is, what items will you measure, and how will you measure and report on them?)

  4. What audits will you perform to assure that tests were performed, your metrics are valid, etc.?

  5. What set of documentation will you expect, and how will you verify that those documents are correct?

  6. What is your overall test plan? (This includes test definitions, schedules, resources required, etc.)

  7. What test equipment, tools, test platforms, etc. will be used?

  8. How do you control the test equipment and test platforms (including calibration and configuration management of the test platform(s))?

  9. How are problems reported?

  10. How do you assign and resolve issues, and how do you track and manage this process?

  11. Configuration management and source code control plans.

  12. Media control for items you publish and distribute. This includes any printed material, material you distribute via CD or DVD, or software and documentation you make available via the web.

  13. If you are using subcontractors and suppliers, how are you managing and monitoring the quality of the product and material they are supplying you with?

  14. Your plan for creating, maintaining, and retaining records of your activities.

  15. Training that will be required for team members.

  16. Risk Management Plan.

Jay Elston
  • -1 to using the IEEE standard as a long-term strategy guide. Industry is moving faster than standards. Besides, that standard is pretty abstract. – Dmitry Negoda May 31 '11 at 17:40
  • I suggested using it as a starting point. The standard covers a comprehensive set of factors that should be considered when creating QA plans. Many of these factors are common to quality assurance in general and are adapted to software development. It is abstract because it merely specifies what should be considered, not any specific technology or process to be followed. You are absolutely correct about industry moving much faster than standards. That is why TCP/IP is no longer used :-) – Jay Elston Jun 01 '11 at 01:37
  • On the other hand, knowing the IEEE standard won't hurt. I knew a company which implemented that standard (among other IEEE standards) but didn't use any automation in its functional testing. That's why I don't believe in the usefulness of the abstract approach taken in the standard. – Dmitry Negoda Jun 06 '11 at 07:43

You're probably going to want at least one unit test assembly per assembly in the system.

You could argue that it might be worth splitting business logic tests into multiple test assemblies, say "BllTests.Customer", "BllTests.Product", "BllTests.Basket", etc.

The worst thing you can do is stick them all in one big assembly; it's the equivalent of having your entire project in one assembly, and it makes things much harder to manage. Your test feedback will likely come from a Continuous Integration server, so you want to be able to pinpoint which test is failing and which part of the system it covers, so you can judge the impact of the failure itself.
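
To make the layout concrete, a rough sketch of a fixture in one of those per-area test assemblies might look like this (the BLL types are invented for the example, not taken from any real codebase):

    // BllTests.Customer -- one test assembly per area of the business logic layer.
    using NUnit.Framework;

    namespace BllTests.Customer
    {
        [TestFixture]
        public class CustomerDiscountTests
        {
            [Test]
            public void Gold_customer_gets_ten_percent_discount()
            {
                var calculator = new DiscountCalculator();   // hypothetical BLL class

                var discounted = calculator.Apply(CustomerTier.Gold, 100m);

                Assert.AreEqual(90m, discounted);
            }
        }
    }

A failure then shows up in the CI report as BllTests.Customer.CustomerDiscountTests.Gold_customer_gets_ten_percent_discount, which immediately tells you which area of the system is affected.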

You DEFINITELY don't want the tests in the same assembly as any production code. If you put them in with the production code, you'll have to deploy a bunch of third-party assemblies to your production server that are of no use to the product itself and just add another possible point of failure!

EDIT

OK, so we're dealing with a monolith. This means you're unlikely to have interface abstraction (i.e. all the classes doing logic implementing an ILogicInterface, so you can test against interfaces instead of trying to run tests against concrete classes).

This is going to be a problem: you'll have to write tests without the standard approach of mocking the dependent interfaces (using something like Rhino Mocks).

I'd suggest the best approach for you is to start small: create just a single test assembly, "BllTests.Customer". Then, slowly, move your implementation towards implementing interfaces, starting right at the bottom of the code (abstracting your data access is a great start). Unit test your small changes (we're talking about 4-5 methods inside a single class), then pick the next small unit of logic and repeat. When you've got a decent chunk of interface-based code, start splitting it into separate assemblies: "Bll.Implementation" and "Bll.Interfaces".
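
As a rough sketch of that first step (every name here is illustrative rather than taken from your code), extracting a repository interface lets the service be tested against a fake instead of the real database:

    using NUnit.Framework;

    public interface ICustomerRepository        // would eventually live in Bll.Interfaces
    {
        Customer GetById(int id);
    }

    public class Customer
    {
        public int Id { get; set; }
        public bool IsActive { get; set; }
    }

    public class CustomerService                // would eventually live in Bll.Implementation
    {
        private readonly ICustomerRepository _repository;

        public CustomerService(ICustomerRepository repository)
        {
            _repository = repository;
        }

        public bool CanPlaceOrder(int customerId)
        {
            var customer = _repository.GetById(customerId);
            return customer != null && customer.IsActive;
        }
    }

    // In BllTests.Customer: test against the interface with a hand-rolled fake.
    // A mocking library such as Rhino Mocks could generate this for you instead.
    [TestFixture]
    public class CustomerServiceTests
    {
        private class FakeCustomerRepository : ICustomerRepository
        {
            public Customer GetById(int id)
            {
                return new Customer { Id = id, IsActive = true };
            }
        }

        [Test]
        public void Active_customer_can_place_an_order()
        {
            var service = new CustomerService(new FakeCustomerRepository());

            Assert.IsTrue(service.CanPlaceOrder(7));
        }
    }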

Progressive enhancement is the name of the game; there's little point trying to unit test spaghetti code, because your unit tests will either be very brittle or not cover enough paths through the code to be useful indicators of whether you have a code problem.

If you can't edit the main codebase (refactoring isn't always an option), I'd still suggest the multiple-test-assembly approach. Try pushing for all new code to go into separate assemblies, to stop the monolith growing even bigger, and in the meantime show how a set of small, coherent assemblies is much easier to manage: lead by example.

Ed James
  • Ed, The massive system that I'm working with actually only has one assembly itself. Would using a multiple assembly test system still make sense under these circumstances? Also, just for extra clarification on what you're suggesting, from your post, I would go to Visual Studio and create a QA solution (separate from production solution), and a project for each test assembly? Did I understand your post correctly? Thanks! – sooprise May 31 '11 at 14:51
  • That's unfortunate, and probably going to make unit testing difficult (I've been in a similar situation myself). I imagine that if everything is in a single assembly you have little separation of concerns or use of interfaces to abstract functionality? I personally like to have the unit tests in the same solution, but there's no reason why you would have to do this other than it being easier to re-run them, since the dependencies will automatically update (you could use build events to update them instead if you like). I'll update my answer anyway. – Ed James May 31 '11 at 14:56
  • Ed, when I create an assembly that is separate from the production assembly, is it ok if it is in the same solution? Also, this idea of abstracting interfaces is completely new to me, can you point me to a good resource to learn more about what you are talking about? – sooprise May 31 '11 at 19:42
  • Like I said, I like to have the test assemblies in the same solution as the code so you can easily run the tests when the code changes by setting a project reference. As for the abstraction-via-interfaces thing, it's difficult to find a single resource, but if you look on Stack Overflow for C# questions about inversion of control and mocking you should get the idea! (This book is also excellent: http://www.amazon.com/dp/1933988274/) – Ed James Jun 01 '11 at 11:34

I would heed the suggestions about separating test assemblies from production code and splitting the test assemblies by component or concern. A Continuous Integration server will also help by running your unit tests on every build and reporting when tests fail, so that the build or the tests can be addressed.

Just remember that a true unit test should be system independent and should test a single functionality assertion of a single component.

You mentioned system tests, which exercise the integration of various components (databases, web services, file resources, etc.) that are specific to an environment rather than to a single component. These are important too, but they should be excluded from the Continuous Integration run, because when they fail they don't necessarily indicate a code or logic problem. Such failures primarily indicate an integration failure in a particular environment (database connection issues, an unreachable web service, a missing file, and so on).
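
One way to keep the two kinds apart (assuming NUnit; the fixtures below are made-up examples) is to tag environment-dependent tests with a category, since most test runners and CI servers can include or exclude tests by category:

    using NUnit.Framework;

    [TestFixture]
    public class VatCalculatorTests
    {
        // A true unit test: no database, no network, no file system.
        [Test]
        public void Vat_is_twenty_percent_of_the_net_amount()
        {
            var calculator = new VatCalculator();   // hypothetical pure-logic class

            Assert.AreEqual(20m, calculator.VatFor(100m));
        }
    }

    [TestFixture]
    [Category("Integration")]   // excluded from the per-commit CI run
    public class InvoiceDatabaseTests
    {
        [Test]
        public void Invoices_can_be_loaded_from_the_reporting_database()
        {
            // Talks to a real database, so a failure here may indicate an
            // environment problem (connection string, server down) rather
            // than a code defect.
        }
    }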

maple_shaft