
I'm currently planning a CI/CD project with Azure DevOps (using Git; the code is already committed) for an old solution that contains 17 C# projects.

We have access to the source code, and we would need to write all the unit tests from scratch (the system was not designed with them in mind); however, as advised in this article:

Integration Testing made Simple for CRUD applications with SqlLocalDB

The best solution is to perform integration tests rather than unit tests, for several reasons:

  • It has been in production for a considerable amount of time (more than 5 years) without major changes; it is a legacy system.
  • There is not enough documentation for the entire system.
  • Support is limited to minor bugs.
  • It integrates several technologies: C#, SQL, ASP.NET MVC, console applications, SAP, etc.
  • Most of the people originally involved in the project are no longer around; therefore, knowledge of the business logic is minimal.
  • There would be thousands of cases to evaluate, which would mean a considerable amount of money and time.

I'd like to know if anyone has related experience or advice on how to perform these tests. What approach should I follow?

In my case, I'd like to focus specifically on the business logic, such as the CRUD operations, but what should this involve? A parallel database for storing data? Any specific technology, such as xUnit, NUnit, or MSTest? Or how would you handle it?

P.S.

I see a potential issue with the article above, since it uses SQL Server LocalDB, and I've read that LocalDB is not supported in Azure; it's probably the same in Azure DevOps.


2 Answers


You need a test environment where the solution can be executed without affecting whatever is in production. If they have a procedure for that, somebody can set it up for you; otherwise, you'll have to figure it out yourself. And I suspect this case will be on the figure-it-out side.


Set up the test environment

That test environment includes a database. You can set up a local database for this purpose.

Ideally, the solution knows the schema it needs... although that is not common, so you will probably have to ask whoever handles the production database to export the schema so you can set it up in the test environment.
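
For illustration, a minimal C# sketch of creating such a test database and applying an exported schema script. It assumes SQL Server LocalDB is available; the database name and the schema.sql path are placeholders:

    // A minimal sketch, assuming SQL Server LocalDB and a schema.sql export.
    using System;
    using System.Data.SqlClient;
    using System.IO;
    using System.Text.RegularExpressions;

    public static class TestDatabase
    {
        private const string MasterConnectionString =
            @"Data Source=(localdb)\MSSQLLocalDB;Initial Catalog=master;Integrated Security=True";

        public static void Create(string databaseName, string schemaScriptPath)
        {
            Execute(MasterConnectionString, $"CREATE DATABASE [{databaseName}]");

            // Exported scripts typically contain GO batch separators, which
            // client tools understand but the server does not, so split on them.
            var builder = new SqlConnectionStringBuilder(MasterConnectionString)
            {
                InitialCatalog = databaseName
            };
            var script = File.ReadAllText(schemaScriptPath);
            foreach (var batch in Regex.Split(script, @"^\s*GO\s*$",
                         RegexOptions.Multiline | RegexOptions.IgnoreCase))
            {
                if (!string.IsNullOrWhiteSpace(batch))
                    Execute(builder.ConnectionString, batch);
            }
        }

        private static void Execute(string connectionString, string sql)
        {
            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(sql, connection))
            {
                connection.Open();
                command.ExecuteNonQuery();
            }
        }
    }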

Next, you need to figure out the connection. Ideally it is not hard-coded, so you can edit a configuration file to point to your test database... although sometimes that is not the case. Well, that's the first thing to change, I guess. Make the change on your own branch (I'm assuming they have version control; if they don't... oh boy). That change should not go to production before your test environment is working (in case you broke something) and you have the approval of the system admin (who needs to double-check the configuration for the release environment).
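
A sketch of what that could look like, assuming the solution resolves its connection string by name through ConfigurationManager (common in .NET Framework apps; the name "MainDb" is a placeholder):

    // A minimal sketch. The test project overrides the connection string
    // in its own App.config:
    //
    //   <connectionStrings>
    //     <add name="MainDb"
    //          connectionString="Data Source=(localdb)\MSSQLLocalDB;Initial Catalog=MyAppTests;Integrated Security=True" />
    //   </connectionStrings>

    using System.Configuration;   // requires a reference to System.Configuration

    public static class Config
    {
        public static string MainDbConnectionString =>
            ConfigurationManager.ConnectionStrings["MainDb"].ConnectionString;
    }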

Once it is connecting to the database, you need data. Ideally, the solution can start with an empty database... although, again, that is not common. See if there is any documentation of what the solution needs to start (usually it will be just a few entries; perhaps there is a script to run somewhere). If there is no documentation, you will have to figure it out from the error logs or messages. I'm assuming there are logs; if there aren't, put "add error logs" on a TODO list, or in the issue tracker, kanban board, or whatever they use. And document whatever the solution needs to run.
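
Once you have figured out the minimal seed data, it is worth scripting it so the finding is documented and repeatable. A sketch, with hypothetical table names:

    // A minimal sketch; the tables and rows are placeholders for whatever
    // you discover the solution actually needs to start.
    using System.Data.SqlClient;

    public static class SeedData
    {
        public static void Apply(string connectionString)
        {
            using (var connection = new SqlConnection(connectionString))
            using (var command = connection.CreateCommand())
            {
                connection.Open();
                // The minimal rows the solution needs to start, discovered
                // from the error logs and documented here.
                command.CommandText = @"
                    INSERT INTO Users (Id, Name, IsAdmin) VALUES (1, 'admin', 1);
                    INSERT INTO Settings ([Key], [Value]) VALUES ('SchemaVersion', '1.0');";
                command.ExecuteNonQuery();
            }
        }
    }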

Note: you may have to do similar work for other external systems. For example, is the code supposed to send email? Then you need a mail server to connect to and a mailbox you can check programmatically.
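
A sketch of the idea for email, assuming a local capture server such as smtp4dev is listening on a known port during test runs (the host, port, and addresses are placeholders; any fake SMTP server works):

    // A minimal sketch; localhost:2525 is where the fake SMTP server is
    // assumed to listen. Production reads the real host from configuration.
    using System.Net.Mail;

    public static class TestMail
    {
        public static void SendThroughCaptureServer()
        {
            using (var client = new SmtpClient("localhost", 2525))
            using (var message = new MailMessage(
                "noreply@example.test", "user@example.test", "subject", "body"))
            {
                client.Send(message);
            }
            // How you verify delivery depends on the capture tool; smtp4dev,
            // for instance, lets you inspect the messages it has received.
        }
    }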


Getting ready to write tests

All of that was just setting up the environment; you have not written a single test yet. Although you are going to be doing integration tests, an automated test tool is a good idea (despite the fact that plenty of them say "unit" in their name). So pick one.

Oh, by the way, depending on the projects, you may need a tool that can move the mouse pointer and send keystrokes, or perhaps one that lets you manipulate a browser programmatically, or something like that.
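
For the ASP.NET MVC project, that could mean driving a browser with something like Selenium. A minimal sketch, assuming the Selenium.WebDriver NuGet package; the URL and element ids are placeholders:

    // A minimal sketch of browser automation against the test environment.
    using OpenQA.Selenium;
    using OpenQA.Selenium.Chrome;

    public class BrowserSmokeTests
    {
        public void LoginFlowWorks()
        {
            using (IWebDriver driver = new ChromeDriver())
            {
                driver.Navigate().GoToUrl("http://localhost:8080/Account/Login");
                driver.FindElement(By.Id("UserName")).SendKeys("testuser");
                driver.FindElement(By.Id("Password")).SendKeys("secret");
                driver.FindElement(By.Id("LoginButton")).Click();
                // Assert on the landing page with your test framework of choice.
            }
        }
    }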

Your initial tests will treat the system as a black box: they check that you do something and something happens. Plenty of things could be idempotent. The usual recommendation is to undo during arrange (in the Arrange-Act-Assert pattern); however, given that the system was not designed for testing, you may need a teardown after the asserts. The AAA pattern is for unit tests; sometimes integration tests have to go out of line.
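
A minimal sketch of such a black-box test with teardown after the asserts, here in xUnit (any framework works; CustomerRepository is a hypothetical stand-in for whatever the solution exposes, and Config.MainDbConnectionString comes from the test config):

    using System;
    using Xunit;

    public class CustomerCrudTests : IDisposable
    {
        // Hypothetical wrapper around the solution's data access.
        private readonly CustomerRepository _repository =
            new CustomerRepository(Config.MainDbConnectionString);
        private int _createdId;

        [Fact]
        public void CreatingACustomer_MakesItReadable()
        {
            // Arrange + Act: do something...
            _createdId = _repository.Create("ACME Corp");

            // Assert: ...and check that something happened.
            var customer = _repository.GetById(_createdId);
            Assert.Equal("ACME Corp", customer.Name);
        }

        public void Dispose()
        {
            // Teardown after the asserts: remove whatever the test created,
            // since the system gives us no way to undo during arrange.
            if (_createdId != 0) _repository.Delete(_createdId);
        }
    }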


What to test

Test the documentation. There are documented requirements for the solution, right? Well, there is a chance there aren't, or that they exist only in somebody's head. If there are, write tests to assert every part of them, including, and especially, the exceptional paths.
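
For instance, a documented exceptional path can be pinned down like this (xUnit again; the requirement, the exception type, and the repository are placeholders):

    using System;
    using Xunit;

    public class DocumentedRequirementTests
    {
        [Fact]
        public void CreatingACustomerWithAnEmptyName_IsRejected()
        {
            var repository = new CustomerRepository(Config.MainDbConnectionString);

            // Hypothetical documented requirement: empty names are rejected.
            Assert.Throws<ArgumentException>(() => repository.Create(""));
        }
    }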

For the parts with no documentation, try to characterize the behavior of the system. Start with a guess of what it should do, based on experimentation, reading the code, and talking to people. If you can figure out what it should do, write a test for that. If you can't figure it out... I suggest writing a test anyway: a test that passes when the system does what it currently does, placed in a category of its own, so that the next time you see it you know you were not sure.
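
A sketch of such a characterization test, marked with a category so it is easy to find later (the repository and values are placeholders):

    using Xunit;

    public class CharacterizationTests
    {
        [Fact]
        [Trait("Category", "Characterization")]
        public void OrderTotal_MatchesCurrentBehavior()
        {
            var repository = new OrderRepository(Config.MainDbConnectionString);

            // Observed by experimentation; nobody could confirm it is intended.
            var total = repository.CalculateTotal(orderId: 42);

            Assert.Equal(95.00m, total);
        }
    }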

Test the interaction with other systems. If something should affect an external system, check it does.

Test everything that is exposed and try to increase coverage. You do not need 100% coverage, but some coverage on all public parts is good.


I could go on about refactoring or change management... but I would ramble a lot. I know, because I wrote it all out and deleted it. So, hopefully I have covered what you are interested in.

Theraot

Theraot is taking a practical and technical approach; I would like to focus on what to test first.

Ask yourself what you are afraid to break. Your application is a relational database application, so everything can be measured by the state of your database.

I would create some databases with known states to start out with. Then perform some operations and save the state of the database again. Now you have two states.

An integration test would involve attaching to the source-state database, performing the same operations, and then comparing the resulting database with the one saved earlier. If there is a difference, something has changed, and you can easily find what is different using a compare tool.

You want a couple of these database pairs and a couple of tests focusing on different aspects, so it won't be too hard to track exactly which operations are broken if you find a mismatch.
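
A minimal sketch of that comparison, using CHECKSUM_AGG as a cheap first pass before reaching for a dedicated data-compare tool. The table names, the Config helper, and the operation under test are placeholders:

    using System;
    using System.Data.SqlClient;
    using Xunit;

    public class DatabaseStateTests
    {
        [Fact]
        public void Operations_ProduceTheExpectedState()
        {
            // Arrange: restore the saved source state into the working
            // database (restore step omitted here), then act.
            RunOperationsUnderTest();

            // Assert: the working database should match the saved result state.
            foreach (var table in new[] { "Customers", "Orders", "OrderLines" })
            {
                Assert.Equal(
                    TableChecksum(Config.ExpectedStateConnectionString, table),
                    TableChecksum(Config.WorkingConnectionString, table));
            }
        }

        private static void RunOperationsUnderTest()
        {
            // Placeholder: invoke the application code being tested.
        }

        private static long TableChecksum(string connectionString, string table)
        {
            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(
                $"SELECT CHECKSUM_AGG(CHECKSUM(*)) FROM [{table}]", connection))
            {
                connection.Open();
                var result = command.ExecuteScalar();
                return result == null || result == DBNull.Value
                    ? 0L
                    : Convert.ToInt64(result);
            }
        }
    }

A checksum mismatch only tells you that a table differs; a compare tool then pinpoints the rows, which is exactly the workflow described above.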

Martin Maat