29

Imagine a simple AngularJS REST service which retrieves (GET) data from REST endpoints on a server. It maintains no state of its own, and each method only passes back a promise to whoever is using the service.
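
Something like this minimal sketch, just to make it concrete (the module, service, and endpoint names are placeholders, not my real code):

    // A thin, stateless wrapper around $http: each method just returns a promise.
    angular.module('app').factory('itemService', ['$http', function ($http) {
      return {
        getItems: function () {
          // Resolves with whatever the server sent back; no state is kept here.
          return $http.get('/api/items').then(function (response) {
            return response.data;
          });
        }
      };
    }]);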

Now, should I write tests for this service? If I do, what exactly have I tested other than my mocked httpBackend? I suppose I could test that the interface exists as documented, but that isn't really anything I would expect to break.

I am quite new to this (testing) and would appreciate any advice those more experienced than I could contribute.

This question is not asking how to unit test a web service; it is more focused on what I should test, especially when the client service has little to no state of its own.

This question is a duplicate of "Is it Worth Unit Testing an API Client", which also contains a more appropriate answer.

mccainz
  • 439
  • 4
  • 9
  • 7
    remember, unit tests aren't the only tests at your disposal! this sounds like a good candidate for some integration tests. – sara May 25 '16 at 18:45
  • 1
    Strangely, that question seems to be largely about mocking a database. – candied_orange May 25 '16 at 18:49
  • @gnat , I had read that question before asking this one. My question is more about what-to-test rather than how-to-test. – mccainz May 25 '16 at 18:50
  • Extending @kai's point, it may be interesting to have some performance or concurrency testing – Borjab May 26 '16 at 08:25
  • 1
    The more "pointless" you think a test is, the more important it probably is. I can't tell you how many times I have seen a test failed, that should "never fail". – doug65536 May 26 '16 at 09:24

6 Answers

37

Yes.

In this case, simply ensure that the web service returns whatever data the mock provides.

There is value in doing this, even if it seems trivial or boring. What if someone later adds logic that changes the data? Boom: failed test. Now you have a discussion about whether the web service or the unit test needs to be updated. That is better than troubleshooting data problems in a production environment, where you might not even look at the web service "because it just returns data verbatim." Until it does not.
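
A minimal sketch of such a test, assuming a service like the one sketched in the question and angular-mocks' $httpBackend (itemService, the app module, and /api/items are placeholder names):

    describe('itemService', function () {
      var itemService, $httpBackend;
      var mockItems = [{ id: 1, name: 'widget' }];

      beforeEach(module('app'));
      beforeEach(inject(function (_itemService_, _$httpBackend_) {
        itemService = _itemService_;
        $httpBackend = _$httpBackend_;
      }));

      afterEach(function () {
        $httpBackend.verifyNoOutstandingExpectation();
        $httpBackend.verifyNoOutstandingRequest();
      });

      it('resolves with exactly what the endpoint returned', function () {
        $httpBackend.expectGET('/api/items').respond(200, mockItems);

        var resolved;
        itemService.getItems().then(function (data) { resolved = data; });
        $httpBackend.flush();

        // If someone later adds logic that alters the data, this is the test that fails.
        expect(resolved).toEqual(mockItems);
      });
    });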

  • But wouldn't the failed test have to occur in some scenario in which I am using the mocked service in league with a controller? Or, are you saying the unit test for the service should validate the JSON etc. returned from the mock backend? – mccainz May 25 '16 at 19:02
  • 3
    Unit tests are for one unit of code. You are thinking of an integration test. Simply call the web service with a mock back-end controlled by the test. The unit test knows the inputs and the expected outputs; compare the two: that is the unit test. –  May 25 '16 at 19:11
  • 3
    I am trying to make sure I understand your scenario, as it appears to me you are stating that my service should be verifying the structure of the mocked data. This is exactly the exercise that seems futile to me and led me to this question. The service will never even touch this data other than to resolve it in the context of a promise. – mccainz May 25 '16 at 19:14
  • Clients also get responses, right? These are the scenarios to test: successful ones and also failures. Mock wrong responses. It will help you to make a solid client. It also helps to try unexpected responses that are hard to replicate at run time. So yes, I would do them. – Laiv May 25 '16 at 19:17
  • Ok, I think I am tracking with the intent now. Going to let this question brew another 24 hours. – mccainz May 25 '16 at 20:45
  • @mccainz is right, with one addition: what if something gets between the client and the server? If the wrong data is returned, that might be a reason to throw an error or halt all operations. Hackers love sites that assume that any data being passed from the front end or coming back from the DB *must* be valid. – Raydot May 25 '16 at 22:49
  • 2
    @mccainz: "The service will never even touch this data". Fine, so write a unit test asserting that the service doesn't touch the data. Not that the data from the mock is "correct", just that the data from the unit is identical to the data from the mock. Now you're testing two things: that the unit passes on data, and that it doesn't interfere with it. If anyone in future alters the unit so that it fails to pass the data on, or it modifies the data somehow (maybe they put it in and out of some persistent storage which accidentally transcoded it), your tests will catch that. Job done. – Steve Jessop May 26 '16 at 09:42
  • In fact, you could take this to the extreme of asserting that this unit must pass on data from the mock *even if it is not well-formed JSON*, and include a case testing that. This is an alternative to what Dave Kaye is talking about, the possibility that you might in future want this unit to do some kind of validation. If validation of the data format is explicitly someone else's responsibility and this unit has the single responsibility of hooking up to the remote server independent of data format, all you need to unit test is that it hooks up to the mock given it. – Steve Jessop May 26 '16 at 09:48
  • 2
    I appreciate your use of the word boom. – Price Jones May 26 '16 at 11:55
  • @mccainz There are also a lot of ways I can think of offhand to make this "simple passthrough" fail. What if I ask for 1GB of data? What if the network is slow? What if I make 1000x requests a second? What is the encoding of the data, will this ever matter to your passthrough? What about malformed data? – enderland May 26 '16 at 13:16
  • As I understand this answer now it seems to state that I should load my client service with precondition checks. If this is true then this is an exceedingly bad idea and seems to conflate the client API with a data validation service. I am finding this answer more exactly matches my question http://programmers.stackexchange.com/questions/252748/is-it-actually-worth-unit-testing-an-api-client – mccainz Jun 01 '16 at 15:11
8

Before you get the answer you are looking for, you need to decide where you (or your company) stand on the spectrum of testing:

  • On the far right is something like Test Driven Development, which says that for every line of code you write, you must first have a failing test that the new code makes pass.
  • Somewhere in the middle are other schools of thought that treat the code as a black box and test only the values that come out.
  • On the far left you have no tests, or sparse tests.

With that aside, what you are talking about testing is a boundary: something between your code and someone else's (a web server, a database, etc.).

Ask yourself, would it be valuable to have tests for the code that receives some model (JSON?) coming from the web server and translates it into a data model that you can more easily work with (a JavaScript model, or POJO?)? To me it would be somewhat valuable, but not as important as testing my inner layers (DataLayer, NetworkLayer, Business Logic, etc.). And there is a cost to maintaining those tests. Even at my most hardcore TDD workplaces, we had boundaries beyond which we did little or no testing.
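
As a sketch of what that boundary translation might look like (the JSON field names and toItemModel are made up for illustration), the part that can be cheaply tested is the mapping itself, with no HTTP involved at all:

    // Hypothetical translation at the boundary: raw server JSON in, app-friendly model out.
    function toItemModel(json) {
      return {
        id: json.item_id,
        name: json.display_name,
        createdAt: new Date(json.created_ts * 1000)
      };
    }

    describe('toItemModel', function () {
      it('maps server JSON to the view model', function () {
        // No HTTP mock needed: the mapping is plain code.
        var model = toItemModel({ item_id: 7, display_name: 'widget', created_ts: 0 });
        expect(model.id).toBe(7);
        expect(model.name).toBe('widget');
      });
    });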

One of the great things about well-written tests is that they help you find what is causing a crash or bug quickly. However, for something that close to a boundary, if the code is SOLID, the problem should already be easy to find; it would probably only break when the third-party code changed the JSON coming back, not within your code. So to me a test would not be merited.

Two thoughts in closing:

You don't need TDD/tests to have good architecture, but good TDD & tests ensure (enforce) good architecture.

Testing on the boundaries often gets messy and hard to maintain. In general we have focused on writing good tests on the inner layers (DataLayer, NetworkLayer, Business Logic, etc.).

Beltalowda
  • 103
  • 4
Herbal7ea
  • 204
  • 1
  • 1
  • 1
    "which says for every line of code you write, you must have a failing test that some change or new line of code can fix." That's a weird way to put it, I feel. TDD is just writing your tests as mirrors of your requirements/checkers for what the behavior is supposed to be, and then you write your code such that it satisfies those tests. So if something changes in the requirements, you change the tests first and then modify your code until the tests all pass again. You can still refactor and modify your code (for performance, ease of maintenance, etc.) as long as it doesn't cause regression. – JAB May 25 '16 at 21:21
  • 4
    I appreciate this answer, but disagree with its conclusion. Integration tests are very valuable exactly because testing your inner layers well will push your bugs out into these boundaries. Having high level tests that validate the *system* works together as a whole is just as valuable, if not arguably more so, than the unit tests themselves. – RubberDuck May 25 '16 at 23:16
  • @JAB "which says for every line of code you write, you must have a failing test that some change or new line of code can fix." -> I am just referring to the Red/Green/Refactor process. For me, TDD is all about working through the design of the problem (literally, after jotting down thoughts on what I think is needed, my tests set the expectations, and dictate a lot of what my code should be). I like what you said about mirroring, because my tests and code feel so closely connected when using TDD. – Herbal7ea May 27 '16 at 15:14
  • @RubberDuck I did not say don't do any Integration Tests (ITs) [though the question is about Unit tests]. I do think ITs have value, but ITs are of lower importance in the testing pyramid. Issues with boundaries can still be detected as you go up the layers. They can also be discovered through regular use of the product. There is a cost to every test, part of it being how often it needs to be changed. Again, people need to decide where they are on the test spectrum in order to determine what level of testing they are committed to. – Herbal7ea May 27 '16 at 15:52
  • Given unlimited resources and time, sure, test everything. Unfortunately, there will always be boundaries where the value you get out of tests diminishes, and things become difficult to maintain. I refer to Uncle Bob's Humble Object design pattern, which is used to insulate against these boundaries. So, I suggest that people decide where those boundaries are, and insulate against them. – Herbal7ea May 27 '16 at 15:52
  • Integration tests are absolutely **not** a lower priority than unit tests, there are just *less* of them. having less of them doesn't mean they're a lower priority. You seem to have completely missed my point. Also, users are not QA. Don't treat them as such. It's unprofessional. – RubberDuck May 27 '16 at 15:56
  • @RubberDuck I have said nothing about QA or making the user QA. I have not covered the whole development team (vs. just testing), of which a QA person or team is an important part. I hope to one day have that job where the whole testing stack is as highly valued as your view. Unfortunately, mine is influenced by deadlines, budgets, tools, and varying levels of experience across the team, all of which affect or limit the level of testing. I have taken a pragmatist's view, after realizing it is a spectrum; people need to figure out what fits their tools, platform, and business needs. – Herbal7ea May 28 '16 at 18:33
  • @Herbal7ea I don't have unlimited resources either. I also have taken a pragmatic view. Well written integration tests can get you a lot more bang for your buck in way of code coverage and bugs caught before QA. That's all I'm trying to get across here. – RubberDuck May 28 '16 at 18:39
5

Say you just hired me. Say I update 5 different things at once. Now something is broken. It happens to be this AngularJS REST service. But we don't know that. What test could you have written in the past that would help us diagnose the problem today?

You can mock your REST endpoint: something stable for the service to talk to that produces predictable responses.
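
For example, a predictable mock also lets you pin down the unhappy path. This sketch assumes the same hypothetical itemService and $httpBackend setup as the test sketch above:

    it('rejects the promise when the endpoint returns an error', function () {
      $httpBackend.expectGET('/api/items').respond(500, { message: 'boom' });

      var failure;
      itemService.getItems().catch(function (err) { failure = err; });
      $httpBackend.flush();

      // A diagnostic breadcrumb: if this starts failing, the error path changed.
      expect(failure.status).toBe(500);
    });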

candied_orange
  • 102,279
  • 24
  • 197
  • 315
  • 2
    "What test could you have written in the past that would help us diagnose the problem today?" ... I like that. It's simple and sticks to the roof of my brain like peanut butter. In fact, it is now on my whiteboard. – mccainz May 26 '16 at 13:39
5

If the service really is that simple, the tests should be relatively easy to write. However, through the process of writing them, you often find out it's not as trivial as you thought. There are often boundary or race conditions you miss that come to the surface while writing automated tests, conditions that are really difficult to hit in a production environment.

Also, if you end up slowly adding features to this library, as is inevitable, there's going to come a point where you suddenly realize you'd really like unit tests, and it will be difficult to add them then. Doing it from the start has close to zero, some even say negative, marginal cost.

Karl Bielefeldt
  • 146,727
  • 38
  • 279
  • 479
0

Test boundary cases. What if the returned data is huge? Empty? Garbled somehow? What if it never returns at all?

Failing to test every point where assumptions you are making could be wrong is how things like buffer-overrun security bugs happen.
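
A sketch of two of those boundary cases for the kind of pass-through service in the question, again assuming the hypothetical itemService and $httpBackend setup from the earlier sketches:

    it('passes an empty payload through unchanged', function () {
      $httpBackend.expectGET('/api/items').respond(200, []);

      var resolved;
      itemService.getItems().then(function (data) { resolved = data; });
      $httpBackend.flush();

      expect(resolved).toEqual([]);
    });

    it('passes a garbled (non-JSON) payload through verbatim', function () {
      // If pass-through really is the contract, the service must not choke on this.
      $httpBackend.expectGET('/api/items').respond(200, 'not-json-at-all');

      var resolved;
      itemService.getItems().then(function (data) { resolved = data; });
      $httpBackend.flush();

      expect(resolved).toBe('not-json-at-all');
    });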

keshlam
  • 223
  • 1
  • 5
0

I would look for boundary cases where your system in fact fails to "just pass through" the data. For example, we had such a system that worked fine until it received a value that could only be expressed as a 32-bit unsigned integer; when that value was written out in decimal and read by a Java program (where all integers are signed), the read failed.

Does the system handle nulls correctly? Does it fail gracefully if a connection goes down? Etc.