
After a period in my life when I made it a point of honor to reject all programming principles and patterns, I finally came to the conclusion that I was indeed being arrogant and ignorant... so I'd do best to actually learn the best practices of this craft.

Given this, when my boss tasked me with writing an app, I tried to finally write a comprehensive test suite that would cover as much as possible of the app's expected behavior. Since it was the first time in my life I had attempted this, it was taking me a long time - still, I pressed on, having had it hammered into my head by many people far more experienced than me that most companies would outright reject any code lacking such comprehensive test coverage, for that reason alone.

Then my boss came by. And said the opposite... He stressed that I was supposed to stop writing tests!

It's not that he does not want any tests. On the contrary, he requires them. However, he elaborated, he only wants tests that check if the application is, in general, working; testing all edge cases is 'not what we do here'.

I am surprised by this, as it is contrary to all I've been taught. According to my current understanding, what my boss wants is... the worst of both worlds?!

To check if the app is, in general, working, it is enough to simply launch it! Automated tests are required, as far as I understand, precisely to check the edge conditions that would be awkward to check manually. Even though writing tests takes time, it is supposed to pay off in the future.
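To make the distinction concrete, here is a toy sketch (not my actual app; `parse_quantity` and its rules are made up purely to illustrate what I mean):

```python
# Hypothetical illustration: one broad "it basically works" test vs. edge-case tests.
import pytest

def parse_quantity(text: str) -> int:
    """Parse a user-supplied quantity string; invented helper for illustration."""
    value = int(text.strip())
    if value < 0:
        raise ValueError("quantity must be non-negative")
    return value

def test_smoke():
    # The kind of broad check my boss seems to want: the happy path works.
    assert parse_quantity("3") == 3

@pytest.mark.parametrize("bad_input", ["", "   ", "-1", "3.5", "lots"])
def test_edge_cases(bad_input):
    # The awkward-to-check-manually inputs I thought were the point of automating.
    with pytest.raises(ValueError):
        parse_quantity(bad_input)
```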

Given what my boss wants, as it (possibly fallaciously) seems to me, we still have to pay the (albeit diminished) cost of tests - the time to write them - while reaping almost no benefits, since the edge cases are not tested. If anything, this gives the company a false sense of security. Thus, when writing these simplistic and very general tests he wants, I can't shake off the feeling that I'm wasting time.

I am also not supposed to optimize the tests so that they run quickly. He says the test suite is only run from time to time. But I was always told that a test suite must be fast, since it is supposed to be run very frequently - each time one makes any modification to the code - so it should take a few tens of seconds at most! According to my boss it is fine if the test suite takes 10 minutes to run. Again, he said that running the test suite each time one makes a minor modification to the code 'is not what we do here'.

What am I missing here? What am I failing to understand? I'm just confused at the moment; I feel I'm failing to grasp the principles and goals of automated testing at all.

gaazkam
  • _"According to my boss it is fine if the test suite takes 10mins to run."_ You must have a very weak CI system setup. – πάντα ῥεῖ Nov 25 '19 at 18:53
  • Your boss probably wants to find the right balance for you, which depends heavily on the specific kind of application. It makes a big difference whether you create software for the next space shuttle mission or software for generating Christmas postcards. And the number and quality of the tests you write should be seen in relation to the money your company could lose in case of a bug. – Doc Brown Nov 25 '19 at 19:32

3 Answers


As with many things... It Depends.

What's your application? Does it handle input from people who might be villains? Does it handle valuable data that villains might want to steal? What will happen if it crashes?

If your application only handles data from trusted people, and if it's no big deal if it crashes, then sure, you can skimp on the tests. And, indeed, there are a lot of applications like that.

But if your application might be attacked by villains, they will attack the edge cases, in the hopes of getting it to misbehave in interesting ways. You ask the user for their name... what happens if the answer is a megabyte long? Villains will do that kind of thing. You ask for their name... what happens if the value includes "%s"? Villains will do that kind of thing.

Be very careful when you decide whether your input comes from trusted sources - all too often, the software that knows that it's dealing with an untrusted source hands off data to software that doesn't know that it's untrusted. A word processor is safe, because the user doesn't want to hurt themselves, right? But what about when they get a file from another user, or download a file from a web site?
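As a minimal illustration (a hypothetical Python sketch - `format_greeting`, its length limit, and its contract are invented here, not taken from the question):

```python
# Hypothetical sketch: edge-case tests for hostile input to a name field.
import pytest

MAX_NAME_LENGTH = 256

def format_greeting(name: str) -> str:
    """Build a greeting without ever treating the name as a format string."""
    if len(name) > MAX_NAME_LENGTH:
        raise ValueError("name too long")
    # Never interpolate user data as a format template itself.
    return "Hello, {}!".format(name)

def test_rejects_megabyte_name():
    with pytest.raises(ValueError):
        format_greeting("A" * 1_000_000)

def test_percent_s_is_treated_as_literal_text():
    # "%s" must come back verbatim, not be interpreted as a format directive.
    assert format_greeting("%s") == "Hello, %s!"
```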

What happens if your application crashes? Does it just say "oops", and the user restarts it, and all is well, or does the user lose hours (or days or years) of work? Could a failure cost real money, or cost lives?

Who are your users, and how many of them are there? If your application only crashes in obscure cases that a random user will only run into once a year, then if you have a million users somebody, somewhere, will have the application crash every thirty seconds. Will they call your support desk when the application crashes?

As for detailed testing versus module testing versus end-to-end testing... it can be very difficult to tell, from the module or end-to-end level, what the interesting edge cases are at the low levels. Also, when you take a low-level function and use it in a higher-level module, the higher-level module might not exercise all of the low-level function's cases. If you don't test the low-level function directly, you might have some case that isn't reachable from today's higher-level module. Tomorrow somebody changes the higher-level module, and it can now reach those cases and encounter previously-inaccessible bugs.

Is the engineer who wrote the low-level function still available? Is there anybody available who is familiar with that low-level function? How risky will it be to fix the low-level function now that there are a bunch of higher-level modules depending on it?
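A toy sketch of that situation (hypothetical names and logic, assuming Python with pytest; not from the answer itself):

```python
# Hypothetical: a low-level branch that today's higher-level caller never reaches.
import pytest

def split_amount(total_cents: int, parts: int) -> list[int]:
    """Low-level helper: split an amount into near-equal shares."""
    if parts <= 0:
        raise ValueError("parts must be positive")  # unreachable via today's caller
    base, remainder = divmod(total_cents, parts)
    return [base + 1] * remainder + [base] * (parts - remainder)

def split_bill(total_cents: int, diners: list[str]) -> dict[str, int]:
    """Higher-level module: currently guards against the empty case itself."""
    if not diners:
        return {}
    return dict(zip(diners, split_amount(total_cents, len(diners))))

def test_split_amount_rejects_zero_parts():
    # Tested directly: if tomorrow's version of split_bill drops its guard,
    # this low-level contract is still pinned down.
    with pytest.raises(ValueError):
        split_amount(1000, 0)

def test_split_amount_preserves_total():
    assert sum(split_amount(1000, 3)) == 1000
```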

What's the lifespan of the application? Is it going to be run a few times, and then discarded? Is it going to be run for three months? Three years? Thirty years?

As for how fast the tests should be... what's the functionality that you're testing? How many variables are there, and how expensive is the functionality to execute? If you're testing a process that takes an hour to execute, then it's kind of hard for your test to take less than an hour. If it only takes a second to execute, but there are a million interesting combinations, the test is going to take a couple of weeks. What's the goal of the test suite, and when is it going to be run? Is it a self-test that will run every time the end user starts the product? Are developers going to run it after every change, no matter how small? Is the test team going to run it once per release (assuming no failures!)?

Sorry to not give a definitive answer, but this is one of those things where you and your organization must look at your own situation and make your own decisions on the trade-offs involved.

Jordan Brown
  • Just to add a real world example of the answer: food and drug safety is rigorously tested, but paper towels and baseball caps are not put through the same rigorous tests because the danger of sending out a bad product is massively less impactful. – Flater Nov 28 '19 at 11:33

Read up on the concept of the test pyramid; an example article is here: https://martinfowler.com/articles/practical-test-pyramid.html

Specifics differ from article to article (like naming and exact boundaries), but the general idea is that you want a broad base of fast unit tests, a middle segment of integration tests, and a small tip of end-to-end tests. Most non-technical people focus ONLY on the end-to-end tests; that's the only thing they see. And end-to-end tests are by definition slow. Unit tests should be fast!
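A minimal sketch of the lower two layers (hypothetical shop domain, assuming Python with pytest and the standard-library sqlite3 module; the end-to-end tip would drive the whole running app and is omitted here):

```python
# Hypothetical sketch contrasting two pyramid layers; names are invented.
import sqlite3

def net_price(gross: float, vat_rate: float) -> float:
    """Pure logic - ideal territory for many fast unit tests."""
    return round(gross / (1 + vat_rate), 2)

def save_order(conn: sqlite3.Connection, item: str, gross: float) -> None:
    """Touches a real database - territory for fewer, slower integration tests."""
    conn.execute("CREATE TABLE IF NOT EXISTS orders (item TEXT, gross REAL)")
    conn.execute("INSERT INTO orders VALUES (?, ?)", (item, gross))

# Base of the pyramid: fast, isolated, no I/O.
def test_net_price():
    assert net_price(119.0, 0.19) == 100.0

# Middle of the pyramid: wires code to a real (in-memory) database.
def test_save_order_roundtrip():
    conn = sqlite3.connect(":memory:")
    save_order(conn, "book", 119.0)
    assert conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0] == 1
```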

The best advice I can give you is: while you're still learning, do it on the side, or within a project where it's not so visible - preferably Test-Driven Development style. Adding tests for old code is extremely hard. Adding them for new, isolated code is easy! Once you've got the principles of TDD down, you can start writing tests for old code. There are a lot of great testing blogs out there (Clean Code from Uncle Bob, and Google's Testing on the Toilet are my favorites).

Once you've got some experience, I recommend Working Effectively with Legacy Code: https://www.amazon.de/Working-Effectively-Legacy-Robert-Martin/dp/0131177052

Benjamin
  • That's funny, because what you say is yet again contrary to what the people who were hammering into my head that I absolutely *must* write comprehensive tests were saying. According to them, unit tests were often a waste of time, since they had to change each time the code was refactored. Instead, they stressed module-level tests and end-to-end tests, which were supposed to capture the behavior of the app independently of how it is written. – gaazkam Nov 25 '19 at 20:35
  • You should decouple your tests and your code under test. Sounds confusing? I recommend the following article: https://blog.cleancoder.com/uncle-bob/2017/10/03/TestContravariance.html – Benjamin Nov 25 '19 at 20:40

Every company has different acceptance criteria for code that will be released. Your manager should clearly define their expectations for what should be tested and how much time you should spend testing. Think of your tests like your features: you should clearly know your requirements before you start coding. How you will test the code will change the design, so all of these things should be considered up front.

That said, every manager I have worked for has used similar language but has expected very different results. As with TDD and agile, each company may use the same language to describe its processes, but the implementation varies widely from place to place and even team to team. Some managers will define their testing criteria as no bugs and 100% coverage, and they may have check-in metrics around this. Some places will hold you personally responsible for issues found outside of your testing. So you really need to have a good understanding of what is expected of you, especially when working with older code that has no unit tests.

I highly recommend using tools like code-coverage analysis to help yourself. I often found myself trying to determine when I had tested enough, and I like to have metrics around which parts of the code I have tested, which conditions were not tested, and so on, and to write tests that inject bad data to hit those conditions. If you have almost no time to develop tests, then focus on the "Golden Path" - the most likely usage scenario. That should always work, without excuse; it's often the sequence of steps you performed to decide you had successfully implemented the feature. Then you can get into edge cases, or things that you believe will cause exceptions or undesirable behavior.
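For example (a hypothetical sketch - `register_user` and its rules are invented; assuming pytest, and coverage.py for the coverage report via `coverage run -m pytest` followed by `coverage report -m`):

```python
# Hypothetical sketch: golden-path test first, then bad-data tests for the
# branches a coverage report shows the golden path never touches.
import pytest

def register_user(username: str, age: int) -> dict:
    """Create a user record; rejects obviously bad data."""
    if not username:
        raise ValueError("username required")
    if age < 0 or age > 150:
        raise ValueError("implausible age")
    return {"username": username, "age": age}

# Golden path: the scenario you walked through to decide the feature was "done".
def test_golden_path_registration():
    assert register_user("alice", 30) == {"username": "alice", "age": 30}

# Then inject bad data to hit the remaining, untested conditions.
@pytest.mark.parametrize("username, age", [("", 30), ("alice", -1), ("alice", 999)])
def test_bad_data_is_rejected(username, age):
    with pytest.raises(ValueError):
        register_user(username, age)
```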

Again, to keep to your question: you need to discuss this with your manager. Some managers don't want you adding unit tests (batteries of tests for individual methods, covering every possible input and output); some do. In my experience, most managers at least expect some integration tests - tests that verify a feature is working as expected and that can cross multiple functional objects. And again, the level of detail may vary based on the time available.

I personally like to always put in some basic integration tests that verify my feature is working and that call into methods other developers may modify - for example, if you're pulling data, calling other methods, etc. A few valid-input tests to cover yourself are always worth the effort (IMO). I've had many instances where people modified code outside of my control, without my knowledge, and when a field issue arose I was the one blamed. Always allow enough time to ensure your code still works after things change - at the least so that when other people make modifications, they will find out during development that they broke something indirectly. How many edge cases you cover depends on the time allowed and on the project's needs.
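A small sketch of what I mean (hypothetical code, assuming Python with the standard-library sqlite3 module; the data layer and feature are invented):

```python
# Hypothetical: a thin integration test over code that other people may change.
import sqlite3

def fetch_prices(conn: sqlite3.Connection) -> list[float]:
    """Data layer owned by another team - could change under you."""
    return [row[0] for row in conn.execute("SELECT price FROM products")]

def total_inventory_value(conn: sqlite3.Connection) -> float:
    """My feature: depends on the data layer above."""
    return sum(fetch_prices(conn))

def test_total_inventory_value_via_data_layer():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE products (price REAL)")
    conn.executemany("INSERT INTO products VALUES (?)", [(2.5,), (7.5,)])
    # Exercises my feature *through* the shared data layer, so a breaking change
    # there shows up here during development, not as a field issue.
    assert total_inventory_value(conn) == 10.0
```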

mminneman