
We are developing an application that goes through many testers before reaching our client.

Finally, when it reaches the client, they find some more bugs and report them to us, and this has become a tedious process. There are some bugs which I personally can't fix, because fixing them would require modifying most of the inner code, and I am not even sure the fix would work.

Questions:

  • Why do bugs get reported even after going through so much testing? Is it an issue with our requirements?

  • Our client doesn't seem happy with anything we provide. Are we doing something wrong?

  • Has anyone ever developed an application that was totally bug-free? What is the process? Why can't we deploy the application with minor bugs? Are we supposed to be perfectionists?

  • Is the current scenario the correct process of development and testing? If not, what is an efficient way for developers, testers and the client to get the maximum benefit together?

BoredToDeath

4 Answers


The closer you get to a bug-free application, the more expensive it gets. It's like targeting 100% code coverage: you spend the same amount of time and money getting from 0% to 95% as you do getting from 95% to 99%, and again from 99% to 99.9%.

Do you need this extra 0.1% of code coverage or quality? Probably yes, if you're working on a software product which controls the cooling system of a nuclear reactor. Probably not if you're working on a business application.

Also, making high-quality software requires a very different approach. You can't just ask a team of developers who have spent their lives writing business apps to create a nearly bug-free application. High-quality software requires different techniques, such as formal proof, something you certainly don't want to use in a business app because of the extremely high cost it represents.

As I explained in one of my articles:

  • Business apps shouldn't target the quality required for life-critical software, because if those business apps fail from time to time, it just doesn't matter. I've seen bugs and downtime in the websites of probably every large corporation, Amazon being the only exception. This downtime and those bugs are annoying and maybe cost the company a few thousand dollars per month, but fixing them would be much more expensive.

  • Cost should be the primary focus, and should be studied pragmatically. Let's imagine a bug that affects 5,000 customers and is so serious that those customers will leave forever. Is this important? Yes? Think again. What if I tell you that each of those customers pays $10 per year and that it will cost almost $100,000 to fix the bug? Fixing the bug now looks much less interesting (a quick back-of-the-envelope calculation follows).
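
A quick back-of-the-envelope sketch of that trade-off in Haskell, using only the illustrative figures from the bullet above (nothing here is real data):

```haskell
-- Cost comparison for the hypothetical bug described above.
-- All figures are illustrative and taken straight from the example in the text.

affectedCustomers, revenuePerCustomerPerYear, fixCost :: Double
affectedCustomers         = 5000
revenuePerCustomerPerYear = 10
fixCost                   = 100000

-- Annual revenue at risk if every affected customer really leaves.
revenueAtRiskPerYear :: Double
revenueAtRiskPerYear = affectedCustomers * revenuePerCustomerPerYear  -- 50,000

-- Years of retained revenue needed just to pay for the fix,
-- ignoring discounting, reputation damage and support costs.
breakEvenYears :: Double
breakEvenYears = fixCost / revenueAtRiskPerYear  -- 2.0

main :: IO ()
main = do
  putStrLn ("Revenue at risk per year: $" ++ show revenueAtRiskPerYear)
  putStrLn ("Years to break even on the fix: " ++ show breakEvenYears)
```

The exact numbers don't matter; the point is that the fix costs two full years of all the revenue it could possibly save.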

Now to answer your questions specifically:

Why do bugs get reported even after going through so much testing? Is it an issue with our requirements? Our client doesn't seem happy with anything we provide. Are we doing something wrong?

Lots of things can go wrong. By testing, do you mean actual automated testing? If not, this is a huge problem in itself. Do testers understand the requirements? Do you communicate with the customer on a regular basis (at least once per iteration; at best, the customer's representative is immediately reachable on-site by any member of your team)? Are your iterations short enough? Are developers testing their own code?

In the spirit of the "They Write the Right Stuff" article linked above, take a bug report and study why the bug appeared in the first place and why it was missed by each tester. This may give you some ideas about the gaps in your team's process.

An important point to consider: is your customer paying for bug fixes? If not, he may be encouraged to consider lots of things to be bugs. Making him pay for the time you spend on bugs will then considerably reduce the number of bug reports.

Has anyone ever developed an application that was totally bug-free? What is the process? Why can't we deploy the application with minor bugs? Are we supposed to be perfectionists?

Me. I wrote an app for myself last weekend and haven't found any bugs so far.

Bugs are only bugs when they are reported. So in theory, having a bug-free application is totally possible: if it's not used by anyone, there will be nobody to report bugs.

Now, writing a large-scale application which perfectly matches the specification and is proven to be correct (see formal proof mentioned above) is a different story. If this is a life-critical project, this should be your goal (which doesn't mean your application will be bug-free).

Is the current scenario the correct process of development and testing? If not, what is an efficient way for developers, testers and the client to get the maximum benefit together?

  1. In order to understand each other, they should communicate. This is not what happens in most companies I've seen. In most companies, the project manager is the only one who talks to the customer (sometimes to a representative). Then he shares (sometimes partially) his understanding of the requirements with developers, interaction designers, architects, DBAs and testers.

    This is why it is essential either for the customer (or the customer's representative) to be reachable by anyone on the team (the Agile approach), or to have formal communication channels which authorize a person to communicate only with a few other people on the team, and to do it in a way that the information can be shared with the whole team, ensuring that everyone has the same information.

  2. There are many processes for doing development and testing. Without knowing the company and the team precisely, there is no way to determine which one should be applied in your case. Consider hiring a consultant, or a project manager who is skilled enough.

Arseni Mourzenko
    +1. Before even starting a project, you need to have an understanding of what is "good enough for release" and build accordingly. – Julia Hayward Dec 12 '14 at 10:44
  • @JuliaHayward Couldn't agree more. The end game here isn't zero defects - it is producing functional software that adds value in a timely fashion. – Robbie Dee Dec 12 '14 at 10:47

Not all bugs are created equal, so you need to separate the wheat from the chaff.

Expectations

Many bugs are raised simply due to a shortfall between what the software does and what the end user is expecting. This expectation comes from many places: using other software, incorrect documentation, over-zealous sales staff, how the software used to work, and so on.

Scope creep

It goes without saying that the more you deliver, the greater the potential for bugs. Many bugs are raised simply on the back of new features. You deliver X and Y, but the customer says that on the back of this it should now also do Z.

Understand the problem domain

Many bugs come about for the simple reason that the problem domain was poorly understood. Every client has their own business rules, jargon and ways of doing things. Much of this won't be documented anywhere - it will just be in people's heads. With the best will in the world, you can't hope to capture all this in one pass.


So... what to do about it?

Automated unit tests

Many bugs are introduced as an unexpected side effect of some code change or other. If you have automated unit tests, you can head off many of these issues and produce better code from the outset.

Tests are only as good as the data supplied - so make sure you fully understand the problem domain.
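
A minimal sketch of what such an automated unit test can look like, here in Haskell with HUnit; the `applyDiscount` rule, its threshold and its name are invented for illustration:

```haskell
import Test.HUnit

-- Hypothetical business rule, in integer cents to avoid floating-point noise:
-- invoices above 10,000 cents get a 10% discount.
applyDiscount :: Int -> Int
applyDiscount cents
  | cents > 10000 = cents - cents `div` 10
  | otherwise     = cents

-- A small regression suite: if a later change accidentally moves the threshold
-- or the rate, one of these assertions fails before the customer ever sees it.
tests :: Test
tests = TestList
  [ TestCase (assertEqual "no discount at the threshold"     10000 (applyDiscount 10000))
  , TestCase (assertEqual "10% discount above the threshold" 18000 (applyDiscount 20000))
  , TestCase (assertEqual "small invoices are untouched"      4000 (applyDiscount  4000))
  ]

main :: IO ()
main = do
  _ <- runTestTT tests
  return ()
```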

Code coverage

This goes hand in hand with automated unit testing. You should ensure that as much code is tested as is practical.
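
A hedged illustration of the kind of gap a coverage report exposes (the `parseQuantity` helper is made up; the coverage tool referred to is GHC's hpc):

```haskell
import Text.Read (readMaybe)

-- Hypothetical helper: parse a quantity, rejecting non-numbers and negatives.
parseQuantity :: String -> Maybe Int
parseQuantity s =
  case readMaybe s of
    Just n | n >= 0 -> Just n
    _               -> Nothing   -- the rejection branch is easy to forget in tests

main :: IO ()
main = do
  -- A happy-path-only suite never executes the Nothing branch above;
  -- a coverage report (e.g. GHC's hpc) makes that gap visible.
  print (parseQuantity "42" == Just 42)
  -- Checks like these are what actually raise branch coverage:
  print (parseQuantity "-1"   == Nothing)
  print (parseQuantity "oops" == Nothing)
```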

Learn the lessons

Madness is doing the same thing again and again and again and expecting different results

Do you understand the causes of the last failure? Do you? Really? You may have stopped the problem from occurring, but what was the true root cause? Bad data? User error? Disk corruption? Network outage?

Nothing annoys clients more than encountering the same problems again and again without progress towards some form of resolution.

Robbie Dee

Defects have existed since the beginning of software development. It's hard to tell from your question to what extent, and with what severity, the defects affect usability or functionality.

Defect-free programs exist, but just about any non-trivial system will have defects.

You will have to decide upon some sort of prioritization and will likely have to do some study of the causes of the defects - where they were introduced. There is far too much to discuss about such things in a simple Q&A post.

Entire books have been written about causal analysis and about fixing the process in an organization that has quality problems.

So my recommendations are (in no particular order):

  • Implement a defect tracking system if you do not have one already
  • Determine a way to classify the severity of defects (a minimal sketch follows the list)
  • Figure out why you are not meeting customer expectations (is it the developers, the QA, the customer, etc.?)
  • Learn about exercises like the 'Five Whys' and do a similar investigation into some of the causes of your defects
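
A minimal sketch of what a defect classification could look like as data, so reports can be sorted and triaged; the severity scale, the `Origin` categories and the field names are just one hypothetical way to slice it:

```haskell
import Data.List (sortOn)
import Data.Ord (Down (..))

-- Hypothetical severity scale; the derived Ord makes Cosmetic < Minor < Major < Critical.
data Severity = Cosmetic | Minor | Major | Critical
  deriving (Show, Eq, Ord)

-- Hypothetical guess at where a defect was introduced, for causal analysis.
data Origin = Requirements | Design | Coding | Configuration | Unknown
  deriving (Show, Eq)

data Defect = Defect
  { defectId :: Int
  , title    :: String
  , severity :: Severity
  , origin   :: Origin
  } deriving (Show)

-- Most severe defects first: a crude but useful triage order.
triage :: [Defect] -> [Defect]
triage = sortOn (Down . severity)

main :: IO ()
main = mapM_ print (triage
  [ Defect 1 "Typo on login page"           Cosmetic Coding
  , Defect 2 "Report totals are wrong"      Critical Requirements
  , Defect 3 "Export is slow over 10k rows" Major    Design
  ])
```
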
Tim

It depends on what you call an application.

If you mean an interactive program where you need to be certain that the real-time behaviour is exactly such and such under any given circumstances, then it's basically impossible to prove there aren't any bugs in it. I suppose it would be possible if you could solve the halting problem, but you can't.

However, if you restrict yourself to a statement of "such and such input will eventually yield such and such final state", then your chances of a "bug-free proof" are better, because you can use invariants. That, and only that, allows a correctness proof to be broken down into subproblems, each of which can relatively easily be proven to work correctly under all circumstances arising in the rest of the program (though you generally can't be very precise about how much time and memory it might take).
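
As a small illustration of that divide-and-conquer style (the example is mine, not from any particular proof system): insertion into a sorted list, where the invariant "the input is sorted" lets a short inductive argument show the output is sorted too.

```haskell
-- Invariant: the input list is sorted in non-decreasing order.
-- Claim: the output is sorted and contains x plus every old element.
--
-- Inductive argument over the list:
--   * []     : [x] is trivially sorted.
--   * (y:ys) : if x <= y, then x is <= every element (the input was sorted),
--              so x : y : ys is sorted.
--              Otherwise y <= x and y <= everything in ys; by the induction
--              hypothesis `insert x ys` is sorted, so y : insert x ys is sorted.
insert :: Ord a => a -> [a] -> [a]
insert x []     = [x]
insert x (y:ys)
  | x <= y      = x : y : ys
  | otherwise   = y : insert x ys

main :: IO ()
main = print (insert 3 [1, 2, 4, 5])   -- [1,2,3,4,5]
```

Note that, exactly as the caveat above says, the argument proves the final result and says nothing about how long the recursion takes.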

Such techniques are basically possible in any programming language (though some esoteric ones like Malbolge try to disprove that!), but in all imperative languages it gets messy very quickly, because you have to meticulously keep track of a lot of implicit program state. In functional languages¹, the proofs tend to look much nicer (in pure languages, or the purely functional subset of a language). Still, particularly with dynamic types, you will need to write out a lot of requirements about what inputs are permitted. That's of course one of the main benefits of strong static type systems: the requirements are right there in the code!
Well, ideally, that is. In practice, OCaml or even Haskell programs tend to contain non-total functions, i.e. functions that will crash, hang or throw for certain inputs, despite having the correct type². Even though these languages have very flexible type systems, it's sometimes still not feasible to use them to fully restrict something.
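
A concrete instance of such a non-total function, and the usual Haskell way of making the requirement explicit in the result type (the `safeHead` name is merely conventional):

```haskell
-- Prelude's `head` is non-total: it crashes on the empty list,
-- even though `[a] -> a` is a perfectly valid type for it.
--
--   head :: [a] -> a
--   head []    = error "Prelude.head: empty list"
--   head (x:_) = x

-- A total variant: the possibility of failure is moved into the result type,
-- so every caller is forced by the compiler to handle the empty case.
safeHead :: [a] -> Maybe a
safeHead []      = Nothing
safeHead (x : _) = Just x

main :: IO ()
main = do
  print (safeHead ([] :: [Int]))   -- Nothing
  print (safeHead [1, 2, 3])       -- Just 1
```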

Enter dependently-typed languages! These can "calculate" types precisely as needed, so everything you define can have exactly the type signature that proves all you need. And indeed, dependently-typed languages are mostly taught as proof environments. Unfortunately, I think none of them is really up to writing production software. For practical applications, I think the closest you can get to completely bug-proof is writing in Haskell with functions that are as thoroughly total as possible. That gets you pretty close to bug-proof – albeit, again, only with regard to the functional description. Haskell's unique way of handling IO with monads also gives some very useful proofs, but it generally doesn't tell you anything about how long something will take to finish. Quite possibly, something might take exponential time in particular circumstances – from the user's POV, that would likely be as severe a bug as if the program hung completely.
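
Short of full dependent types, Haskell can already push some of that precision into the input type. A sketch, assuming `Data.List.NonEmpty` from base: its `head` is total because the empty case simply cannot be constructed, which is the same spirit as "exactly the type signature that proves all you need".

```haskell
import Data.List.NonEmpty (NonEmpty (..))
import qualified Data.List.NonEmpty as NE

-- NE.head :: NonEmpty a -> a is total: there is no empty NonEmpty value,
-- so the "empty list" crash cannot even be expressed, let alone triggered.
firstChoice :: NonEmpty String -> String
firstChoice = NE.head

main :: IO ()
main = do
  putStrLn (firstChoice ("primary" :| ["fallback", "last resort"]))
  -- firstChoice []   -- would not compile: [] is not a NonEmpty
```

As above, none of this says anything about running time; it only pins down the functional behaviour.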


¹ Or more generally, descriptive languages. I don't have much experience with logical languages, but I suppose they can be similarly nice as far as proofs are concerned.

² If it's not the correct type, the compiler will never allow it in those languages; that already eliminates a lot of bugs. (And, thanks to Hindley-Milner type inference, it actually makes the programs more concise as well!)

leftaroundabout
  • "If you mean, an interactive program where you need to be certain that the real-time behaviour is exactly such and such under any given circumstances, then it's basically impossible to prove there aren't any bugs in it. I suppose it would be possible if you could solve the halting problem, but you can't.": I am not sure whether this statement is correct. It is impossible to verify an **arbitrary** program, but what about a program you have written in a way that allows such a verification? – Giorgio Dec 12 '14 at 12:00
  • See e.g. http://www.cs.cmu.edu/~rwh/smlbook/book.pdf, at the beginning of page 198: "Finally, it is important to note that specification, implementation, and verification go hand-in-hand. It is unrealistic to propose to verify that an arbitrary piece of code satisfies an arbitrary specification. Fundamental computability and complexity results make clear that we can never succeed in such an endeavor. Fortunately, it is also completely artificial. In practice we specify, code, and verify simultaneously, with each activity informing the other." – Giorgio Dec 12 '14 at 12:07
  • @Giorgio: sure you can write some programs _in a way that allows such a verification_, but that really restricts you quite a lot. In a big program, you'll almost always need to exploit Turing completeness somewhere. — Yes, in practice you specify, code and "verify" simultaneously, but that verification is often enough heuristic (based on e.g. unit tests, not real proofs). – leftaroundabout Dec 12 '14 at 12:08
  • What do you mean by "exploiting Turing completeness"? – Giorgio Dec 12 '14 at 12:11
  • "... but that verification is often enough heuristic (based on e.g. unit tests, not real proofs)": No, if you read the notes carefully, it speaks about proving correctness by means of formal methods (e.g. using induction), it does not speak about unit tests. See also http://compcert.inria.fr/doc. – Giorgio Dec 12 '14 at 12:13
  • @Giorgio as in, as the problem gets bigger and bigger, you'll almost definitely encounter the need for a piece of code the computer won't be able to reason about. – John Dvorak Dec 12 '14 at 12:14
  • @JanDvorak: The main point is that you write the code in such a way that a program can reason about it. E.g. you iterate by means of recursion and provide a corresponding proof by induction. But yes, if you would like to write an arbitrary program and then verify it automatically, this is not possible in general. – Giorgio Dec 12 '14 at 12:15
  • @Giorgio: yes, for recursion you can beautifully apply inductive proofs. But that proves only the _functional specification_, it doesn't tell you anything about how long it might take (the induction might basically reach to infinity). — That point was in fact pretty much the gist of my answer, wasn't it? – leftaroundabout Dec 12 '14 at 12:18
  • Not even Haskell can prove correctness of your code. The computer might be able to prove that a well-written function always converges (finishes without exception) and that the types agree, but it will never be able to tell you meant to subtract instead of adding two numbers. – John Dvorak Dec 12 '14 at 12:18
  • @JanDvorak: Well, if your specification is wrong, there is nothing you can do. When you prove correctness you always prove it wrt to a specification that you assume to be correct. – Giorgio Dec 12 '14 at 12:20
  • @JanDvorak: well, if you swap a plus for minus, a dependently-typed language _can_ still prove whether it's "correct"... only, "correct" might then refer to a different theorem. – leftaroundabout Dec 12 '14 at 12:24
  • @leftaroundabout: Can you give me an example of a practical problem for which you need to use a recursive function (or a while loop), and you are not sure if it terminates or not? – Giorgio Dec 12 '14 at 12:25
  • @leftaroundabout I'm not entirely sure what kind of correctness this would be. Is this an official usage of the term "correct"? – John Dvorak Dec 12 '14 at 12:26
  • @Giorgio: easy, loop over all StackExchange post that can be found. Might terminate if the loop body is faster than the next poster, or might loop forever as long as new posts keep coming in faster than you can process them. – leftaroundabout Dec 12 '14 at 12:27
  • @leftaroundabout: What is the termination condition of such a loop? – Giorgio Dec 12 '14 at 12:29
  • @JanDvorak: it's an official usage of the term "proven". You make some mathematical statement in the form of a type signature, and then the implementation [proves](https://en.wikipedia.org/wiki/Curry%E2%80%93Howard_correspondence) that the statement was correct. Mind, those statements generally don't look much like function signatures as you would find in real-world programs. – leftaroundabout Dec 12 '14 at 12:31
  • @Giorgio: the condition is "there is no new post on StackExchange I haven't processed yet". – leftaroundabout Dec 12 '14 at 12:32
  • @leftaroundabout: A type system can prove certain properties of a program, but they are not the only formal method that you can use to this purpose. Showing that type systems are not sufficient does not show that all formal methods are insufficient. – Giorgio Dec 12 '14 at 12:33
  • @leftaroundabout in other words all it does is to verify that the types match? – John Dvorak Dec 12 '14 at 12:34
  • @leftaroundabout: And what would be the purpose of such a program? – Giorgio Dec 12 '14 at 12:34
  • @Giorgio: dunno, but for sure similar loops can be found in _lots_ of production applications. – leftaroundabout Dec 12 '14 at 12:36
  • @JanDvorak: basically, yes. And as I said, in an "ordinary" statically-typed language, this is _not_ really sufficient to prove correctness. But in a dependently-typed language, you can in fact refine your types so they'll _only_ match in a thoroughly correct program. ("Thoroughly", of course, again ignoring real-world runtime and similar dirt.) – leftaroundabout Dec 12 '14 at 12:38
  • @leftaroundabout: If you do not know the exact purpose of a program you cannot write a precise specification of its expected behaviour. On the other hand, you need a precise specification to prove correctness. A program that loops forever might be correct wrt to a certain specification, think e.g. about a web server. – Giorgio Dec 12 '14 at 12:39
  • -1: "If you mean, an interactive program where you need to be certain that the real-time behaviour is exactly such and such under any given circumstances, then it's basically impossible to prove there aren't any bugs in it.": This is a very strong statement: can you provide some reference to some book or paper where this statement is defined precisely and is proved to be correct? As it is formulated in your answer, it is difficult to judge if it is correct or not. – Giorgio Dec 12 '14 at 12:49
  • @Giorgio: well, looping forever might be correct for some application. Then, of course, terminating would probably be a bug! Real-world applications have _of course_ specifications, often somewhat "fuzzy" ones, but obviously some behaviours are just wrong. I'm not going to boil up some particular example here, but IMO the point is really obvious given the complexity of many real-time systems, in particular when concurrency is involved, and many parts depend on something to be ready at a given time. Such a system can break in all kinds of ways, simply because some function takes a bit too long. – leftaroundabout Dec 12 '14 at 12:55
  • (Again, there are concepts like [STM](https://en.wikipedia.org/wiki/Software_transactional_memory) which allow some proofs even in concurrent systems, but that also has its limits – any setting in which you want to prove something needs a well-bounded scope; the actual physical world simply is not a well-defined mathematical setting.) – leftaroundabout Dec 12 '14 at 12:57