35

What are the categories of cyclomatic complexity? For example:

1-5: easy to maintain
6-10: difficult
11-15: very difficult
20+: approaching impossible

For years now, I've gone with the assumption that 10 was the limit, and that anything beyond that is bad. I'm analyzing a solution, and I'm trying to gauge the quality of the code. Certainly cyclomatic complexity isn't the only measurement, but it can help. There are methods with a cyclomatic complexity of 200+. I know that's terrible, but I'm curious about the lower ranges, like in my example above.
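
To make the numbers concrete, here is a minimal Java sketch of how the metric is usually counted, assuming the common rule of one plus the number of decision points (the class and method names are invented for illustration):

```java
// Hypothetical example: counting cyclomatic complexity with the common rule
// "start at 1, add 1 per decision point (if, loop, case, catch, &&, ||, ?:)".
public final class OrderValidator {

    // Decision points below: if (+1), || (+1), if (+1), for (+1), if (+1)
    // -> 1 (base) + 5 = cyclomatic complexity of 6
    public static boolean isValid(String id, int quantity, int[] lineTotals) {
        if (id == null || id.isEmpty()) {
            return false;
        }
        if (quantity <= 0) {
            return false;
        }
        for (int total : lineTotals) {
            if (total < 0) {
                return false;
            }
        }
        return true;
    }
}
```

Note that tools count slightly differently (some ignore the short-circuit operators, for instance), which may be part of why published thresholds don't all agree.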

I found this:

The aforementioned reference values from Carnegie Mellon define four rough ranges for cyclomatic complexity values:

  • methods between 1 and 10 are considered simple and easy to understand
  • values between 10 and 20 indicate more complex code, which may still be comprehensible; however testing becomes more difficult due to the greater number of possible branches the code can take
  • values of 20 and above are typical of code with a very large number of potential execution paths and can only be fully grasped and tested with great difficulty and effort
  • methods going even higher, e.g. > 50, are certainly unmaintainable

When running code metrics for a solution, the results show green for anything below 25. I disagree with this, but I was hoping to get other input.

Is there a generally accepted range list for cyclomatic complexity?

David Harkness
Bob Horn
  • 2
    You found data from the Software Engineering Institute, an organization that is recognized as a leader in software engineering. I don't understand what your question is - you found a range list for cyclomatic complexity. What else are you looking for? – Thomas Owens Apr 05 '13 at 19:14
  • 1
    I've seen various ranges; that was just one example. And MS shows "green" for anything under 25. I was wondering if there was *one* accepted range list. Perhaps I've found it then. – Bob Horn Apr 05 '13 at 19:16
  • 1
    I agree with @ThomasOwens, but I am glad you asked this question. I upvoted it as both a question and an answer. – Evorlor Mar 13 '15 at 18:49
  • 1
    In the 2nd edition of Steve McConnell's Code Complete, he recommends that a cyclomatic complexity from 0 to 5 is typically fine, but you should be aware if the complexity starts to get into the 6 to 10 range. He further explains that for anything over a complexity of 10, you should strongly consider refactoring your code. – GibboK Dec 04 '15 at 08:20

2 Answers

23

I suppose it depends on the capabilities of your programming staff, and in no small part on your sensibilities as a manager.

Some programmers are staunch advocates of TDD, and will not write any code without writing a unit test first. Other programmers are perfectly capable of creating perfectly good, bug free programs without writing a single unit test. The level of cyclomatic complexity that each group can tolerate is almost certainly going to vary substantially.

It's a subjective metric; evaluate the setting on your Code Metrics solution and adjust it to a sweet spot that you feel comfortable with, one that gives you sensible results.

Robert Harvey
  • 3
    Agreed, furthermore it depends on what is the cause of the complexity. A big switch statement that calls other functions, as part of a state machine or something similar, can have a very high complexity, despite possibly being virtually trivial to understand. – whatsisname Apr 05 '13 at 19:44
  • 1
    Aren't big switch statements typically an indication of a lack of OOP principles, such as polymorphism? State machines can be implemented in elegant ways, with OOP or design patterns. No? – Bob Horn Apr 05 '13 at 20:19
  • 3
    For some problems that 'elegance' is useful, for others it just makes things more confusing. There is no silver bullet. – whatsisname Apr 05 '13 at 23:03
  • @BobHorn But that would not reduce the cyclomatic complexity. Whether you replace the switch with a class member function that invokes different behavior through polymorphism (typical C++ solution), or with a function pointer jump table of some kind (typical C solution), etc., you still have the very same number of execution paths. So there is no obvious relation between cyclomatic complexity and OO design. –  Jan 28 '14 at 15:23
  • @Lundin You still have the same number of execution paths, but not all in one method. I would think polymorphism (instead of a switch) would make that one method easier to understand. – Bob Horn Jan 28 '14 at 15:26
  • @BobHorn There is no telling. Suppose your base class needs to spawn 100 different kinds of offspring just to move the complexity away from the switch statement. We would only have moved the complexity elsewhere. Readability and maintainability may get better or worse from that; I think it depends on the specific case. The only thing we can be certain of is that we spawn a big virtual method table, which likely makes the program less efficient. –  Jan 28 '14 at 15:35
  • 4
    -1 For "Other programmers are perfectly capable of creating perfectly good, bug free programs without writing a single unit test." You cannot know it's bug free if you have not tested it. – Sebastien Dec 20 '16 at 19:38
  • 1
    @Sebastien: The absence of unit tests doesn't mean that it's not tested. And yes, if you're good enough, it's absolutely possible to write bug free code with no tests or a rudimentary smoke test. Admittedly, those people are a rare breed. – Robert Harvey Dec 20 '16 at 19:39
  • 1
    +1 For "Other programmers are perfectly capable of creating perfectly good, bug free programs without writing a single unit test." Because it's a perfectly reasonable truism. – Works for a Living Jan 23 '18 at 02:51
  • Please, oh please, [start studying Software Testing at least from google](https://www.google.com.br/search?q=software+testing) if you are Sebastien or agree with him. Unit testing is not the only tool or strategy, nor a silver bullet (how many times do people need to be reminded that there are no silver bullets?). As a matter of fact, I'd rather choose a good, open-minded dev who isn't currently using, e.g., TDD than an [HDD](https://news.ycombinator.com/item?id=14171629) adherent who fails to handle boundary/corner cases. – Andre Figueiredo Feb 20 '18 at 20:14
14

There are no predefined categories, and no categorization would be possible, for several reasons:

  1. Some refactoring techniques just move the complexity from one point to another (not from your code to the framework or a well-tested external library, but from one location of the codebase to another). This helps reduce cyclomatic complexity and helps convince your boss (or any person who loves presentations with constantly improving graphs) that you spent your time making something great, but the code stays as bad as it was before.

  2. Conversely, when you refactor a project by applying design and programming patterns, cyclomatic complexity can sometimes get worse, even though the refactored code is expected to be clearer: developers know the patterns (at least they are expected to), so the patterns simplify the code for them, but cyclomatic complexity doesn't take this into account.

  3. Other, non-refactoring techniques don't affect the cyclomatic complexity at all, while severely decreasing the complexity of the code for developers. Examples: adding relevant comments or documentation, or "modernizing" the code by using syntactic sugar.

  4. There are simply cases where cyclomatic complexity is irrelevant. I like the example given by whatsisname in his comment: some large switch statements can be extremely clear, and rewriting them in a more OOPy way would not be very useful (and would complicate the understanding of the code for beginners). At the same time, those statements are a disaster, cyclomatic complexity-wise (see the sketch after this list).

  5. As Robert Harvey already said above, it depends on the team itself.
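
To make points 1 and 4 concrete, here is a minimal Java sketch (the dispatcher, event and handler names are invented for illustration). The dispatch method scores roughly one plus the number of cases, yet it is trivial to read; replacing the switch with a map or with polymorphic event classes lowers the score reported for that one method while leaving the program's overall number of execution paths unchanged.

```java
// Hypothetical event dispatcher: its cyclomatic complexity is roughly
// (number of cases + 1), yet the method is trivial to read, test and extend.
public class EventDispatcher {

    enum EventType { CONNECT, DISCONNECT, MESSAGE, HEARTBEAT, ERROR }

    static final class Event {
        final EventType type;
        final String payload;
        Event(EventType type, String payload) { this.type = type; this.payload = payload; }
    }

    public void dispatch(Event event) {
        switch (event.type) {                         // each case adds one path
            case CONNECT:    handleConnect(event);    break;
            case DISCONNECT: handleDisconnect(event); break;
            case MESSAGE:    handleMessage(event);    break;
            case HEARTBEAT:  handleHeartbeat(event);  break;
            case ERROR:      handleError(event);      break;
            default:         handleUnknown(event);    break;
        }
    }

    // Replacing the switch with a Map<EventType, Consumer<Event>> or with
    // polymorphic Event subclasses lowers the number reported for dispatch(),
    // but the program keeps the same execution paths overall: the complexity
    // moves elsewhere rather than disappearing (point 1 above).

    private void handleConnect(Event e)    { System.out.println("connect " + e.payload); }
    private void handleDisconnect(Event e) { System.out.println("disconnect " + e.payload); }
    private void handleMessage(Event e)    { System.out.println("message " + e.payload); }
    private void handleHeartbeat(Event e)  { System.out.println("heartbeat " + e.payload); }
    private void handleError(Event e)      { System.out.println("error " + e.payload); }
    private void handleUnknown(Event e)    { System.out.println("unknown event"); }
}
```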

In practice, I've seen source code that had good cyclomatic complexity but was terrible. At the same time, I've seen code with high cyclomatic complexity that I had little trouble understanding.

It's just that there is no tool, and couldn't be any, that would flawlessly indicate how good or bad a given piece of code is, or how easy it is to maintain, just as you can't write an application that will tell you that a given painting is a masterpiece while another should be thrown away because it has no artistic value.

There are metrics which are broken by design (like LOC or the number of comments per file), and there are metrics which can give some rough hints (like the number of bugs or the cyclomatic complexity). In either case, those are just hints, and they should be used with caution.

Arseni Mourzenko
  • 2
    +1 I agree with everything said. Cyclomatic complexity or LOC are just metrics that get handed to you by static code analysis. Static analysers are great tools, but they lack common sense. These metrics need to be processed by a human brain, preferably one belonging to an experienced programmer. Only then can you tell if a particular piece of software is needlessly complex. –  Jan 28 '14 at 15:30