
Should code quality metric evaluation tools like Sonar be integrated with IDE for running local analysis or should they be a part of the build process (like integrated with maven) for continuous inspection or should it be a combination of both? How can we leverage the power of such tools to the maximum extent possible?

Thomas Owens
Geek
  • @downvoter Leaving a comment while downvoting would help. – Geek Mar 20 '13 at 14:11
  • Downvote. In order to "leverage the power of such tools to the maximum extent possible", you first have to know WHAT THE TOOLS MEASURE, and whether that measurement is worth anything. The ugly fact is that just about every software "quality" metric ever proposed/discovered, when applied to real code, has been shown to be strongly to very strongly correlated with raw Source Lines Of Code (raw SLOC). At that point, anything you could learn by measuring that metric, you can learn by counting carriage returns, making the effort of taking the fancy metrics a waste of time, money, and electrons. – John R. Strohm Mar 21 '13 at 04:23
  • @JohnR.Strohm Sonar measures code metrics along seven different axes. All these measures are orthogonal; they do not add up. So it is not merely counting raw lines of code. You need to actually use it to see its real power. – Geek Mar 21 '13 at 05:26
  • I conjecture that you do not understand the meaning of "strongly correlated". If two different measurements are shown to be strongly correlated, then measuring BOTH of them is a waste of time: you can measure one, and you know what the other one will tell you. Since all of the fancy metrics (McCabe, Halstead, ...) are correlated with LOC, you can just measure LOC and have the same actual value. What I'm saying is this: to get REAL benefit, you have to know whether your measurement is worth doing. Code metrics, beyond raw LOC, generally aren't. – John R. Strohm Mar 22 '13 at 02:08

2 Answers


I would say it should be used in both places, if possible. Ideally, your analysis tool catches most code problems on the developer's workstation when the developer runs it locally, keeping the code in source control cleaner. Running it nightly in a batch then finds any problems that do get checked in, and those can be focal points of the next code review - on the assumption that code which is flagged as problematic but still gets checked in is harder to resolve, and should be discussed before being changed from what works to what meets the standards.
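The nightly batch run described above could be wired up roughly like this - a sketch only, assuming a Maven project and a reachable Sonar server (`mvn sonar:sonar` is the Sonar Maven plugin's standard goal; the checkout path, server URL, and schedule are placeholders):

```shell
#!/bin/sh
# Nightly Sonar analysis (a sketch; paths, URL, and schedule are placeholders).
# Schedule it from cron, e.g.:  0 2 * * * /opt/ci/run-sonar.sh
cd /opt/ci/checkout/myproject || exit 1   # hypothetical CI checkout location
git pull --ff-only                        # pick up everything checked in today
mvn clean install sonar:sonar \
    -Dsonar.host.url=http://sonar.example.com:9000   # publish to the dashboard
```

Anything the analysis flags shows up on the dashboard the next morning, ready to be picked over in the code review.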

FrustratedWithFormsDesigner

You need both, as they address linked but distinct situations: The local run is all about making quick improvements while coding or immediately after. It also offers the comfort of your IDE (the best place to browse code, since you have everything nearby). In addition, you can push developers to run it before committing, so you have a first "quality firewall" in place.

The dashboard is all about the team. The fact that it is public helped a lot in my experience - no one wants to be responsible for the "bad" project, so it pushed us toward a kind of virtuous circle. In addition, it is very convenient to be able to walk in, open a browser, and take a look at "how did we do this week?". The dashboard also has the historical view, which is really important (more often than not, the trend matters more than the result - you want to be improving; that is what matters).

As you noted, Sonar actually allows both usages, which is nice (you want the same rules in the IDE and in the build).
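The pre-commit "quality firewall" can be approximated with a Git hook - a minimal sketch, assuming the same Sonar Maven goal is runnable locally and developers accept the extra analysis time (the server URL is a placeholder for your own instance):

```shell
#!/bin/sh
# .git/hooks/pre-commit - run the same Sonar analysis the build server runs,
# so the local check and the dashboard are fed by one set of rules.
# sonar.host.url is a placeholder; point it at your own Sonar instance.
if ! mvn sonar:sonar -Dsonar.host.url=http://localhost:9000; then
    echo "Sonar analysis failed; fix the reported issues before committing." >&2
    exit 1
fi
```

If a full analysis is too slow for every commit, the same hook can be relaxed to run only on the files being committed, or moved to a pre-push hook instead.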

Martin