
What type of SLOC do you take into account for estimating web application development projects with COCOMO II?

For instance, suppose you have to estimate a web application project that will probably result in those SLOC counts, based on what you know about similar previous projects:

  • Python (back-end REST API) - 10'000 SLOC
  • JavaScript (front-end single page app) - 12'000 SLOC
  • Handlebars (templating) - 8'000 SLOC
  • Sass (CSS preprocessing) - 3'000 SLOC
  • JSON/XML/YAML (data and configuration) - 1'000 SLOC
  • YAML (server deployment recipes and tasks) - 1'000 SLOC

Would you only enter Python and JS SLOC or would you take everything into account?

Thomas Owens
Jivan

  • As a general rule, I suggest reading [Software Estimation: Demystifying the Black Art](http://www.amazon.com/Software-Estimation-Demystifying-Developer-Practices/dp/0735605351), which has a significant amount of information on many estimation techniques and their strengths and weaknesses. Estimation is not "one and done"; rather, it builds on a history of estimations to help refine future ones. –  Aug 15 '15 at 20:23
  • @MichaelT I'm currently reading this book and this is what led me to such a question :) thanks for the helpful answer and comment – Jivan Aug 15 '15 at 20:28
  • You might want to consider using function points as the calculation in COCOMO instead of SLOC, which would give you a more technology-neutral, generic system to build on. COCOMO was designed in the days of giant C or assembly projects where only one language was being used. It has difficulty in today's world of polyglot programming, where it takes 10 lines to do something in one language and 100 to do it in another (but the 100 don't take any longer to write). –  Aug 15 '15 at 20:39
  • @MichaelT, "function points", last I looked, were not at all well-defined, and they are not at all amenable to post-construction counting. The reasons SLOC keeps winning these wars is that (a) SLOC are almost ridiculously easy to count, and (b) SLOC is very strongly correlated to everything else that has been proposed. – John R. Strohm Aug 17 '15 at 18:21
  • @JohnR.Strohm the old ones were not well defined at all and much more of a old ETL data processing mindset (number of fields on a record, number of inputs, number of outputs, amount of calculation). [More modern ones can be impossibly complex](https://en.wikipedia.org/wiki/IFPUG) (you need to hire a consultant to count them for you - they're known as "Certified Function Point Specialist"). The idea still remains - use something as a proxy for the complexity that you measure against instead which can be the source for the estimate rather than SLOC in a polyglot environment. –  Aug 17 '15 at 18:27
  • @MichaelT: If your chosen tool can only be used with the help of a dedicated high-dollar consultant, you probably chose the wrong tool. As an alternative, consider Detailed COCOMO, which still uses SLOC, but allows you, among other things, to partition your estimate by, say, implementation language, do individual estimates for "subsystems" in each language, then roll up a final total estimate. – John R. Strohm Aug 17 '15 at 19:07
  • @JohnR.Strohm You indeed could do that. However, one should then endeavor to do all the calculations by hand so that you can identify whether it needs to be the composite project SLOC or the module SLOC. In particular, when you get into parts such as the [effort equation](http://www.softstarsystems.com/overview.htm) you get things like `2.94 * EAF * (KSLOC)^E`, and if one calculates the effort for each language independently and then sums it, they will get a different answer than if they sum all the lines first. –  Aug 17 '15 at 19:11
  • @MichaelT: If you swing a hammer, and you don't know how to swing it, the odds are good that you're going to hit your thumb. Same thing goes with software estimates. – John R. Strohm Aug 17 '15 at 19:19
  • @JohnR.Strohm yep. Which is why Steve McConnell is less thrilled with COCOMO as an estimation tool - too many fiddly bits and knobs to adjust, which makes it hard to build an accurate historical record of "this much code/complexity went to that much time." When you can tweak the "use of software tools" or "personnel continuity" knobs (for example), it holds very few things constant where historical trend and iteration could help refine the estimate. Still, if one is going to use COCOMO, it's important to get one's head around all the numbers that go into it so you don't get garbage in, garbage out. –  Aug 17 '15 at 19:25
  • @MichaelT: Can we agree that the same comment applies to Function Points, doubled and vulnerable? (Recall your comment about Function Points REQUIRING a high dollar consultant, which is where this discussion started.) – John R. Strohm Aug 17 '15 at 19:49
  • @JohnR.Strohm function points, yep agree that they are complex and impractical for many situations. I *have* seen a consulting shop that had an individual who was certified as such who was constantly part of their scope, sizing, and cost process (and they gave good, tight estimates that they hit). They do have utility, just not practicality in most situations. –  Aug 17 '15 at 20:18
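The divergence described in the comments above can be seen directly by plugging numbers into the effort equation. Here is a minimal sketch in Python, assuming the nominal coefficient `A = 2.94` from the linked overview, a nominal effort adjustment factor `EAF = 1.0`, and a placeholder exponent `E = 1.0997` (in COCOMO II, E is derived from the project's scale factors, so this value is only illustrative):

```python
# COCOMO II basic effort equation: Effort (person-months) = A * EAF * KSLOC^E
# A and E below are nominal placeholders, not values calibrated to any project.
A = 2.94
E = 1.0997
EAF = 1.0

def effort(ksloc):
    """Effort in person-months for a given size in thousands of SLOC."""
    return A * EAF * ksloc ** E

# The question's per-technology sizes, in KSLOC.
modules = [10.0, 12.0, 8.0, 3.0, 1.0, 1.0]

composite = effort(sum(modules))                 # sum the lines, then estimate
per_module = sum(effort(k) for k in modules)     # estimate each, then sum

print(composite, per_module)  # composite comes out larger, since E > 1
```

Because the exponent E is greater than 1, the composite estimate is strictly larger than the sum of the per-module estimates, which is exactly why it matters whether you sum the lines before or after applying the equation.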

1 Answer


You include every line of code and configuration that took you time to write. It is doubtful that the 1 kSLOC of JSON and XML appeared out of thin air. If you fail to count that effort, you will underestimate the effort required to create the product.

On the other hand, writing 1 kSLOC of JSON will likely take a different amount of time than writing 1 kSLOC of Python or JavaScript. For that matter, 1 kSLOC of Python is also likely different from 1 kSLOC of JavaScript.

You will need to keep track of how much time is spent writing the code for those other technologies and figure out what their SLOC-to-time conversion is. It is unlikely to be exactly the same as JavaScript or Python, but that's a starting point for developing an estimate (just make sure you build enough uncertainty into the estimate to account for it).
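One way to act on this is to convert each technology's SLOC into "baseline-equivalent" SLOC before feeding a single number into COCOMO II. The sketch below uses entirely made-up productivity weights for illustration; yours should come from the time tracking described above:

```python
# Fold per-technology productivity into a single COCOMO II size input by
# weighting each language's SLOC by how long a line takes to write relative
# to a baseline language. All weights here are hypothetical placeholders --
# derive real ones from time tracked on past projects.
relative_effort_per_line = {
    "python": 1.0,          # baseline
    "javascript": 1.0,      # assumed comparable to Python (assumption)
    "handlebars": 0.5,      # templating lines assumed cheaper (assumption)
    "sass": 0.5,            # assumption
    "json_xml_yaml": 0.3,   # assumption
    "yaml_deploy": 0.4,     # assumption
}

# The question's SLOC counts.
sloc = {
    "python": 10_000,
    "javascript": 12_000,
    "handlebars": 8_000,
    "sass": 3_000,
    "json_xml_yaml": 1_000,
    "yaml_deploy": 1_000,
}

equivalent_sloc = sum(sloc[k] * relative_effort_per_line[k] for k in sloc)
print(equivalent_sloc)  # 28200.0 -- feed this one number into the effort equation
```

With these (made-up) weights, the 35 kSLOC of raw code collapses to about 28.2 kSLOC of baseline-equivalent Python, which is the single size input the COCOMO II equation expects.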