
I've noted that the Fibonacci sequence is quite popular in planning poker, but is there a reason for that particular sequence? Wouldn't, for example, powers of 2 work equally well?

Both sequences are more or less exponential, but Fibonacci grows by a factor of roughly the golden ratio (approximately 1.618), so it has somewhat higher resolution and would allow more accurate(*) estimates to be expressed.
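To make the resolution difference concrete, here is a quick sanity check in plain Python (the function names and sequence lengths are mine, purely for illustration): the ratio between consecutive Fibonacci numbers approaches the golden ratio, while powers of 2 always step by exactly 2.

```python
# Compare the step ratios of the Fibonacci sequence and powers of 2.
# Fibonacci's ratio converges to the golden ratio (~1.618), so its
# scale has finer resolution than doubling.

def fib(n):
    """First n Fibonacci numbers, starting 1, 2, 3, 5, ..."""
    seq = [1, 2]
    while len(seq) < n:
        seq.append(seq[-1] + seq[-2])
    return seq

fibs = fib(10)                        # [1, 2, 3, 5, 8, 13, 21, 34, 55, 89]
pows = [2 ** i for i in range(10)]    # [1, 2, 4, 8, 16, 32, 64, 128, 256, 512]

fib_ratios = [b / a for a, b in zip(fibs, fibs[1:])]
pow_ratios = [b / a for a, b in zip(pows, pows[1:])]

print(fib_ratios)  # approaches 1.618...
print(pow_ratios)  # always exactly 2.0
```
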

Is there, for example, any evidence that people tend to estimate accurately enough to motivate the higher resolution? And if there is, wouldn't an even finer scale be motivated?

This question is not about why one uses an exponential scale, but rather why choose a base of the golden ratio (which corresponds to the Fibonacci sequence). I think that the resolution of the scale should be in line with the estimation error you have. Therefore I don't think this is the same as What's the best explanation of what Story Points are?

(*) Here "accurate" means a low level of estimation error: a quality that can be compared (one estimate can be more or less accurate than another).

skyking
  • Could you please give an example of how you'd use fibonacci sequence in planning poker? I feel like I'm missing something here. – Neil May 28 '19 at 07:10
  • @Neil, the "standard" set of numbers in planning poker are 0, 1/2, 1, 2, 3, 5, 8, 13, 20, 40, 100, ∞. The Fibonacci sequence is 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55... That planning poker sequence is just a modified version of the Fibonacci sequence. – David Arno May 28 '19 at 07:25
  • @Neil Fibonacci is only one progression used in planning poker; there are others too. Anyway, planning poker adopted the Fibonacci sequence to estimate "uncertainty" with a higher degree of accuracy than the original method (wideband). – Laiv May 28 '19 at 07:25
  • "*estimate accurate enough*". This is an oxymoron. If you can be accurate, then you aren't estimating. You know how long it'll take, rather than estimating it. The purpose of schemes like planning poker is that it handles the uncertainty of estimation well: the bigger the estimate, the more uncertainty and thus the bigger gaps between numbers. Any exponential sequence will do for this. The commonly used sequence has just become the de facto "standard" due to popularity. – David Arno May 28 '19 at 07:29
  • *allow to express more accurate estimates.* Trying to do that is a mistake, for the reasons @DavidArno already stated: for small tasks, you can be pretty accurate because there are simply fewer mistakes and wrong assumptions to be made. For these same reasons you are also doomed to be inaccurate when estimating larger tasks. The Fibonacci sequence **perfectly** captures this; that is why it is endorsed. – marstato May 28 '19 at 07:32
  • @DavidArno I thought it was clear what I meant by "accurate" here, as I started by comparing accuracy (the adverb "enough" would also be a clue). If you thought I meant accurate as in dead-on, then you can't compare it: either you are or you ain't. Instead I mean a low level of estimation error: an estimate with +/-30% error is more accurate than one with +100/-50%. – skyking May 28 '19 at 10:29
  • @marstato I disagree. If you actually were able to tell the amount of time the work would take, you would be stupid not to use that ability. It would be to deliberately do a worse job than you could have done. – skyking May 28 '19 at 10:34
  • @skyking in an agile setting you are not estimating the *time* a task will take, you are estimating how **complex** it is. *Duration is of no relevance*. And estimation-error rate increases with complexity. Also, agile suggests that you break a story down into smaller parts if you have a big uncertainty. Something with +100/-50% uncertainty should never make it into sprint commitment. – marstato May 28 '19 at 13:47
  • Those of you who are saying that accuracy plays no part in the estimation process, or that time is not a factor, are simply wrong. Accuracy in estimation is expressed by your degree of certainty (or uncertainty) in your estimate; it can be improved by dividing up your work into smaller incremental steps and estimating those separately. It's *entirely* about time; otherwise, what is the point of an estimate? The time estimate is found by multiplying your story points by your team's velocity. – Robert Harvey May 28 '19 at 14:47
  • @marstato Tell that to the project manager/customer when you fail to deliver on time. They will probably be very happy knowing that time isn't relevant. Not! – skyking May 28 '19 at 15:31
  • @skyking If a team fails to deliver the Sprint goal there is no excuse. All you can do then is realize what went wrong and do that better next time. And yes, I've worked in a project where we had a good and constant velocity for almost the entire time and still failed to deliver all the requested features **by the deadline set by the customer**. And you know whose fault it was? That of management. **They** committed to a deadline not knowing whether the capacity in the team sufficed. Luckily, nobody blamed the team. – marstato May 28 '19 at 16:37
  • @RobertHarvey I strongly disagree. In an agile setting, the **time is fixed beforehand** (Sprint duration). The point of sprint planning is to **find a scope suitable for the given time frame**. The only job of a story point value is to aid with that. Story points != Time, because some tasks can be sped up by parallelizing while others can't be. – marstato May 28 '19 at 16:44
  • @marstato: If your time is fixed beforehand, then you're still going to need to know how long things take in order to find a suitable scope. – Robert Harvey May 28 '19 at 17:27
  • @marstato Don't you realize it's a bit contradictory to say that time is irrelevant and then emphasize statements about time? The fact that you have an example where you estimated well and somebody else screwed up the plan anyway doesn't mean that time isn't relevant. If time weren't relevant there wouldn't be any point in trying to estimate tasks. In the end this estimation is all about time (that you don't like it and try to hide from the facts doesn't alter that). – skyking May 29 '19 at 05:41
  • @gnat I think it was a mistake to mark this as a duplicate. – skyking May 29 '19 at 05:44
  • @skyking Just read Kain0_0's answer, it explains my point of view far better than I can (let alone in a comment). Also, nobody screwed up the plan. The plan was flawed to begin with. As is trying to estimate something that you simply cannot estimate. We have a far better grasp of a task's complexity than of its duration. Say e.g. for some task you need to write a DB query. With a big dataset that query's runtime cannot really be predicted (read: worst case: hours); but the developers have little uncertainty about how that query will have to look when done.. if you really wanted to estimate the run – marstato May 29 '19 at 07:00
  • ... runtime, you'd first have to write the query (do the task) and then look at the execution plan. You now have to estimate how long the estimation will take. Voila, you just shifted the problem. Also, you probably can work on sth else while that query is running. Or you can break that task down into two steps, slowing your team down even more. – marstato May 29 '19 at 07:05

2 Answers


Production

What are you estimating? The time to manufacture something?

This has been solved for decades in Software Development.

Here is the process:

  1. Copy the file/folder to the client's machine.

Done.

Estimated time 5 minutes (give or take 2 weeks depending on delivery mechanism and size).

Design

Given that manufacture is so simple, you are probably interested in estimating the time for designing.

Design can be split into two parts.

  1. Research
  2. Arrangement

Generally speaking, Arrangement is the easier part, given a client who is willing to co-operate.

Research

Simply go out and find out that which is not known, then spend time to know it.

This is frankly impossible to estimate. As an observation take a look at predictions for: Fusion Power, Cancer Cures, Mars Colonisation, Self-programming Computers, etc...

When there exists something unknown, you simply cannot guess its size. The only proxy you have are any past experiences in the area.

Those past experiences are probably skewed representations themselves. If they shared a lot in common with your goal then you are not conducting Research but Arrangement.

Arrangement

Given a box full of previous designs re-arrange the ideas/components to produce what was asked for.

The benefit here is that the components/ideas have already been designed. If work has to occur to create/alter them, there already exists a well established methodology for their construction/alteration. (If not then this is Research.)

Given that there is a well known methodology, it is estimable based on the time taken by previous endeavours to produce/alter those components in the same way.

Estimating

What this means is that design is a balancing act of Researching the new knowledge, and Arranging that newly acquired knowledge with an older box of designs.

The problem is that most software projects are not in the latter category very often, which means a lack of standardised methods and estimations.

Those that are in this latter category have generally speaking already been turned into commodity components. The estimated time to obtain these is approximately 5 minutes. (Obviously not for the first usage, but as the component is reused within the team.)

Poker Face

What that leaves is the unknown.

The unknown, is by definition unknown.

So obtaining an accurate (or even proximally accurate) estimation is simply impossible.

However just because the unknown does not blink, does not mean that we cannot estimate our own lack of knowledge with regard to what is known.

Logarithmic Machines

Humans are logarithmic machines by nature, which means we are really sensitive to small differences at scales close to what we know, but crappy at making similar discernments on larger scales.

What this means is that when the task and the unknown element are small, humans are great at judging them. These tasks are essentially Arrangement.

Unfortunately a task which is huge, or very uncertain, does not fit into comprehension (by definition). To make it fit, humans abstract it down. This means that what appears to be a little increase in size/uncertainty translates into a huge difference in actual effort needed. In short: Research.

This is why the series needs to be exponential. It's simply the most practical crutch for translating human logarithmic judgement into something vaguely linear.
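A minimal sketch of that crutch in Python (the deck values and function name are mine, not a prescribed method): raw guesses get snapped to the nearest card, and the widening gaps between cards swallow ever larger absolute errors as tasks grow.

```python
# Snap a raw effort guess to the nearest card in an exponential deck.
# The gaps between cards grow with size, matching the growing
# uncertainty of bigger tasks.

CARDS = [1, 2, 3, 5, 8, 13, 21, 34]  # Fibonacci-style deck

def nearest_card(guess, cards=CARDS):
    """Return the card closest to the raw guess."""
    return min(cards, key=lambda c: abs(c - guess))

# A small error on a small task changes the card; a much larger
# absolute error on a big task does not.
print(nearest_card(2.4))   # -> 2
print(nearest_card(3.4))   # -> 3
print(nearest_card(20.0))  # -> 21
print(nearest_card(24.0))  # -> 21 (still the same card)
```
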

As to which series to use: that is a matter of taste and team preference, namely which crutch best fits the team's own logarithmic distribution.

Kain0_0
  • I'm fascinated at the sheer amount of hand-waving here. As software developers, why can't we simply admit that we just suck at estimation? – Robert Harvey May 28 '19 at 14:57
  • @RobertHarvey Where is the hand waving? I'm curious to see how bad the mud map I drew is. – Kain0_0 May 29 '19 at 00:09
  • @RobertHarvey, as humans, we suck at estimation of new things. And in software, almost everything we do is a new thing. So why are developers so keen on self flagellation? And I see very little hand waving in this answer; I thought it a brilliant summary of the challenges for anyone attempting to estimate software. – David Arno May 29 '19 at 08:13

Uncertainty increases with the number of steps to take and with the length of those steps. So, since you will likely diverge more and more from your estimate as the task gets bigger, your estimates should become more coarse. We all get that.

The trajectory of the coarseness is debatable. At work we recently suggested dropping the cards and switching to "quick", "medium" and "big", because it is hardly ever more accurate than that anyway. It is all there to give people a sense of control which is hardly ever justified. Using a Fibonacci range adds to the feeling that you are doing something that makes sense (science! math!). You can justify it with something like my first paragraph and everybody will be happy.
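That coarser scale can be sketched in a few lines of Python; the thresholds (<=3, <=13) are my own illustrative choice, not the ones any team actually settled on:

```python
# Collapse a Fibonacci-style card deck into three coarse buckets.
# Threshold values are illustrative assumptions.

def coarse(points):
    """Map a story-point card to a coarse size bucket."""
    if points <= 3:
        return "quick"
    if points <= 13:
        return "medium"
    return "big"

print([coarse(p) for p in [1, 2, 3, 5, 8, 13, 21, 34]])
# -> ['quick', 'quick', 'quick', 'medium', 'medium', 'medium', 'big', 'big']
```
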

It is a continuous path of lulling each other into a sense of control. If you find any task takes (a lot) longer than anticipated, you just spin off a new issue and call the initial issue done. And your velocity adds up nicely. Call it a pacifier, call it scrum, call it something to keep you going when things look desperate. The math behind the card does not really mean that much, it is all just a way to keep in touch with each others state of mind about the work to be done.

Martin Maat