How To Work a Sigmoid

Software Development in Really Big Steps
  1. How To Work a Sigmoid
  2. How To Work a Sigmoid - Part Two

I've written before about my use of FogBugz, drawn in by its great time tracking and estimation features. Using those features, I've noticed a pattern that I think is probably common, and one that anyone estimating a project's timeline should understand.

There are two estimates of a project. When you start, you can make a wild guess, pulled from the ether, that the project will be complete some number of weeks or months out. This is the number that is notoriously and unequivocally wrong. This kind of prediction is simply an invitation to make a smart person look dumb, since so few of us realize that nobody was ever able to make that estimate in the first place. The larger the project, the more exponentially your chances of getting it right shrink. None of this is new to any of us.

The second estimate is the running estimate, compiled from the tasks the project has been broken down into. The pro of the running estimate is that it is bound to be more accurate than the wild guess you started with, especially if computed with some of the fancy number crunching FogBugz does to account for how good different developers actually are at estimating their time. But every pro has a con, and this one has a big one: the running estimate, although more accurate, is incomplete. You can only estimate the tasks you've broken the project into, and that set of tasks is in constant flux. As you develop, you break larger tasks into smaller ones, learn new things you need to do, change requirements, find bugs in the work you've done already or in the dependencies you use, and continue to iron out the design. This is even more true if you use agile techniques, where you don't do much design up front but design as you go. That's not to say this isn't a good thing, but it is a thing to be aware of.
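To make that concrete, here's a minimal sketch of a running estimate as nothing more than the sum of the hours remaining on the tasks known today. The task names and numbers are invented for illustration; the point is that the total moves whenever the set of tasks does.

```python
# A running estimate: the sum of hours remaining on the tasks we know
# about *today*. All task names and numbers here are made up.
tasks = {
    "build login form": 4.0,
    "wire up database": 6.0,
    "write import script": 3.0,
}

def running_estimate(tasks):
    return sum(tasks.values())

print(running_estimate(tasks))  # 13.0 hours

# Tomorrow we find a bug and split one task into two. The set of tasks
# is in flux, so the running estimate moves with it -- and it still says
# nothing about the tasks we haven't discovered yet.
tasks["fix encoding bug"] = 2.0
del tasks["wire up database"]
tasks["design schema"] = 2.5
tasks["write queries"] = 4.5
print(running_estimate(tasks))  # 16.0 hours
```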

The project starts at 0.0 and ends at 1.0. Your guess lands somewhere below or above 1.0, but mathematically it can't equal 1.0 (because you can't guess it!). As you accelerate your collection of tasks to do, the running estimate climbs toward 1.0 very quickly, until you start to level out and complete more tasks than you create. The running estimate traces a sigmoid curve, winding up from nothing and leveling off at the best real estimate that can be given with the real data at hand. Now, I grabbed this image from someplace, and I didn't add the flat line that represents your initial guess. That's both because I didn't have the time and because that guess is completely useless.
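If it helps to see the shape, here's a small sketch of the logistic function that idealized curve follows. The parameters are invented, not taken from any real project; the key property is the ceiling the curve levels off at.

```python
import math

def sigmoid(t, ceiling, midpoint, steepness):
    """Logistic curve: winds up from near zero, levels off at `ceiling`."""
    return ceiling / (1 + math.exp(-steepness * (t - midpoint)))

# A hypothetical project whose running estimate levels off near
# 400 hours, climbing fastest around week 4.
for week in range(10):
    print(week, round(sigmoid(week, ceiling=400, midpoint=4, steepness=1.2), 1))
```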

Great, so we work a sigmoid. So what?

The world is flooded with useless information and I don't want to contribute to it, so this is the part where I make my revelation somewhat useful. At least, theoretically. A good estimation system, like Evidence-Based Scheduling from Fog Creek, is really great. But what if we included estimation of the estimations? That sounds recursive, for sure. Suppose that, in addition to computing the weighted estimates and the running estimate of a release from all the information that can usefully be taken into account, we also track how the running estimate changes over time. If we graph those changes, I suspect they would roughly follow the curve of a sigmoid. If we find this or any other pattern to hold, we can estimate the estimations: the further along a project goes, the better we can extrapolate the rest of the curve and make moderately intelligent guesses about where the estimate will settle. Weighted for how different teams and individual developers estimate, the system could train itself for accuracy.
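As a sketch of how that extrapolation might work (all the snapshot data below is invented, and ordinary curve fitting stands in for whatever a real system would do), we can fit a logistic curve to the running-estimate snapshots collected so far and read off its ceiling:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, ceiling, midpoint, steepness):
    return ceiling / (1 + np.exp(-steepness * (t - midpoint)))

# Hypothetical weekly snapshots of a project's running estimate, in hours.
weeks = np.arange(8)
snapshots = np.array([20.0, 45.0, 110.0, 230.0, 320.0, 370.0, 390.0, 398.0])

# Fit the curve to the snapshots we have so far...
(ceiling, midpoint, steepness), _ = curve_fit(
    logistic, weeks, snapshots, p0=[snapshots[-1], weeks.mean(), 1.0]
)

# ...and the fitted ceiling is the estimate of the estimate: where the
# running estimate looks likely to level off.
print(f"projected final estimate: {ceiling:.0f} hours")
```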

I'm already into my current FogBugz-tracked project, but my next one will be set up to grab the estimate data periodically, and I'm just itching to test out my theories. We can't predict when a project is going to be complete, if it ever is, but we can damn sure do better than pulling numbers out of the air.
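For what it's worth, the capture side can be as simple as a daily script appending the current total to a file. The fetch function below is a placeholder, not a real FogBugz call; the details of pulling the number out are left open:

```python
import csv
from datetime import date

def fetch_total_estimate():
    """Placeholder: return the current running estimate, in hours.

    A real version would pull this from FogBugz; the call here is a
    stand-in, not a real endpoint.
    """
    raise NotImplementedError

def record_snapshot(path="estimates.csv"):
    """Append today's date and running estimate for later curve fitting."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([date.today().isoformat(), fetch_total_estimate()])
```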
