*not* very good at estimating how many man-hours it will take to complete the entire project life cycle. So, with a background in psychology, experience with statistics for the social sciences, and a knack for inquisitive observation, I set out to build a formula for producing accurate and timely estimates by isolating the one variable I am best at estimating . . . development hours.

I started by observing my work with one of our larger clients because, after all, they were the impetus for my little side project anyhow. I took notes on all things client-related. How much time did we spend talking to the client? How many people were in conference calls? How much time did it usually take to deploy in the client's environment? How much time did it take to test an application before deployment? I gathered as much data as I could, and once I felt I had a reasonable sample, I looked for trends. Four separate components stood out: coding time, testing time, deployment time, and conference calls. I renamed these factors the 4 Ds, and they are the basis for my formula: develop, debug, deploy, and discuss.

So, my records showed that for this particular client, if I spent 10 hours coding, I would have to spend 15 hours testing. If I spent 5 hours coding, I would spend 7.5 hours testing. I generally do most of my testing as I go along, so it was a little difficult to arrive at these numbers, but I found that I reliably spend 1.5 times as much time testing as I do programming. Thus, for every n hours I spend coding, I will spend 1.5n hours testing. I also noted that we spent an average of 1 hour deploying our applications to this client regardless of the amount of time spent programming so I add 1 hour for deployment regardless of the project size.
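The develop, debug, and deploy terms above are simple enough to sketch directly. This is a minimal illustration of those observations, assuming the 1.5x testing ratio and the flat one-hour deployment the author reports; the function names are mine, not the author's:

```python
def debug_hours(dev_hours):
    """Testing time was observed to run 1.5x development time."""
    return 1.5 * dev_hours

# Deployment averaged a flat hour for this client, regardless of project size.
DEPLOY_HOURS = 1.0

# Matches the records: 10 hours coding -> 15 hours testing,
# 5 hours coding -> 7.5 hours testing.
```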

Finally, conference calls. Conference calls were kind of difficult to factor out of the raw data. It turns out that for this client, we always have at least 1 conference call and the average conference call lasts about 45 minutes. I also noted that in addition to the one conference call we have for every project, there's also one conference call for every six hours of development. Thus, to figure out how much time we spent discussing a project, I calculated (⌈n/6⌉ + 1) and multiplied that by 45 minutes or 3/4 hour. The problem was that this only lined up for projects with low complexity.
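The discuss term as described so far (before the people factor introduced below) can be sketched like this; `discuss_hours` is a name I've made up for illustration:

```python
import math

def discuss_hours(dev_hours):
    """One baseline conference call per project, plus one call for
    every six hours of development, at 45 minutes (0.75 h) each."""
    calls = math.ceil(dev_hours / 6) + 1
    return calls * 0.75

# 10 hours of development -> ceil(10/6) + 1 = 3 calls -> 2.25 hours
```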

I reviewed the data again to find the source of the discrepancy. The time spent in conference calls, after factoring out development time, had a particularly high standard deviation, so I decided there must be another factor. Thinking back over the many meetings we've had discussing projects for this client, I realized what was causing the within-measure variability: the more hours we spent developing, the more people had to be involved in the conference call. Once I factored that out, identifying an average of 1.5 people per call, the estimates became a little more accurate.

The problem was that there was still some variability, not only in conference calls but throughout the entire formula. To keep a long explanation from getting longer: the remaining variability wasn't a factor within the formula itself per se, but rather a complexity factor that generally increased all of the estimates throughout the formula. Thus, my formula needed to factor in complexity anywhere it referenced development time.

My finished formula was:

h ≈ develop + debug + deploy + discuss

h ≈ cn + 1.5cn + 1 + (3/4)(1.5)(⌈cn/6⌉ + 1)

After simplifying (and approximating ⌈cn/6⌉ as cn/6):

h ≈ 2.6875cn + 2.125
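Putting the 4 Ds together, the finished formula can be sketched as a pair of functions: one keeping the ceiling in the discuss term, and one using the simplified linear form. The function names are mine; `n` is estimated development hours and `c` is the subjective complexity factor, as in the formula above:

```python
import math

def estimate_hours(dev_hours, complexity=1.0):
    """Full 4-D estimate, keeping the ceiling in the discuss term."""
    cn = complexity * dev_hours
    develop = cn
    debug = 1.5 * cn                                  # testing runs 1.5x coding
    deploy = 1.0                                      # flat hour per project
    discuss = 0.75 * 1.5 * (math.ceil(cn / 6) + 1)    # calls x people x 45 min
    return develop + debug + deploy + discuss

def estimate_hours_simplified(dev_hours, complexity=1.0):
    """Linear approximation after dropping the ceiling."""
    cn = complexity * dev_hours
    return 2.6875 * cn + 2.125
```

The two versions agree exactly whenever cn is a multiple of 6 (e.g. both give 34.375 hours for cn = 12); otherwise the simplified form undershoots by at most one call's worth of time.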

Now, any time I get a new project for this client, I estimate the number of hours and I guess the unfortunately subjective complexity. I apply the above formula and the estimates are accurate within a 95% confidence interval when compared with actual times.
