Self-adjusting WSJF

WSJF [wiz-jif] is a prioritization method that aims to maximize the return on investment of software development teams. (See Black Swan Farming for a great introduction.) However, it can keep certain tasks at the bottom of the list indefinitely, so they never get done. This is why we have introduced a self-adjusting version.

WSJF assigns a score to each job that, originally, is the business value gained divided by the effort (job size) required to do it. The higher the score, the more the job should be prioritized:

score := value / job size

The expression is easy to understand if we think of it as the “value density” of a task: which job should I undertake that yields the maximum amount of value per unit of effort?
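The basic scoring can be sketched in a few lines of Python. The job names and numbers below are invented for illustration; they are not from the article.

```python
# Jobs with an estimated business value and job size (effort).
# All names and figures are hypothetical examples.
jobs = [
    {"name": "checkout revamp", "value": 8, "job_size": 4},
    {"name": "spam filter update", "value": 3, "job_size": 1},
    {"name": "reporting dashboard", "value": 5, "job_size": 5},
]

def wsjf_score(job):
    # "Value density": value delivered per unit of effort.
    return job["value"] / job["job_size"]

# Highest score first: the job with the best value-to-effort ratio wins.
ranked = sorted(jobs, key=wsjf_score, reverse=True)
```

Note that a small, moderately valuable job can outrank a big, high-value one: here the one-day spam filter update (score 3.0) beats the larger checkout revamp (score 2.0).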

Often, the value of a job is estimated from the Cost of Delay, which combines urgency (time criticality) and business value. Some organizations may estimate business value, for example, by revenue enablement, risk reduction, customer retention, reducing development and technical costs, and so on.

A shortcoming of WSJF is that, by design, low-value jobs almost never rise to the surface: in the day-to-day life of a business, with a steady stream of urgent tasks, non-urgent or mid-value jobs will never get done.

How do we fix this?

Notice that the time criticality of a job changes over time. Something that needs to be done for a customer event in 6 weeks’ time has low urgency today, but its urgency will be higher 2 weeks from now. So do we need to keep re-scoring all jobs in the backlog?

Time as value

Not quite. A further modification to WSJF can make this entirely automatic. Looking at WSJF in the context of infrastructure tasks, I noticed that these are somewhat removed from direct business value, so assessing revenue enablement, income, customer retention, and so on was not a viable way to determine the value or cost of delay.

What was easy, though, was to say roughly how soon we’d want the results. Is it a critical security issue? This needs fixing in 2 days. Does a major project depend on this? We have a week to get it done. Is it a nice-to-have to update the spam filtering on all mail servers? Let’s just make sure it’s done by the next quarter.

This way, all aspects of value are captured by a simple-to-assess number: a target time period. Actually, the value is the inverse of the time period (the shorter the time we have, the more valuable the job is), so the WSJF scoring becomes:

score := 1 / target time period / job size
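As a minimal Python sketch (the time periods and effort figures are hypothetical), the scoring reads:

```python
def wsjf_score(target_weeks, job_size):
    # Value is the inverse of the target time period,
    # so: score = 1 / target time period / job size.
    return 1 / target_weeks / job_size

# Hypothetical comparison: a security fix wanted within 2 days
# (~0.3 weeks) outranks a quarter-away nice-to-have (~13 weeks),
# even though the fix takes three times the effort.
urgent = wsjf_score(target_weeks=0.3, job_size=3)
relaxed = wsjf_score(target_weeks=13, job_size=1)
```

The units of the time period (days, weeks) don’t matter for the ranking, as long as they are used consistently across all jobs.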

Time is value!

And seeing value as time opens up an exciting possibility: we can calculate the remaining time we have to do something! This will gradually increase the value of a job as time passes, ranking it higher and higher in the WSJF system.

That is, instead of determining in how much time we want something in place, we determine by when we want it. Once we have a target date, the scoring becomes:

score := 1 / (target date - now) / job size

But there is one more refinement: we want to avoid dividing by zero if today is the target date, and avoid negative scores once we are past the deadline. So we use

score := 1 / max(target date - now, ε) / job size

where ε is a small number.
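Putting the date-based version together, a minimal sketch in Python (the dates, job size, and choice of ε are assumptions for illustration):

```python
from datetime import date

EPSILON_DAYS = 0.5  # ε: avoids division by zero on, or negative scores after, the deadline

def wsjf_score(target_date, job_size, today=None):
    # score = 1 / max(target date - now, ε) / job size
    today = today or date.today()
    remaining_days = (target_date - today).days
    return 1 / max(remaining_days, EPSILON_DAYS) / job_size

# The same end-of-Q2 job, scored in January and again on 1 June:
jan = wsjf_score(date(2024, 6, 30), job_size=3, today=date(2024, 1, 15))
jun = wsjf_score(date(2024, 6, 30), job_size=3, today=date(2024, 6, 1))
```

No re-scoring is ever needed: the score rises on its own as the target date approaches, and past the deadline it caps out at 1 / ε / job size rather than going negative.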

Do we need to write a report on security measures by end of Q2? It’s not urgent in January if there are business-critical jobs in the queue. But come 1 June, the system automatically tells us to finally get on with it.

There you have it - a self-adjusting WSJF score. It is easy to implement in, for example, Atlassian’s Jira: use custom fields for the target date (a date) and the job size (a number, e.g. developer days), plus an automation rule that calculates the WSJF score and writes it to a third custom field. That field can then be used directly to order tasks and simplify the planning process.
