How We Plan Iterations
At AlphaSights, we more or less follow the Scrum agile development framework. I recommend Scrum and XP from the Trenches by Henrik Kniberg for an introduction. However, we have added a couple of extra steps for many of our teams to ensure that iteration planning is as clear and efficient as possible. As a Product Analyst, my job revolves around the prep work required before our team can begin developing a feature. Preparation for a typical two-week iteration usually looks like this:
Preparing the “Cards”
We use Trello to organize our development backlog. As feature requests, bugs, and other to-dos come in, we first consider whether to add them to the board at all. Is this technically feasible? Is it a pervasive need that deserves the Software Engineering team’s time? Is it aligned with our larger vision? If the idea passes these initial sanity checks, we add it to the backlog as a card. If it doesn’t, we follow up with the idea originator to give them a reason why. Communicating feature decisions to users is easy because I currently work on our internal app and know them all personally. However, it's definitely harder to say no to someone you work with on a daily basis.
Once a feature request has made it into the backlog, we add a simple title and write a user story to explain who needs it, why, and what needs to happen to fulfill their need. There are plenty of books and articles out there that detail how to write an effective user story, but I try to follow this formula:
As a ______, I need to/want to ______ so that I can ______.
This oversimplifies what we need to consider to make a story complete, but it’s a good starting point. I’ve recently been adding even more detail to my stories, along with the questions a developer might ask when taking on the card. It’s impossible to anticipate everything, but I try my best to cover most of what the feature should (and should not) do. If applicable, I also attach screenshots, mockups, links to relevant GitHub pull requests, and references to where to look in the codebase.
Assigning Business Value
About a week before iteration planning, we assign business value to all new cards. This session includes our Technical Director, the team lead, and me. I have my own idea of feature importance, but given my limited technical background, this is a great time to assess technical feasibility, brainstorm potential solutions, and decide if a card even deserves to make it into an iteration. The discussions in this meeting always reveal new questions I need to answer, stakeholders I should follow up with, and other missing upfront work. Having these conversations one week before iteration planning makes planning itself shorter, and gives us time to make sure the cards are complete before we present them to everyone.
We use a scale of 0-100 for business value. Typically, we rank things that are urgent, such as bugs or other blockers, at 100 so they definitely make it into the next iteration. For everything else, 80-90 will generally get in as well, while 60-70 won’t come up for a couple months. The lowest we go is usually around 40, a sign that a feature is a "nice to have." While they still deserve a place in the backlog and would be useful to implement, these types of tasks aren't pressing.
To come up with the number, we each think of an estimate in our heads, report our numbers (and promise that we aren’t changing them based on what others said), and provide reasoning for the ranking if necessary. There are online resources such as Mountain Goat’s planning poker that can help with this, but we haven’t found one that we like yet. We are usually pretty aligned in our estimates, but sometimes one of us will report a vastly higher or lower number. This means that we haven’t fully discussed the scope of the story and the problem the card is trying to solve. Once we reach agreement, we average out the remaining difference, assign that value to the card, and place it in the backlog accordingly.
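As a rough sketch of how this round works, here is the logic in a few lines of Python. The function names and the spread threshold are my own illustration, not anything we've formalized:

```python
# Hypothetical sketch of the silent-estimate round for business value.
# Everyone reveals a number at once; a wide spread means we haven't
# fully discussed the card's scope, so we talk before assigning a value.

def needs_discussion(estimates, max_spread=20):
    """Flag a card for more discussion if estimates diverge widely."""
    return max(estimates) - min(estimates) > max_spread

def business_value(estimates):
    """Once aligned, average out the remaining difference."""
    return round(sum(estimates) / len(estimates))

# Three estimators broadly agree: average the small remaining difference.
print(business_value([80, 90, 85]))    # 85

# One outlier usually signals an undiscussed assumption about scope.
print(needs_discussion([90, 40, 85]))  # True
```

In practice the "discussion needed" check is a gut call rather than a fixed threshold, but the shape of the process is the same: reveal, compare, discuss outliers, then average.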
Making the Implementation Reference
The next day, we reconvene to write an implementation reference for the cards that will likely make it into the next iteration. This is not a concrete to-do list, but rather a rough outline of the steps a developer may take and the places in the code they may look to implement the feature. It is entirely possible that the person who works on the card will end up taking a completely different approach. The list is most helpful as a place to start when a developer first begins working on a new card.
The implementation reference’s level of detail varies from card to card. Sometimes, it will be an explicit guide to completing the feature, and other times it may just include a couple places in the codebase to examine to begin finding a solution. There are even some cards that only have one to-do (“investigate”) because the first step is to figure out how everything works currently. In this case, it is impossible to write a rough to-do list without doing the assessment yourself.
By the time we reach the iteration planning meeting, we already have detailed cards arranged in order of business priority with an implementation reference to guide a time estimation. All that’s left is estimating the amount of time each card will take with the entire team, and determining how many of the cards we can fit into the upcoming iteration.
We use the same estimation technique for business value and time: everyone thinks of a number in their head. If there are large discrepancies in the amount of time (in days) someone thinks the card will take, we talk through the differences. Usually, large variations come from someone either over or underestimating the scope of a card. Once there is consensus on how long the feature will take to develop, we assign that value to the card. When we’ve made it through the top part of the backlog, we’re left with a list of cards that look something like this:
[Business value] Brief description of feature [Time estimation]
The card description includes all the user stories and screenshots, along with the implementation reference.
At this point, we estimate the number of “days” the team will have available during the upcoming iteration. We start with the total number of days we would have if everyone were at full capacity, subtract time for known distractions and upcoming days off, and then take off a percentage of the total to account for productivity trends from previous iterations.
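The arithmetic is simple; as an illustration (every number below, including the productivity factor, is made up for the example rather than taken from our actual iterations):

```python
# Illustrative capacity estimate for a two-week iteration.
# All figures are hypothetical, not our team's real numbers.

team_size = 5
iteration_days = 10            # two working weeks
days_off = 3                   # known vacation and holidays across the team
distraction_days = 2           # meetings, support rotations, etc.
productivity_factor = 0.8      # trend observed in previous iterations

full_capacity = team_size * iteration_days               # 50 person-days
available = full_capacity - days_off - distraction_days  # 45
capacity = available * productivity_factor               # 36.0

print(capacity)  # 36.0
```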
From there, all we have to do is move cards into the “Current Iteration” column until the total adds up to our capacity estimate. People then choose cards they would like to work on, and we have planned our iteration.
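Filling the iteration amounts to walking the backlog in business-value order until the time estimates hit the capacity number. A minimal sketch, with a hypothetical backlog and card names of my own invention:

```python
# Hypothetical sketch of moving cards into the "Current Iteration" column:
# take cards in business-value order until estimates reach capacity.

def plan_iteration(backlog, capacity_days):
    """backlog: (business_value, description, estimate_days) tuples,
    assumed already sorted by business value, highest first."""
    iteration, used = [], 0
    for value, description, estimate in backlog:
        if used + estimate > capacity_days:
            break  # the next card no longer fits; stop filling
        iteration.append(description)
        used += estimate
    return iteration

backlog = [
    (100, "Fix login blocker", 2),
    (90, "Advisor search filters", 5),
    (80, "Export to CSV", 3),
    (60, "Nicer empty states", 4),
]
print(plan_iteration(backlog, 8))  # ['Fix login blocker', 'Advisor search filters']
```

In reality there is some horse-trading at the margin (a small low-value card might sneak in ahead of a large one), but the mechanics are essentially this greedy fill.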
We end each iteration with a retrospective, during which we talk about what went well, blockers that arose, and other learnings. We also discuss things that we should change in the upcoming iterations to make the team more productive. We’re currently experimenting with team velocity tracking to help inform these sessions in the future. One person from the team takes notes during the retrospective, and posts the summary to our team’s Discourse board for everyone to see.
While we've taken some steps in the right direction, we're still investigating the best way to plan our iterations. The above process is effective now, but it likely won't stay relevant as our team grows. I'm currently reading 50 Quick Ideas to Improve Your User Stories by Gojko Adzic and David Evans, which has already given me many ideas about how we can continue to improve. Deciding how you plan and run iterations is highly dependent on your team, your product, and your company, and we're only beginning to figure out what works for us.