Feedback on 360° Feedback Session

If you remove your job, you are promoted. (a classic lean quote)

In Management 3.0, Jurgen Appelo suggests doing full team 360° feedback sessions instead of more traditional manager-collaborator meetings.

"The cover of Management 3.0 book"

He argues in favor of this practice in order to:

  • obviously, get some feedback and improve
  • also give feedback to the manager so he too can improve
  • help the team to further self-organize
  • practice everyone’s people skills
  • get more objective, higher-quality feedback than subjective manager-only feedback
  • free some manager time

Our Experience

We just gave it a try. As a first experiment, we did it with just the 3 of us, all willing to try. Here is the ROTI (Return On Time Invested):

Grade (/5)   Comment
5            Useful and healthy. It’s a way to stop grumbling. It’s also the occasion to say things that we often don’t.
5            I’m leaving the room with real improvement topics. It calms my emotions, it’s like “balm for the heart”.
5            I’m getting out with great advice. I think it’s great for team spirit. It took 1h for just the 3 of us; I’m wondering how we’ll manage if we are more?

If you want to try it

A few last-minute pieces of advice:

  • Don’t force it onto people, start with volunteers
  • There must be a safe and positive atmosphere in the team
  • This is an improvement exercise; it should not be used as any kind of evaluation
  • Learn how to give feedback
    • Our company provides training on nonviolent communication and positive feedback, maybe yours does too!
    • Appelo explains how to give written feedback in his other book #Workout. Though better suited for email feedback, I found it a great way to prepare for the session.

"The front page of #Workout chapter about written feedback"

I’d like to hear about your experiences with such collaborative feedback.

A Plan for Technical Debt (Lean Software Development Part 7)

The sad truth:

The technical debt metaphor does not help me to fix it.

"A desperate man counting his debts"

Here is my modest 2€ plan for trying to get out of this.

Why does the metaphor fall short?

The debt comparison effectively helps non-programming people understand that bad code costs money. Unfortunately, it does not tell you how much. As a consequence, deciding whether it’s best to fix the technical debt or to live with it remains a gut-feeling decision (aka programmers want to stop the world and fix all of it while the product owner wants to live with it).

There are very good reasons why we cannot measure how much technical debt costs:

  • It is purely subjective: bad code for someone might be good code for someone else. Even worse, as you become a better programmer, yesterday’s masterpiece might become today’s crap. More often, as a team gains insight into the domain, old code might suddenly appear completely wrong …
  • Tools such as Sonar only spot a small part of the debt. The larger part (design, architecture and domain) remains invisible
  • Finally, the non-remediation cost (the time wasted working on the bad code) is often overlooked and very difficult to measure: it depends on what you are going to work on in the future!

No surprise it’s difficult to convince anyone else that fixing your debt is a good investment.

"A dilbert cartoon about a programmer killed by technical debt"

The Plan

In the team, we usually try not to create debt in the first place. We have strong code conventions and working agreements. We are doing a lot of refactoring in order to keep our code base clean. But even with all this, debt creeps in:

  • a pair works on something without knowing that another part of the system already does roughly the same thing
  • we understand something new about the domain and some previously fine code becomes debt !
  • like all programmers, we are constantly in a hurry, and sometimes, we just let debt through

If the required refactoring is small enough, we just slip it inside a user story and do it on the fly. The real problem comes with larger refactorings.

The strategy to deal with those is to get estimates of both the remediation and non-remediation costs. This way, the technical debt becomes an investment! Invest X$ now and receive Y$ every month up to the end of the life of the product. Provided you know the Cost Of Delay (CoD) of the product, you can estimate the cost of delay of an individual technical debt fix. For example:

  • Let’s define the product horizon as its expected remaining life span at any moment
  • Suppose the product has a 5-year (60 months) horizon
  • Suppose the Cost Of Delay of the full product is 150K€/month
  • Suppose that the technical debt costs 10 days (0.5 month) to fix
  • Suppose that once fixed, you’ll save 2 days (0.1 month) of work per month
  • By doing the fix now, at the end of the 5 years, you would have saved: (60 - 0.5) * 0.1 - 0.5 = 5.45 months
  • Using the CoD, this amounts to: 5.45 * 150K = 817.5K€
  • Dividing by the number of months, we finally get the CoD for this technical debt fix: 817.5K / 60 = 13 625 €/month
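If you prefer code to prose, here is the same arithmetic as a small Java sketch (the class and parameter names are made up for the example):

public class TechnicalDebtCoD {

    // Cost Of Delay of a technical debt fix, in money per month, comparable to other backlog items.
    // All durations are expressed in months, productCoDPerMonth in money per month.
    public static double costOfDelayPerMonth(double horizonMonths,
                                             double productCoDPerMonth,
                                             double remediationMonths,
                                             double savedMonthsPerMonth) {
        // months of work saved by fixing now, over the remaining life of the product
        double savedMonths = (horizonMonths - remediationMonths) * savedMonthsPerMonth
                             - remediationMonths;
        // translate this saving into money using the product Cost Of Delay ...
        double savedMoney = savedMonths * productCoDPerMonth;
        // ... and spread it over the horizon to get a monthly CoD
        return savedMoney / horizonMonths;
    }

    public static void main(String[] args) {
        // the example above: 60 months horizon, 150K€/month, 0.5 month to fix, 0.1 month saved per month
        System.out.println(costOfDelayPerMonth(60, 150_000, 0.5, 0.1)); // ≈ 13 625
    }
}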

This can be compared to the CoD of other backlog items, allowing us to prioritize large refactorings as we would any feature or story.

One nice thing about this is that it not only helps to know whether a refactoring is cost effective, but also when is the best moment to do it. As the CoD of the refactoring grows with the product horizon, a refactoring that is premature for a startup product might become a real bargain once the product has settled as a market leader. Here are examples of possible product horizons:

Context                  Horizon
Startup                  6 months
3-year-old company       3 years
Market-leading product   10 years
Aging system             5 years
Legacy system            2 years
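For instance, running the small sketch above with the 6-month startup horizon and the same other numbers gives costOfDelayPerMonth(6, 150_000, 0.5, 0.1) ≈ 1 250 €/month, an order of magnitude less attractive than the 13 625 €/month of the 5-year horizon.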

Oh, and just one more thing … prioritizing technical debt fixes in your backlog will create real time to focus on refactoring and only on refactoring, reducing task switching and saving even more time.

All this sounds great! There’s just one last little thing: how do we get estimates of both costs of the technical debt?

Idea 1: Collective Estimations

When I attended Donald Reinertsen’s training, I asked him the question and he answered:

I’d gather the top programmers in a room and I’d make them do an estimation of both costs.

So I asked my team if they wanted to do the following :

  1. whenever you spot a large piece of debt, create a JIRA issue for it
  2. at the end of your next sprint planning session, go through all your technical debt issues, and for each:
    1. estimate the remediation cost in story points
    2. estimate the non-remediation cost for the coming sprint, taking the prioritized stories into account
  3. using the ROI horizon of every issue, collectively decide which ones to tackle and add them to the sprint backlog

To keep the story short, it did not stick. I bet it was just too boring.

Idea 2: Technical Debt Code Annotations

During a retrospective, we discussed marking technical debt directly in the code to help decide when to fix it. I created 2 code annotations so that this can be done. Here is an example of some identified technical debt:

public final class Transformers {

   private Transformers() {
   }

   @TechnicalDebt(storyPoints = 8, description =
     "We need to find a way to do all the ast rewriting before staring the analysis", wastes = {
     @Waste(date = "2015/05/14", hours = 16, summary =
       "For union, we lost quite some time identifying which transformers were not copying the full tree")})
   public static AstNode analyzeAst(AstNode ast) {
     ...

The @TechnicalDebt annotation identifies areas of the code that could be improved. The @Waste annotation is a way to log time wasted because of this bad code.
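For reference, here is roughly what the declarations of these two annotations could look like (an illustrative sketch, not necessarily the exact code we use):

// TechnicalDebt.java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

@Retention(RetentionPolicy.CLASS) // kept in the bytecode so that analysis tools can find it
@Target({ElementType.METHOD, ElementType.TYPE})
public @interface TechnicalDebt {
    int storyPoints();           // estimated remediation cost
    String description();        // what should be improved
    Waste[] wastes() default {}; // time already wasted because of this debt
}

// Waste.java (same imports)
@Retention(RetentionPolicy.CLASS)
public @interface Waste {
    String date();    // when the time was wasted
    int hours();      // how much time was wasted
    String summary(); // what happened
}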

By comparing the time to fix the technical debt and the flow of extra work it incurs, we should be able to more easily justify and prioritize these in our backlog.

We are thinking of writing a Sonar plugin to keep track of this technical debt right in our Sonar dashboard. It would:

  • create a technical debt item in Sonar for every @TechnicalDebt annotation found in the code
  • link it with a mirror technical debt issue in JIRA
  • use the story points we entered in the annotation as the remediation cost
  • extrapolate the non-remediation cost from the sum of wasted hours registered during the last month

We just started using these, so I cannot give much feedback for the moment. I bet not enough @Waste items will be entered though … again, it might just be too boring.

"A screenshot of Sonar Qube Sqale technical debt plugin"

Idea 3: Sonar and IDE Plugins

If it’s too boring to add @Waste annotations in the code, it might be easier to have an IDE plugin with one big button to register some time wasted on the local @TechnicalDebt zone.

Pushing things a bit further, it might even be possible to estimate the non-remediation cost by looking at which files are read the most, which files trigger the most test failures when changed, etc.

Unfortunately, that’s a long shot; we’re definitely not there yet!

Possible Improvements

The Mikado Method

Whether you’ve got these estimates or not, it’s always good practice to learn how to use the Mikado Method. It’s great for splitting a refactoring into smaller parts and spreading them over many sprints.

The pill is easier to swallow for everyone, and it keeps the code releasable at any given time.

Decision Rule

Provided you have:

  • Product CoD
  • Top Features CoD
  • Product horizon

You could easily come up with a decision rule to help prioritize technical debt more quickly, without the need for formal planning.

References

This was part 7 of my series of articles about Lean Software Development. Part 6 was You don’t have to ask your boss for a fast build, Part 8 will be “Measure the value of the lean startup ‘learning’”.

Mining GitHub for New Hires

In search of an experienced software engineer

We had been trying to hire such a profile for the last year … The position is hopefully filled now. During that year, we tried to mine GitHub for candidates. Here is how we did it.

Software engineers, especially experienced ones, are known to be hard to find. Over the past months, we had steadily been improving our hiring process:

  • By regularly rewriting and optimizing our job post
  • By posting it on Twitter
  • By defining a precise interview template

We went from very few applications to :

  • More applications
  • More experienced candidates
  • Regular interviews
  • Effective interviews

Unfortunately, we were still not interviewing candidates as skilled as we would have liked. We were convinced that we were offering a great job: the project is very interesting, and the team is a dream to work in.

How could we reach more great devs ?

One day, I played with GitHub’s REST API and managed to write a short Ruby script that finds the contributors to a given project who live near Paris (France).

require 'rubygems'
require 'rest_client'
require 'json'

# Corporate proxy; adapt or remove to match your network setup
RestClient.proxy = "http://proxy:3128"

# Calls the GitHub REST API and parses the JSON response
# (note: GitHub caps per_page at 100)
def github_get(resource)
  JSON.parse(RestClient.get("https://api.github.com#{resource}", params: {
                              access_token: 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX',
                              per_page: 200}))
end

repo = ARGV[0]

puts "searching for #{repo} contributors in France"

# Fetch the contributors of the repo, then look up each contributor's location
contributors = github_get("/repos/#{repo}/contributors")
logins_locations = contributors.map do |user|
  login = user['login']
  begin
    location = github_get("/users/#{login}")['location']
    {login: login, location: location}
  rescue StandardError => e
    puts "could not see details of #{login} #{e}"
    {login: login}
  end
end

puts "Here are all the contributors"
puts logins_locations

# Keep only the contributors whose location mentions France or Paris
french_contributors = logins_locations.select do |login_location|
  location = login_location[:location]
  location != nil and
    (location.downcase.include?('france') or
     location.downcase.include?('paris'))
end

puts "----------------------------"
puts "Here are all the french contributors"
puts french_contributors

What’s next?

We eventually filled the position before following through on our GitHub experiment. We might continue some day though, with a few improvements in mind!

It really looks as if software is eating recruitment …

Bye Bye Programmer’s TODO List, Hello Personal Kanban on Jira

Not long ago, I wrote that Real Programmers have TODO lists … I was wrong: I now work without a TODO list! So either I’m not a real programmer anymore, or I’m actually using TODO List v2.0. Read on!

Motivations

My work has become quite varied lately. On top of programming and pairing, I am also doing quite a lot of coaching within the team. For the whole Murex programmers community, I’m organizing Coding Dojos, Brown Bag Lunches and Technical Talk Video Sessions. Finally, like all of us, I have to cope with my share of organizational overhead.

Multitasking was starting to kill me. I was feeling exhausted at the end of the day, with the certainty that I was not getting much done …

Personal Kanban To The Rescue

Kanban is a method to organize your work relying on Work In Progress (WIP) limits: it minimizes multitasking and encourages prioritization.

As its name suggests, Personal Kanban is simply applying Kanban to your own tasks. It turns out that:

  • My team’s tasks are already in JIRA
  • Some colleagues had already tried to use JIRA as a todo list
  • JIRA supports Kanban boards with WIP limits and all

The Kanban Board

In our team, TAYARA Bilal had already experimented with the approach and asked the JIRA admins to create a custom project for our todo lists. I piggybacked on it and created my own kanban board. Here is what it looks like.

Mixing Project Stories And Personal Tasks

JIRA allows creating a kanban board that spans many projects! You can simply choose multiple projects when you set up the board for the first time, or you can edit your board filter like this:

project in (POPIMDB, POPABTODO) ...

This makes it possible to see all of my work at a glance on the same board.

Work In Progress Limit

JIRA supports WIP limits, warning me with a red background when I am multitasking or when I am getting late on my tasks.

Color Conventions

JIRA makes it possible to assign different colors to cards, for example:

  • red for tasks that are due soon
  • orange for cards that are due some time later
  • light brown for project stories
  • green for other programming tasks
  • blue for other tasks

Swimlanes

JIRA has swimlanes, which I use to separate project stories from personal tasks.

Reports

An extra bonus with JIRA Kanban boards is that they have reports! Here is my cumulative flow diagram for my first week of usage:

Configuration

Here are the JQL queries I used to configure it this way.

-- board filter
project in (POPIMDB, POPABTODO) AND (Assignee = pbourgau OR Co-Assignees in (pbourgau) OR mentors in (pbourgau)) AND (status != CLOSED OR updated >= -1d) ORDER BY Rank ASC

-- Swimlanes
priority = "1-Very High" -- Expedite
project in ("POP IMDB") and (labels not in (SLACK) OR labels is EMPTY) -- IMDB Stories
-- and a blank filter for Other Tasks

-- Card Colours
duedate <= 7d or priority = "1-Very High" -- red
duedate is not EMPTY -- orange
labels in (SLACK) -- green
type = Task -- blue
-- and an empty filter for light brown

The End Result

By setting a WIP limit of 3 on the “In Progress” column, the following naturally happened:

  • Once I have started a programming task, I now defer any other activity to the TODO column until I am finished. (HINT: If you get invited to meetings all the time, lock your agenda with ‘Unbookable’ days when you start programming)
  • It actually pushed me into finishing the concurrency-kata training I had started long ago.

I also set a high WIP limit (around 10) on the TODO column; this way, I get a kind of warning that the next time I finish a programming task, I should take some time to prune the column.

The overall result is that I do a lot less multitasking. I get the feeling of doing steadier, more efficient work.

If you are suffering from multitasking and decide to give it a try, I’d love to read about your experience!

Actors and Green Threads in Java Demystified

After finishing my concurrency-kata, one of the things that most surprised me is how simple it was to prototype the Actor Model in Java using Green Threads.

The Code

First, here is the base class for all actors.

import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ExecutorService;

public class Actor implements Runnable {
    private final ExecutorService threadPool;
    private final ConcurrentLinkedQueue<Runnable> mailbox = new ConcurrentLinkedQueue<>();
    private volatile boolean stopped;

    public Actor(ExecutorService threadPool) {
        this.threadPool = threadPool;
    }

    public void run() {
        if (stopped) {
            return;
        }

        // handle at most one message from the mailbox ...
        Runnable nextMessage = mailbox.poll();
        if (nextMessage != null) {
            nextMessage.run();
        }
        // ... then give the thread back to the pool and re-schedule ourselves
        submitContinuation();
    }

    public void start() {
        submitContinuation();
    }

    protected void stop() {
        stopped = true;
    }

    protected void send(Runnable runnable) {
        mailbox.add(runnable);
    }

    private void submitContinuation() {
        threadPool.submit(this);
    }
}

As you can see, I simply used Runnable as the type of the messages.

The Actor itself is Runnable, meaning that it can be submitted to the thread pool. When executed:

  1. it tries to handle a message from the mailbox, if there is one
  2. it then re-submits the actor itself to the thread pool

This ensures that only one thread is executing messages on a given actor at a time, and it also avoids spawning a new thread for every new actor.

As an example, here is how I used this to make an actor of an existing InProcessChatRoom class.

public interface ChatRoom {
    void broadcast(Output client, String message);
    ...
}
public class InProcessChatRoom implements ChatRoom {
  ...
}
public class ChatRoomActor extends Actor implements ChatRoom {

    private final ChatRoom realChatroom;

    public ChatRoomActor(ChatRoom realChatroom, ExecutorService threadPool) {
        super(threadPool);
        this.realChatroom = realChatroom;
        start();
    }

    @Override
    public void broadcast(final Output client, final String message) {
        send(new Runnable() {
            @Override
            public void run() {
                realChatroom.broadcast(client, message);
            }
        });
    }
    ...
}

ChatRoomActor is in fact a kind of proxy that other actors use to send messages to the chat room.
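Wiring it up looks something like the following sketch (the no-argument InProcessChatRoom constructor and the ConsoleOutput implementation of Output are assumptions made for the example):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ChatRoomDemo {
    public static void main(String[] args) {
        ExecutorService threadPool = Executors.newFixedThreadPool(4);

        // wrap the plain, single-threaded implementation behind an actor
        ChatRoom chatRoom = new ChatRoomActor(new InProcessChatRoom(), threadPool);

        // this call only enqueues a Runnable in the actor's mailbox; the real broadcast
        // runs later, one message at a time, on the thread pool
        chatRoom.broadcast(new ConsoleOutput(), "Hello actors!"); // ConsoleOutput: any Output implementation
    }
}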

As with any implementation of the Actor Model, the neat thing is the separation of threading and logic. That makes it so much simpler! (You can get more detail about the complexity I am talking about by taking a look at the concurrency-kata.)

Performance

Here is a performance summary of this implementation compared to others on a “throughput vs clients” benchmark of the style “Enter while others are talking”.

The results can look disappointing compared to other implementations, but the example itself is a bit misleading: the chatroom does exclusively message passing, and there is not much computation to parallelize. In a different setting, the results would have been completely different.

Limitations

As you can see, this implementation is just a quick prototype, nothing production-ready. Here are the main limitations I can think of right now:

  • It uses busy waiting for the next message, meaning that it consumes unnecessary resources, and that more important messages to other actors might be delayed
  • Usually, actor messages are selected by their type rather than by their order of arrival; this is not the case here
  • The usage of the Runnable interface as the base message type, though handy, opens the door to inter-thread calls that might violate the model altogether
  • There is absolutely no support for out-of-process actors as long as the messages are not Serializable

Going Further

I started this concurrency-kata as training material about concurrency for colleagues at work. In the spirit of the coding kata, it’s a git repo you can walk through, explore and experiment with.

So if you want to learn more about different models of concurrency, you are welcome to have a look at the How-To section in the README file.

For my part, although it was a lot more work than I would have guessed at the beginning, I barely scratched the surface of the subject! I could now:

  • extract the CSP or Actor implementations and make them more robust
  • practice and present the whole kata as a 2-hour live coding session
  • prepare a hands-on training about concurrency

So, if you are willing to do any of the above, you are welcome to contribute!

You Don’t Have to Ask Your Boss for a Fast Build (Lean Software Development Part 6)

A slow build costs money. I mean it costs a whole lot of money all the time!

Spending some time to speed up the build is an investment: you pay some money now, but then it’s only a matter of time until you get a return. Here is the trick: if the return comes quickly enough, no one will even notice that you spent some time making the build faster!

With a bit of math, you can even get what Reinertsen calls a “Decentralized Decision Rule”, making it possible for anyone in the organization to figure out whether to spend some time on the build or not, without needing to ask anyone’s permission.

Our example

Our team consists of 5 pairs, each running the build at least 10 times per day. Let’s figure out the value of a 1-minute build time speed-up:

  • The whole team would save: 1 min x 5 pairs x 10 builds = 50 minutes per day
  • In a 2-week sprint, this amounts to around 1 day of work

This means that if a pair spends half a day to get a 1-minute build speed-up, it does not change the output of the current sprint, and it in fact increases the throughput of the team for all further sprints.

Anyone in our team who spots a potential 1-minute build time speed-up that would take less than 1 person-day to implement should do it right away, without asking anyone’s permission.
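To make the rule concrete, here is the same arithmetic as a tiny sketch (class and parameter names are made up, and it assumes 8-hour working days):

public class BuildSpeedupRule {

    // Number of working days before the time invested in the speed-up pays for itself
    public static double paybackDays(double minutesSavedPerBuild, int pairs,
                                     int buildsPerDay, double daysInvested) {
        double minutesSavedPerDay = minutesSavedPerBuild * pairs * buildsPerDay;
        double daysSavedPerDay = minutesSavedPerDay / (8 * 60); // assuming 8-hour days
        return daysInvested / daysSavedPerDay;
    }

    public static void main(String[] args) {
        // 1 minute saved, 5 pairs, 10 builds per day, half a day invested
        System.out.println(paybackDays(1, 5, 10, 0.5)); // ≈ 4.8 working days, well within a sprint
    }
}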

Other Benefits

A direct benefit is that the issue will not have to be re-discussed every time someone spots a build time improvement. This will save some management time, and more build speed-up actions will eventually be undertaken.

The astute lean reader will have noticed that I completely ignored the second effect of fast feedback:

  • if the build is faster
  • we will run it more often
  • we’ll spot errors earlier
  • fewer errors will be submitted
  • the overall throughput will be increased even more

Another hidden benefit concerns the Cost of Delay (the cost of not selling the product NOW). As Cost of Delay typically trumps the work costs, this means that any improvement to the build time will bring even greater ROI in the long term.

Variations

If your sponsor agrees, you can negotiate a longer return-on-investment period for your decision rule. For example, if he agreed to increase the horizon to 2 sprints, we could undertake more build speed-up tasks. You might also prefer to only discuss really long ROI investments with him.

While designing the 777, Boeing used a similar decision rule to meet the required weight of the plane: any engineer could increase the production cost by 300$ provided it saved a pound of weight on the plane. This fixed issues they previously had with department weight budgets and escalation.

Finally, it would be great if we had the same kind of rule for technical debt! Imagine that you knew both the cost of fixing and the cost of not fixing your technical debt; you could then decide whether it makes sense to work on the debt right now or not. But that’s for a later experiment.

This was part 6 of my Lean Software Development Series. Part 5 was What optimization should we work on?, Part 7 will be A Plan for Technical Debt.

The Agile Change Management Viral Hack

We just discovered a hack to spread agile (BDD by the way) through an organization. It works provided:

  • There is a BDD testing expert in your team
  • Your team is using the work of another software team from your company

If this team does not use agile practices, they are likely to regularly create regressions or to be late providing new versions.

Use your client role in the relationship and offer your help! Spend some time with them to help them put automated customer tests in place. Be genuinely nice to them, lead by example, be available and, obviously, bring improvement. With some luck, they might soon be asking for more.

Given there are too many bugs
When you can help
Then DO IT !

Real Programmers Have Todo Lists

Productive programmers maintain a todo list. No Exception.

Why is it so important?

As programmers, here is the typical internal discussion we have all day long:

- Why the hell am I doing this again?
… hard thinking …
- Yes! I remember now:
- Encapsulate this field
- In order to move it to another class
- In order to move this other function there too
- In order to be able to remove that other static variable
- In order to refactor the login module
- In order to remove the dependency between the custom search query generator and the login module
- In order to refactor the query generator
- In order to be able to optimize it
- In order to speed up the whole website !

Phew, now that’s a list! A 9-frame stack, all in our heads, and that’s only a simple example. Knowing that we humans usually have around 7 ‘registers’ in our brains, this makes a lot of clutter to remember.

Maintaining all this in a todo list frees up some brainpower!

What happens when you use a todo list

Quite a lot in fact:

  • It’s satisfying to check something as done !
  • Our programming gets better, because we can fully concentrate on it
  • We have a clear idea about what’s done, what’s still to be done, and why we are doing it
  • We avoid getting lost in things that don’t really need to be done
  • We can make better choices about what to do, what to postpone, and what not to do
  • We can make more accurate estimates about the time it will take to finish the job

In the end, all this makes you feel less stressed and more productive !

How to do it

There are many ways to maintain a todo list. Which one to choose is not as important as having one. Here are my 2 design principles for a todo list system:

  • It goes in finer details than a typical bug tracking software
  • It should help you to concentrate on the few items you can do in the coming hours

For example, I am now using a simple TODAY … TOMORROW … LATER … scheme. I tend to avoid deep hierarchies as they get in the way of my second principle. I like to keep DONE items visible for up to 1 day, to keep track of what I did.

Here is a list of tools you can use to set up a todo list:

  • Any text editor using a simple format convention will do
  • Dropbox or any other synchronization tool can be helpful to access it from different places
  • Org Mode of Emacs has built-in support for todo lists. It’s a simple text file, but with color highlighting and shortcuts
  • Google Keep might do just fine for you
  • Google Docs can also be useful, especially if you need to share your todo list with others (when pair programming for example)
  • Trello is also a good one, it can even be used as a personal kanban board
  • Any other todo list tool that suits you !

If you are not already using a todo list, start now and become more productive! No excuse!

EDIT 2015-08-18: I am now using Personal Kanban instead of TODO lists.

Trellospectives: Remote Retrospectives With Trello

As a distributed team working from Paris and Beirut, after pair programming, it was time for our retrospectives to go remote!

Initial setup

At first we were using the video conference system. The retrospective facilitator would connect with the remote participants through instant chat and forward their post-its. We also used an extra webcam connected to a laptop in order to show the whiteboard to the other room.

Pros

  • Anyone can do it now
  • Kind of works

Cons

  • We often used to lose 5 minutes setting all the infrastructure up
  • The remote room cannot see the board clearly through the webcam
  • The animator has to spend his time forwarding the other room’s points
  • There is a ‘master’ and a ‘slave’ room

Sensei Tool

When Mohamad joined the team in Beirut, we thought that this was not going to scale … We decided to try something else. With the availability of the new conferencing system, we had the idea of using a web tool to run the retro. We found and tried senseitool.com. After creating accounts for every member of the team and scheduling a retrospective through the tool, we could all equally participate using our smartphones. The retrospective follows a typical workflow that is fine for teams new to the practice.

Pros

  • Even easier to set up
  • Works fine
  • Everyone could participate equally

Cons

  • The website was a bit slow
  • The retrospective was too guided for an experienced team; we did not get as good outputs as we used to

Trello

Asking Google, we discovered that some teams were having success using Trello for their remote retrospectives. We decided to give it a try. Ahmad from Beirut set up our first retrospective with it. He had to prepare it beforehand (as we have always done). In practice:

  • Ahmad created an organization for our team
  • We all registered on Trello and joined the organization (we like joining :-) )
  • Ahmad created a custom board for each activity
  • During the meeting, we used the video conferencing system and the instant chat to have both video and screen sharing
  • The animator used a laptop to manage the Trello boards
  • Each of us could add post-its through the smartphone app

Pros

  • The setup is easy
  • The retrospective worked well and delivered interesting output
  • We can actually all see the board
  • The smartphone app works well
  • It is possible to vote directly through Trello
  • Everyone could participate equally
  • We can classify post-its with labels
  • We can insert pictures and photos
  • There are a lot of Chrome extensions for Trello (Vertical Lists for Trello, Card Color Titles for Trello)

Cons

  • There is nothing to ‘group’ post-its together
  • We need to prepare custom boards for every activity
  • We would need to pay for the Gold version to get custom backgrounds and stickers

Conclusion

While missing a few features that would make it awesome, Trello is the best tool we found for remote retrospectives, and it is better than our initial physical setup. We’re continuing to use it, and we now have to figure out:

  • If we could find a way to speed up the meeting preparation
  • How to handle ‘graph oriented’ activities such as the ‘5 whys’

What Optimization Should We Work on (Lean Software Development Part 5)

At work, we are building a risk aggregation system. As it deals with a huge amount of numbers, it is also a huge heap of optimizations. Now that its most standard feature set is supported, our job mostly consists of making it faster.

That’s what we are doing now.

How do we choose which optimization to work on?

The system still being young, we have a wide range of options to optimize it. To name just a few: caches, better algorithms, better low-level hardware usage …

It turns out that we can use the speedup factor as a substitute for business value, and use known techniques to help us make the best decisions.

Let’s walk through an example

I. List the optimizations you are thinking of

Let’s suppose we are thinking of the following 3 optimizations for our engine:

  • Create better data structures to speed up the reconciliation algorithm
  • Optimize the reconciliation algorithm itself to reduce CPU cache misses
  • Minimize boxing and unboxing

II. Poker estimate the story points and speedup

Armed with these stories, we can poker-estimate them, both in story points and in expected speedup. As a substitute for WSJF (Weighted Shortest Job First), we can then compute the speedup rate per story point, and work on the stories with the highest speedup rate first.

Title             Story Points   Speedup votes (from /10 to x10)                        Expected speedup ratio*   Speedup rate / story point**
Data Structures   13             +10%: 4 votes, x2: 5 votes                             x 1.533                   x 1.033
Algorithm         13             /2: 1, -10%: 1, ~: 2, +10%: 1, x2: 2, x10: 2 votes     x 1.799                   x 1.046
Boxing            8              +10%: 9 votes                                          x 1.1                     x 1.012

* The expected speedup ratio is the logarithmic average (i.e. the geometric mean) of the voted speedups
** The speedup rate is speedup^(1 / story points)
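If you want to redo the computation yourself, here is a small sketch of how these two numbers can be derived from the votes (illustrative code, not a tool we actually use):

public class SpeedupEstimates {

    // Logarithmic average (geometric mean) of the voted speedup factors
    public static double expectedSpeedup(double... votedFactors) {
        double sumOfLogs = 0;
        for (double factor : votedFactors) {
            sumOfLogs += Math.log(factor);
        }
        return Math.exp(sumOfLogs / votedFactors.length);
    }

    // speedup ^ (1 / storyPoints)
    public static double speedupRatePerStoryPoint(double speedup, int storyPoints) {
        return Math.pow(speedup, 1.0 / storyPoints);
    }

    public static void main(String[] args) {
        // "Data Structures": 4 votes on +10% and 5 votes on x2, 13 story points
        double speedup = expectedSpeedup(1.1, 1.1, 1.1, 1.1, 2, 2, 2, 2, 2);
        System.out.println(speedup);                               // ≈ 1.533
        System.out.println(speedupRatePerStoryPoint(speedup, 13)); // ≈ 1.033
    }
}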

So based on speedup rate, here is the order in which we should perform the stories:

  1. Algorithm
  2. Data Structures
  3. Boxing

III. And what about the risks?

This poker estimation tells us something else …

We don’t have a clue about the speedup we will get by trying to optimize the algorithm!

The votes range from /2 to x10! This is the perfect situation for an XP spike.

Title                                                                     Story points   Expected speedup rate
Algorithm spike: measure out-of-context CPU cache optimization speedup    2              ?

In order to compute the expected speedup rate, let’s suppose that there are 2 possible futures: one where we get a high speedup, and another where we get a low one.

These are computed by splitting the votes into a pessimistic half and an optimistic half, and taking the logarithmic average of each:

  • low_speedup = 0.846
  • high_speedup = 3.827

If the spike succeeds

We’ll first work on the spike, and then on the algorithm story. In the end, we would get the speedup of the algorithm optimization.

  • spike_high_speedup = high_speedup = 3.827

If the spike fails

We’ll also start by working on the spike. Afterwards, instead of the algorithm story, we’ll tackle other optimization stories, yielding our average speedup rate for the duration of the algorithm story. The average speedup rate can be obtained from historical benchmark data, or by averaging the speedup rates of the other stories.

  • average_speedup_rate = (1.033 * 1.012)^(1/2) = 1.022
  • spike_low_speedup = average_speedup_rate^algorithm_story_points = 1.022^13 = 1.326

Spike speedup rate

We can now compute the average expected speedup for the full ‘spike & algorithm’ period. From this we can get the speedup rate and, finally, prioritize this spike against the other stories in our backlog.

  • spike_speedup = (spike_low_speedup * spike_high_speedup)^(1/2) = 2.253
  • spike_speedup_rate = spike_speedup^(1/(spike_story_points + algorithm_story_points)) = 2.253^(1/(2 + 13)) = 1.056

IV. Putting it all together

Here are all the speedup rates for the different stories:

Title             Speedup rate / story point
Data Structures   x 1.033
Algorithm         x 1.046
Boxing            x 1.012
Algorithm spike   x 1.056

Finally, here is the optimal order in which we should perform the stories:

  • Algorithm spike
  • Algorithm (only if the spike proved it would work)
  • Data Structures
  • Boxing

Summary

The math is not that complex, and a simple formula can be written to compute the spike speedup rate:
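Spelled out with the names used in the computations above, it boils down to:

  • spike_low_speedup = average_speedup_rate^algorithm_story_points
  • spike_high_speedup = high_speedup
  • spike_speedup = (spike_low_speedup * spike_high_speedup)^(1/2)
  • spike_speedup_rate = spike_speedup^(1/(spike_story_points + algorithm_story_points))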

I think most experienced engineers would have come to the same conclusion by gut feeling …

Nevertheless, I believe that systematically applying such a method when prioritizing optimizations can lead to a greater speedup rate than the competition’s in the long run. This is a perfect example where taking measured risks can pay off!

This was part 5 of my Lean Software Development Series. Part 4 was Measure the business value of your spikes and take high payoff risks, Part 6 will be You don’t have to ask your boss for a fast build.