Setting Up Octopress With Vagrant and Rbenv

I recently got my hands on an abandoned laptop that was better than the one I was using for my personal hacking, so I decided to switch to it. This felt like the right time to learn Vagrant and save myself some time later on. I settled on creating a Vagrant environment for this Octopress blog. That proved a lot longer than I thought it would.

If you want to jump to the solution, just have a look at this git change. Here is the slightly longer version.

  • Add a Vagrantfile and set up a VM. There are explanations about how to do this all over the web, so that was easy.

  • Provision your VM. That proved a lot more complex. There are a lot of examples using variants of Chef, but Chef’s steep learning curve seemed unnecessary for what I wanted to do. Eventually, I figured it out using simple shell provisioning:

  config.vm.provision "shell", inline: <<-SHELL
    echo "Updating package definitions"
    sudo apt-get update

    echo "Installing git and build tools"
    sudo apt-get -y install git autoconf bison build-essential libssl-dev libyaml-dev libreadline6-dev zlib1g-dev libncurses5-dev libffi-dev libgdbm3 libgdbm-dev
  SHELL

  config.vm.provision "shell", privileged: false, inline: <<-SHELL
    git config --global user.name "john.doe"
    git config --global user.email "john.doe@mail.com"

    if [ ! -d "$HOME/.rbenv" ]; then
      echo "Installing rbenv and ruby-build"

      git clone https://github.com/sstephenson/rbenv.git ~/.rbenv
      git clone https://github.com/sstephenson/ruby-build.git ~/.rbenv/plugins/ruby-build

      echo 'export PATH="$HOME/.rbenv/bin:$PATH"' >> ~/.bashrc
      echo 'eval "$(rbenv init -)"' >> ~/.bashrc

    else
      echo "Updating rbenv and ruby-build"

      cd ~/.rbenv
      git pull

      cd ~/.rbenv/plugins/ruby-build
      git pull
    fi

    export PATH="$HOME/.rbenv/bin:$PATH"
    eval "$(rbenv init -)"

    if [ ! -d "$HOME/.rbenv/versions/2.2.0" ]; then
      echo "Installing ruby"

      rbenv install 2.2.0
      rbenv global 2.2.0

      gem update --system
      gem update

      gem install bundler
      bundle config path vendor/bundle

      rbenv rehash
    fi

    cd /vagrant
    bundle install

    if [ ! -d "/vagrant/_deploy" ]; then
      bundle exec rake setup_github_pages["git@github.com:philou/philou.github.com"]
      git checkout . # Revert github deploy url to my domain
      cd _deploy
      git pull origin master # pull to avoid non fast forward push
      cd ..
    fi
  SHELL
  • Set up port forwarding. That should have been simple … after forwarding port 4000 to 4000, I still could not access my blog preview from the host machine. After searching the web for a long time, I eventually fixed it by adding --host 0.0.0.0 to the rackup command line in the Octopress Rakefile.
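For reference, here is a minimal sketch of the two pieces involved; the exact rackup line may differ in your Octopress version:

  # Vagrantfile: forward the Octopress preview port to the host
  config.vm.network "forwarded_port", guest: 4000, host: 4000

  # Rakefile, preview task: bind rackup to all interfaces instead of
  # localhost only, so the forwarded port is reachable from the host
  # (server_port is defined at the top of the Octopress Rakefile)
  rackupPid = Process.spawn("rackup --host 0.0.0.0 --port #{server_port}")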

  • Set up ssh forwarding. In order to be able to deploy to GitHub Pages with my local ssh keys, I added the following to my Vagrantfile.

  # The path to the private key to use to SSH into the guest machine. By
  # default this is the insecure private key that ships with Vagrant, since
  # that is what public boxes use. If you make your own custom box with a
  # custom SSH key, this should point to that private key.
  # You can also specify multiple private keys by setting this to be an array.
  # This is useful, for example, if you use the default private key to
  # bootstrap the machine, but replace it with perhaps a more secure key later.
  config.ssh.private_key_path = "~/.ssh/id_rsa"

  #  If true, agent forwarding over SSH connections is enabled. Defaults to false.
  config.ssh.forward_agent = true

  • Fix the shared folders. The VirtualBox Guest Additions inside the box can lag behind the host’s VirtualBox version, which can break the /vagrant shared folder mount. Installing the vagrant-vbguest plugin should keep them in sync:

vagrant plugin install vagrant-vbguest
vagrant reload

I’ll tell you if this does not do the trick.

I admit it was a lot longer than I expected, but at least now it’s repeatable!

Next steps will be to use Docker providers and a Dockerfile to factor out provisioning and speed up VM startup.

Measure the Business Value of Your Spikes and Take High Payoff Risks (Lean Software Development Part 4)

Lately at work, other teams have unexpectedly asked us whether they could use our product for something we had not foreseen. As we are not sure whether we’ll be able to tune our product to their needs, we are thinking about doing a short study to find out. This looks like a great opportunity to try out Cost of Delay analysis on uncertain tasks.

Unfortunately, I cannot write the details of what we are creating at work in this blog, so let’s assume that we are building a Todo List Software.

We have been targeting the enterprise market. Lately, we’ve seen some interest from individuals planning to use our todo list system for themselves at home.

For individuals, the system would need to be highly available and live 24/7 over the internet; latency would also be critical to retain customers, but the product could gain market share with a basic feature set.

On the other hand, enterprise customers need advanced features and absolute data safety, but they can cope with nightly restarts of the server.

In order to know whether we can make our todo list system available and fast enough for the individuals market, we are planning a pre-study, so as not to waste time on an unreachable goal. In XP terms, this is a spike, and it’s a bunch of experiments rather than a theoretical study.

When should we prioritize this spike?

If we are using the Weighted Shortest Job First (WSJF) metric to prioritize our work, we need to estimate the cost of delay of a task to determine its priority. Hereafter I will explain how we could determine the value of this spike.

Computing the cost of delay

The strategy to compute the cost of delay of such a risk mitigation task is to compute the difference in cost of delay with and without doing it.

1. The products, the features, the MVP and the estimates

As I explained in a previous post, for usual features, the cost of delay is equivalent to the feature’s value. Along with our gross estimates, here are the relative values our product owner gave us for the different products we are envisioning.

| Feature | $ Enterprise | $ Individuals | Estimated work |
| --- | --- | --- | --- |
| Robustness | 20* | 20* | 2 |
| Availability | 0 | 40* | 2 |
| Latency | 0 | 40* | 1 |
| Durability | 40* | 13 | 2 |
| Multi user lists | 20* | 8 | 2 |
| Labels | 20 | 13 | 2 |
| Custom report tool | 13 | 0 | 3 |
| TOTAL Cost of Delay of v1 | 80 | 100 | |

Starred (*) features are required for the first version of the product. Features with a value of 0 are not required at all for that product. Finally, unstarred features with a nonzero business value would be great for a second release.

It seems that the individuals market is a greater opportunity, so it’s worth thinking about. Unfortunately, for the moment, we really don’t know whether we’ll manage to reach the high availability that is required for such a product.

The availability spike we are envisioning would take 1 unit of time.

2. Computing the cost of delay of this spike

The cost of delay of a task involving some uncertainty is the probabilistic expected value of its cost of delay. We estimate that we have a 50% chance of reaching the availability required by individuals. This means that CoD of the spike = 50% * CoD if we reach the availability + 50% * CoD if we don’t.

2.a. The Cost of Delay if we get the availability

Let’s consider the future in which we manage to reach the availability. The cost of delay of a spike task is the difference in cost with and without the spike, divided by the number of relevant months.

2.a.i. The cost if we don’t do the spike

Unfortunately, at this point in this future, we don’t yet know that we’ll reach the availability.

| Feature | $ Enterprise | $ Individuals | $ Expected | Estimated work | WSJF |
| --- | --- | --- | --- | --- | --- |
| Latency | 0 | 40* | 20 | 1 | 20 |
| Durability | 40* | 13 | 26 | 2 | 13 |
| Robustness | 20* | 20* | 20 | 2 | 10 |
| Availability | 0 | 40* | 20 | 2 | 10 |
| Labels | 20 | 13 | 17 | 2 | 9 |
| Multi user lists | 20* | 8 | 14 | 2 | 7 |
| Custom report tool | 13 | 0 | 8 | 3 | 3 |

We’ll resort to WSJF to prioritize our work. Here is what we’ll be able to ship:

| Product | Delay | CoD | Cost |
| --- | --- | --- | --- |
| Individuals | 7 | 100 | 700 |
| Individuals Durability | 7 | 13 | 91 |
| Individuals Labels | 9 | 13 | 117 |
| Enterprise | 11 | 80 | 880 |
| Enterprise Labels | 11 | 20 | 220 |
| Individuals Multi user lists | 13 | 8 | 104 |
| Enterprise Custom reports | 16 | 13 | 208 |
| TOTAL | | | 2320 |
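As a side note, each cost cell above is simply delay × cost of delay; here is a minimal Ruby sketch of the computation, using the figures from the table:

  # Each entry: product, delay until shipping, cost of delay per time unit
  shipments = [
    ["Individuals",                   7, 100],
    ["Individuals Durability",        7,  13],
    ["Individuals Labels",            9,  13],
    ["Enterprise",                   11,  80],
    ["Enterprise Labels",            11,  20],
    ["Individuals Multi user lists", 13,   8],
    ["Enterprise Custom reports",    16,  13],
  ]

  total_cost = shipments.map { |_, delay, cod| delay * cod }.inject(:+)
  puts total_cost # => 2320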

2.a.ii. The cost if we do the spike

In this case, we would start with the spike, and it would tell us that we can reach the availability required by individuals, and so that we should go for this market first. Here is our resulting plan:

| Feature | $ Enterprise | $ Individuals | Estimated work | Enterprise WSJF | Individuals WSJF |
| --- | --- | --- | --- | --- | --- |
| Feasibility spike | | | 1 | | |
| Latency | 0 | 40* | 1 | | 40 |
| Availability | 0 | 40* | 2 | | 20 |
| Robustness | 20* | 20* | 2 | 10 | 10 |
| Durability | 40* | 13 | 2 | 20 | 7 |
| Multi user lists | 20* | 8 | 2 | 10 | 4 |
| Labels | 20 | 13 | 2 | 10 | 7 |
| Custom report tool | 13 | 0 | 3 | 4 | |

Here is how we will be able to ship:

| Product | Delay | CoD | Cost |
| --- | --- | --- | --- |
| Individuals | 6 | 100 | 600 |
| Individuals Durability | 8 | 13 | 104 |
| Individuals Multi user lists | 10 | 8 | 80 |
| Enterprise | 10 | 80 | 800 |
| Individuals Labels | 12 | 13 | 156 |
| Enterprise Labels | 12 | 20 | 240 |
| Enterprise Custom reports | 15 | 13 | 195 |
| TOTAL | | | 2175 |

2.a.iii. Cost of delay of the spike if we reach the availability

By doing the spike, we would save 2320 - 2175 = 145 $.

Without the spike, we would only discover whether we reach the availability when we try it, around time 7 (see 2.a.i).

So the cost of delay of the spike would be around 145/7 = 21 $/month.

2.b. The Cost of Delay if we don’t get the availability

Let’s now consider the future in which we don’t manage to reach the availability.

Using the same logic as before, let’s see what happens.

2.b.i. The cost if we don’t do the spike

Unfortunately, at this point in this future, we don’t yet know that we’ll fail to reach the availability.

| Feature | $ Enterprise | $ Individuals | $ Expected | Estimated work | WSJF |
| --- | --- | --- | --- | --- | --- |
| Latency | 0 | 40* | 20 | 1 | 20 |
| Durability | 40* | 13 | 26 | 2 | 13 |
| Robustness | 20* | 20* | 20 | 2 | 10 |
| Availability | 0 | 40* | 20 | 2 | 10 |
| Multi user lists | 20* | 8 | 14 | 2 | 7 |
| Labels | 20 | 13 | 17 | 2 | 9 |
| Custom report tool | 13 | 0 | 8 | 3 | 3 |

When we fail at the availability, we’ll swap multi user lists and labels in the backlog to be able to ship to enterprises as quickly as possible. Here is what we’ll ship:

| Product | Delay | CoD | Cost |
| --- | --- | --- | --- |
| Enterprise | 9 | 80 | 720 |
| Enterprise Labels | 11 | 20 | 220 |
| Enterprise Custom reports | 14 | 13 | 182 |
| TOTAL | | | 1122 |

2.b.ii. The cost if we do the spike

In this case, we would start with the spike, and it would tell us that we won’t reach the availability required by individuals, and so that there’s no need to run after this market now.

| Feature | $ Enterprise | Estimated work | WSJF |
| --- | --- | --- | --- |
| Feasibility spike | | 1 | |
| Durability | 40* | 2 | 13 |
| Robustness | 20* | 2 | 10 |
| Multi user lists | 20* | 2 | 7 |
| Labels | 20 | 2 | 9 |
| Custom report tool | 13 | 3 | 3 |

Here is how we will be able to ship:

| Product | Delay | CoD | Cost |
| --- | --- | --- | --- |
| Enterprise | 7 | 80 | 560 |
| Enterprise Labels | 9 | 20 | 180 |
| Enterprise Custom reports | 12 | 13 | 156 |
| TOTAL | | | 896 |

2.b.iii. Cost of delay of the spike if we don’t reach the availability

By doing the spike, we would save 1122 - 896 = 226 $.

As before, without the spike, we would only discover whether we reach the availability when we try it, around time 7.

So the cost of delay of the spike is around 226/7 = 32 $/month.

2.c. Computing the overall cost of delay of the spike

Given that we estimate a 50% chance of reaching the availability, the overall expected cost of delay is:

50% * 21 + 50% * 32 = 26.5 $/month

Injecting the spike into the backlog

With the cost of delay of the spike, we can compute its WSJF and prioritize it against the other features.

| Feature | $ Enterprise | $ Individuals | Expected $ | Estimated work | WSJF |
| --- | --- | --- | --- | --- | --- |
| Feasibility Spike | | | 26.5 | 1 | 26.5 |
| Latency | 0 | 40* | 20 | 1 | 20 |
| Durability | 40* | 13 | 26 | 2 | 13 |
| Robustness | 20* | 20* | 20 | 2 | 10 |
| Availability | 0 | 40* | 20 | 2 | 10 |
| Multi user lists | 20* | 8 | 14 | 2 | 7 |
| Labels | 20 | 13 | 17 | 2 | 9 |
| Custom report tool | 13 | 0 | 8 | 3 | 3 |

The spike comes out at the top of our backlog, which confirms our gut feeling.

Conclusion

Doing this long study confirmed classic rules of thumb:

  • Don’t develop many products at the same time
  • Do some Proof Of Concepts early before starting to work on uncertain features
  • Tackle the most risky features first

By improving the inputs, we could get higher-quality results:

  • If we had access to real sales or finance figures for the costs
  • If we did some sort of poker risk estimation instead of just guessing a 50% chance

Obviously, the analysis itself is not perfect, but it points to good choices. And as Don Reinertsen puts it, using an economic framework, the spread between people’s estimates goes down from 50:1 to 2:1! This seems a good alternative to the experience-and-gut-feeling approach, which:

  • can trigger heated, unfounded discussions
  • often means high dependence on the intuition of a single individual

As everything is quantitative though, one could imagine that with other figures, we would have reached a different conclusion, such as:

  • The spike is not worth doing (it costs more than it might save)
  • The spike can wait a bit

This was part 4 of my Lean Software Development series. Part 3 was How to measure your speed with your business value, Part 5 will be “Measure the value of the lean startup ‘learning’”.

From Zero to Pair Programming Hero

In my team at Murex, we’ve been doing pair programming 75% of our time for the past 9 months.

Before I explain how we got there, let’s summarize our observations:

  • No immediate productivity drop
  • Pair programming is really tiring
  • Quality expectations throughout the team soared
  • As a result, the quality actually increased a lot
  • But existing technical debt suddenly became incompatible with the team’s own quality criteria. We went on to pay it back, which slowed us down for a while
  • Productivity is regularly going up as the technical debt is reduced
  • It helped us to define shared coding conventions
  • Pair programming is not for everyone. It has likely precipitated the departure of one team member
  • It certainly helped the team to jell
  • Newcomers can submit code on day 1
  • The skills of everyone increase a lot quicker than before
  • Bonus: it improves the personal skills of all the team members

If you are interested in how we got there, read on; here is our story:

Best Effort Code Reviews

At the beginning, only experienced team members were reviewing the submitted code and making remarks for improvement in our default review system, Code Collaborator.

This practice proved tedious, especially with large change lists. As it was not systematic, reviewers constantly had to remind the reviewees to act on the remarks, which hindered empowerment.

Systematic Code Reviews

Observing this during a retrospective, we decided to add code reviews to our Definition of Done. Inspired by best practices from the Open Source world, we created a Ruby review script that automatically creates Code Collaborator reviews based on Perforce submits. Everyone was made an observer of every code change, and everyone was to participate in the reviews.

At first, to make this practice stick, a few benevolent review champions had to comment on all the submitted code; once the habit was formed, everyone participated in the reviews.

Code Collaborator spamming was certainly an issue, but the Code Collaborator System Tray App helped each of us keep up to date with the remaining reviews to do.

Bonus: as everyone was doing reviews, and reviews of small changes are easier, submits became smaller.

This was certainly an improvement, but it remained time consuming. We believed we could do better.

Pair Programming

After 1 or 2 months of systematic code reviews, during a retrospective (again), nearly all the team members decided to give pair programming a try.

We felt the difference after the very first day: pair programming is intense, but the results are great. We never looked back.

With pair programming in place, we had to settle on a pair switching frequency. We started with the full story, tried a one-day rotation, and eventually settled on “MIN(1 week, the story)”.

This is not set in stone and is team dependent. It may vary depending on the learning curve required to work on a story. We might bring it down later.

Remote Pair Programming

Ahmad, a Murex guy from Beirut, joined the team a few months ago. We did not want to change our way of working, so we decided to try remote pair programming.

Initial Setup

At the beginning, we were using Lync (Microsoft’s chat system) with webcams, headphones and screen sharing. It works, but Lync’s screen sharing does not allow seamless remote control between Paris and Beirut. Here is how we coped with it:

  • Use Lync’s “Give Control” feature only exceptionally: it lags too much
  • Do small submits, and switch control at submits
  • When you cannot submit soon, shelve the code on Perforce (with git, you would just pull your buddy’s repo), and switch control

As a result, Ahmad became productive a lot more quickly. We are not 2 sub-teams focusing on their own areas of expertise, but 1 single distributed team sharing everything.

Improved Setup

Remote pair programming as such is workable, but does not feel as easy as being colocated. Here are a few best practices we are now using to improve the experience:

  • Keep your pair’s video constantly visible, either on your laptop or in a corner of your main screen: it’s important to see your pair’s facial expressions all the time
  • In order to allow eye contact, place your webcam next to the window containing the video of your pair.
  • Using 2 cameras, ManyCam and a small whiteboard even lets us share drawings!

Future Setup

We are currently welcoming a new engineer in Beirut, and as we will be doing more remote pair programming, we’ll need to make it more seamless. Control sharing and lag through Lync remain the main issues. We don’t have a solution for these yet, but here are the fixes we are looking into:

  • Saros is an Eclipse plugin for remote, concurrent, real-time editing of files. Many people can edit the files at the same time. We are waiting for the IntelliJ version, which is still under development.

  • Floobits is a commercial equivalent of Saros. We tried it and it seems great. It’s not cheap though, especially with in-house servers.
  • Screenhero is a commercial low-lag, multi-cursor screen sharing tool. Unfortunately, it currently does not work behind a proxy, so we have not been able to evaluate it yet.

Final thoughts

I believe that colocated, and remote, pair programming are becoming key skills for a modern software engineer.

I hope this will help teams envisioning pair programming. We’d love to read about your best practices as well!

Can Agile Teams Commit?

Making commitments to deliver software is always difficult. Whatever margin you take, it invariably seems wrong afterward …

Most Scrum, XP or Kanban literature does not address the issue, simply saying that commitment is not required as long as you continuously deliver value (faster than your competition). That’s kind of true, but sometimes you need commitments, for example if your customer is not yet ready to deploy your new software worldwide every Friday …

So how can you make commitments while remaining agile?

Roughly speaking, you have 2 options:

Do it as usual

Discuss with your experts, take some margin, do whatever voodoo you are used to. This will not be worse than it used to be. It might even turn out better: thanks to your agile process, you should be able to deploy with a reduced scope if needed.

Use your agile process metrics

This technique is explained in The Art of Agile Development, in section “Risk Management”, page 227.

Let’s estimate the time you’ll need before the release:

  • First, list all the stories you want in your release
  • Then estimate them with story points.
  • Now that you have the total number of story points to deliver before the release, apply a generic risk multiplier:
| Chances of making it | Using XP practices | Otherwise | Description |
| --- | --- | --- | --- |
| 10% | x1 | x1 | Almost impossible (ignore) |
| 50% | x1.4 | x2 | 50-50 chance (stretch goal) |
| 90% | x1.8 | x4 | Virtually certain (commit) |

As explained in The Art of Agile Development page 227, these numbers come from DeMarco’s Riskology system. Using XP practices means fixing all bugs at every iteration, sticking rigorously to DONE-DONE, and having a stable velocity over iterations.

This factor will account for unavoidable scope creep and wrong estimates.

  • Finally, use your iteration velocity to know how many sprints you’ll need to finish.

For example :

Suppose you have 45 stories that account for a total of 152 story points, and your velocity is 23 story points per iteration. You need to make a commitment, but hopefully you are applying all the XP practices. So you can compute:

Number of sprints = 152 * 1.8 / 23 ≈ 12 sprints (24 weeks, or about 5.5 months)
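Here is the same computation as a tiny Ruby helper (a sketch; the function name is mine):

  # Committed number of sprints, given total story points, velocity per
  # sprint, and the risk multiplier picked from the table above
  def committed_sprints(story_points, velocity, risk_multiplier)
    (story_points * risk_multiplier / velocity.to_f).ceil
  end

  committed_sprints(152, 23, 1.8) # => 12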

What about unknown risks?

Unfortunately, using just the previous technique, you might miss some unavoidable, important tasks you’ll need to do before you can release. Think about monitoring tools and stress testing: when did your product owner ever prioritize these? These are risk management activities that need to be added to your backlog in the first place. Here is how to list most of them:

  • Do a full team brainstorming about anything that could possibly go bad for your project
  • For every item discovered, estimate
    • Its probability of occurrence (low, medium, high)
    • Its consequences (delay, cost, cancellation of the project)
  • For every item, decide whether to
    • avoid it: you have to find a way to make sure this will not happen
    • contain it: you’ll deal with the risk when it occurs
    • mitigate it: you have to find a way to reduce its impact
    • ignore it: don’t bother with unlikely risks of no importance
  • Finally, create stories to match your risk management decisions. These might be :
    • A monitoring system helps to contain a risk
    • Logging helps to mitigate a risk
    • An automated scaling script for situations of high demand helps to both mitigate and contain a risk
  • Finally, add these stories to your backlog, and repeat the previous section. You can now make your commitment.

Afterthoughts

Contrary to widespread belief, agile practices and metrics actually help to make commitments.

It would be better if we had project-specific statistics instead of generic risk multipliers. It’s a shame that task tracking tools (JIRA & friends) still don’t help us with this.

We should keep in mind, though, that estimating all your backlog in advance takes some time and is actually a kind of waste. If possible, simply building (and selling) the thing that is most useful now is simpler (this guy calls it drunken stumble).


Performance Is a Feature

Now that is a widespread title for blog articles! Just search Google, and you’ll find “Performance is a feature” on Coding Horror and others.

What’s in it for us?

If performance is indeed a feature, then it can be managed like any other feature:

  • It should result from use cases:

    During use case X, the user should not wait more than Y seconds for Z

  • It can be split into user stories:

    • Story 1: During use case X, the user should not wait more than 2*Y seconds for Z
    • Story 2: During use case X, the user should not wait more than Y seconds for Z

  • These stories can be prioritized against other stories:

    • Let’s forget about performance for now and deliver functionality A as soon as it’s ready; we’ll speed things up later.
    • Let’s fix basic performance constraints for use case X now; every story will have to comply with these constraints later.

  • The performance of these use cases should be automatically tested against regressions (see the test sketch after this list):

    • If we slow things down too much and these tests break, we’ll have to optimize the code.
    • But as long as we don’t break the tests, it’s OK to unoptimize the code!
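Here is a minimal sketch of what such an automated performance non-regression test could look like in Ruby with minitest; run_use_case_x and the 2-second budget are made up for the example:

  # A sketch of a performance non-regression test.
  # run_use_case_x is a placeholder for the real scenario under test.
  require "minitest/autorun"
  require "benchmark"

  class UseCaseXPerformanceTest < Minitest::Test
    Y_SECONDS = 2.0 # the budget fixed by the story

    def run_use_case_x
      # drive the application through use case X here
    end

    def test_user_never_waits_more_than_y_seconds_for_z
      elapsed = Benchmark.realtime { run_use_case_x }
      assert_operator elapsed, :<=, Y_SECONDS, "use case X became too slow"
    end
  end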

Maybe that’s a chance to stop performance-related gut-feeling quarrels!

How to Setup a Weekly Fruit Basket in No Time

"A multi-fruit sandwich"

If you’re interested in agile software development, just read the 1st edition of Kent Beck’s Extreme Programming Explained: Embrace Change. It’s only 200 pages, it was written in 1999, but it’s still tremendously relevant, and it has the highest ratio of information per page on the subject.

If you actually read it, you’ll notice that Kent emphasizes having food at the office. He claims that it improves morale and builds team cohesion.

As my company already provides free drinks, my first attempt was to ask for weekly fresh fruit baskets. They are currently experimenting with regular self-service fruit basket deliveries in some offices, unfortunately not in ours yet. Let’s hope this changes soon. Meanwhile, we decided to handle the thing ourselves.

Here comes the fruit basket lean startup!

First, let’s set up the simplest way that could possibly work:

  • Invest 10€
  • Buy some fruits from the closest shop
  • Put them in a basket next to my desk
  • Let people buy them for 50c each
  • Leave a plastic cup next to the basket to receive the money
  • Keep the accounting public and visible, on your wiki for example
  • Repeat every Monday

"Our fruit basket at my desk"

Then, verify that it is sustainable

It turns out it works fine!

… until someone started stealing money!

It happened whenever we forgot to hide the money cup before leaving in the evening, obvious isn’t it? We tried the following, in that order:

  1. Set up an automatic reminder to hide the money before leaving … FAIL
  2. Set up a fake webcam and a warning notice … FAIL
  3. Only keep 1€ worth of change in the money cup, and repeatedly lock up the rest in a safe place … SUCCESS !

With just a bit of time, anyone, anywhere can set up a fresh fruit basket at work. It does improve morale and build the team.

How to Measure Your Speed With Your Business Value ? (Lean Software Development Part 3)

There is a French idiom that basically goes:

No use to run, all that is needed is to start on time …

an agile engineer would add

… and to go in the good direction

Indeed, velocity or mean cycle time as speed measures have their shortcomings:

  • They can be falsified by story point inflation!
  • They do not tell the team or its product owner whether they are working on the right thing.

Wouldn’t it be great if we could track the business value we are creating instead? Wouldn’t it be more motivating for developers to know how they are contributing to the bottom line? Wouldn’t it help people align inside the organization?

How to track and prioritize based on your business value

From Reinertsen’s Flow book, we learned that the cost of delay is the main driver of value creation: the faster you deliver a feature, the less of its cost of delay you pay, and the more value you create for your company. This article suggests that the cost of delay can be computed with the following formula:

cost of delay = f(user business value) + g(time criticality) + h(risk reduction opportunity)

This other article suggests that there are different types of tasks corresponding to the different terms of the formula above.

"Different kind of tasks"

Here is how we could link the 2 articles:

  • Stories with deadlines: whether through legal or market constraints, not doing these will put you out of business (‘time criticality’ in the formula)
  • Stories that influence the bottom line: by increasing the sales figures when delivered, or decreasing them when not delivered, which is kind of the same (‘user business value’ in the formula)
  • Risk reduction tasks: by mitigating risk or streamlining the process, these tasks actually improve the bottom line of other stories (‘risk reduction opportunity’ in the formula)

The latter type of task will be detailed in other articles. Let’s focus on the other two.

The case of the deadlined feature

First, I’d say its business value is 0, until it’s too late. You should not be working on it too soon, but you should not be working on it too late either!

In his book The Art of Agile Development, James Shore explains in great detail how an agile team can commit to deliverables (I really recommend this part, I might even write a post about it). He explains that in order to commit, teams should multiply their estimates by 4, or by 1.8 if they are very rigorous in their application of all the XP practices.

So a rule to handle such a task could be to:

  • estimate it
  • multiply that estimate by 4
  • subtract this time from the deadline
  • prioritize it so that it can be started at that date, but not earlier (see the sketch after this list)
  • don’t expect to be creating any value by completing these stories
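A quick Ruby sketch of this rule, with made-up figures:

  require "date"

  raw_estimate_weeks = 2                        # the team's estimate
  committed_weeks    = raw_estimate_weeks * 4   # x4 commitment margin
  deadline           = Date.new(2015, 6, 1)     # made-up legal deadline

  latest_start = deadline - committed_weeks * 7 # Date minus days
  puts "Start the story on #{latest_start}, not earlier"
  # => Start the story on 2015-04-06, not earlier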

What’s the business value of other typical user stories?

This article suggests that in this case the cost of delay is equal to the business value of the feature for the user. But how can we get an idea of its actual user business value?

Before actually selling and getting the money, it’s just an estimate. With good information, some people will make better estimates than others; nevertheless, it’s still an estimate. Let’s try a “Business Value Poker”! Here are a few ideas about how to conduct it:

  • Estimate business value at the same time as you estimate the complexity of a story
  • Create some business value $ poker estimate cards, write an app for this, or bring in some poker chips to estimate the value
  • Invite some business people (sales, marketing …) to the meeting to get real knowledge (being organized as a feature team will help)

"Hands pushing poker chips for an all-in"

In the end, when you have the estimated cost of delay and duration of every task, you can directly prioritize using WSJF (Weighted Shortest Job First):

WSJF = Cost of Delay / Duration

Just do the tasks in decreasing order of WSJF.
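In Ruby, with made-up figures, the prioritization is just a sort:

  # Cost of delay in $/month and duration in months, per feature
  backlog = {
    "Feature A" => { cod: 40, duration: 2 },
    "Feature B" => { cod: 30, duration: 1 },
    "Feature C" => { cod: 20, duration: 4 },
  }

  wsjf = lambda { |f| f[:cod] / f[:duration].to_f }
  backlog.sort_by { |_, f| -wsjf.call(f) }.each do |name, f|
    puts "#{name}: WSJF = #{wsjf.call(f)}"
  end
  # => Feature B: WSJF = 30.0
  #    Feature A: WSJF = 20.0
  #    Feature C: WSJF = 5.0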

At the end of the sprint, just as we track the story points we completed with the velocity, we could track the business value we created: that would be our business value speed. If you have access to real sales numbers, it might be interesting to see whether the figures correlate.

Afterthoughts

The more I learn about lean principles, the more I find our current issue tracking systems (I’m used to Jira) limited. They seem to be databases with a nice UI, whereas what we need are tools that help us make better decisions out of the multitude of items … How come they do not provide something as simple as the WSJF?

Edit 12/09/2014

I got some pretty positive feedback from practicing these business value pokers. Inviting the product owner forced him to explain in detail why he believed some features were more valuable than others. Conversely, it allowed the developers to highlight how some seemingly unimportant stories were critical to a long term goal. In the end, everyone, including the product owner, is asking for more. It’s a good practice that helps introduce the business value / cost of delay concept.

This was part 3 of my series of articles about Lean Software Development. Part 2 was Why eXtreme Programming works ?, Part 4 will be Measure the business value of your spikes and take high payoff risks.

Why eXtreme Programming Works ? (Lean Software Development Part 2)

I’ve been programming for quite some time now, in different teams, using various methodologies. I also had the luck to do XP on at least 3 different projects. To me the conclusion is obvious: XP delivers more. Even better, programmers working with XP seem to be happier. The only thing I’ve seen that works better than XP is fine-tuning it once the team has mastered the main principles.

"An extreme diet pill bottle"

XP was first put in place at the Chrysler C3 project, after Kent Beck was called in for Smalltalk performance issues. He discovered that these were only the tip of the iceberg: everything was going astray. Since he was the expert in the room, people started asking him how to organize. I remember reading some time ago that without having thought about it before, he gathered the most efficient programming techniques he knew into a new process. XP was born.

So the question is: what did Kent Beck put into XP that makes it work so well? Let’s go through the Flow book and its 175 lean product development principles to see if we can find some explanations.

"Concentric circles featuring the 12 core XP practices"

Going through the 12 core XP practices, the main Scrum ceremonies and a few common additions, I’ll try to explain why they work through the Flow book’s principles.

Whole Team

This is the same thing as a pizza team, feature team or cross-functional team. It just means putting everyone involved in the creation of the product in the same room (on-site customer, sales, product people, programmers, quality controllers, operations people …).

| Ref | The principle of … | Summary | Page |
| --- | --- | --- | --- |
| B17 | Proximity | Proximity enables small batch sizes | 129 |
| W12 | T-Shaped resources | Develop people who are deep in one area and broad in many | 155 |
| W13 | Skill overlap | Cross-train resources at adjacent processes | 156 |
| F24 | Alternate routes | Develop and maintain alternate routes around points of congestion | 201 |
| F25 | Flexible resources | Use flexible resources to absorb variation | 202 |
| FF14 | Locality of feedback | Whenever possible, make the feedback local | 226 |
| FF19 | Colocation | Colocation improves almost all aspects of communication | 230 |
| FF23 | Overlapping measurement | To align behaviors, reward people for the work of others | 233 |
| D7 | Alignment | There is more value created with overall alignment than local excellence | 252 |
| D13 | Peer-level coordination | Tactical coordination should be local | 257 |
| D18 | Response frequency | We can’t respond faster than our (internal) response frequency | 261 |
| D22 | Face-to-face communication | Exploit the speed and bandwidth of face-to-face communications | 263 |

Planning Game

The work is split into user stories. The customer estimates the business value of each story before the programmers poker-estimate the work each one requires. The game is then to maximize the scheduled business value creation for the coming iteration (1 to 3 weeks).

| Ref | The principle of … | Summary | Page |
| --- | --- | --- | --- |
| E4 | Economic value-added | The value added by an activity is the change in the economic value of the work product | 32 |
| E7 | Imperfection | Even imperfect answers improve decision making | 36 |
| E9 | Continuous economic tradeoffs | Economic choices must be made continuously | 37 |
| E10 | Perishability I | Many economic choices are more valuable when made quickly | 38 |
| E14 | Market I | Ensure decision makers feel both cost and benefit | 42 |
| E15 | Optimum decision timing | Every decision has its optimum economic timing | 44 |
| Q10 | Queueing discipline | Queue cost is affected by the sequence in which we handle the jobs in the queue | 69 |
| Q13 | Queue size control I | Don’t control capacity utilization, control queue size | 75 |
| Q14 | Queue size control II | Don’t control the cycle time, control queue size | 76 |
| V5 | Variability pooling | Overall variation decreases when uncorrelated random tasks are combined | 95 |
| V6 | Short-term forecasting | Forecasting becomes exponentially easier at short time-horizons | 96 |
| B18 | Run length | Short run lengths reduce queues | 130 |
| B20 | Batch content | Sequence first that which adds value most cheaply | 131 |
| W1 | WIP constraints | Constrain WIP to control cycle time and flow | 145 |
| W3 | Global constraints | Use global constraints for predictable and permanent bottlenecks | 147 |
| W6 | Demand blocking | Block all demand when WIP reaches its upper limit | 151 |
| W8 | Flexible requirements | Control WIP by shedding requirements | 152 |
| W19 | Adaptive WIP constraints | Adjust WIP constraints as capacity changes | 162 |
| F5 | Periodic resynchronization | Use a regular cadence to limit the accumulation of variance | 177 |
| F7 | Cadence reliability | Use cadence to make waiting times predictable | 179 |
| F9 | Cadenced meetings | Schedule frequent meetings using a predictable cadence | 180 |
| F18 | Local priority | Priorities are inherently local | 196 |
| FF10 | Agility I | We don’t need long planning horizons when we have a short turning radius | 222 |
| FF21 | Hurry-up-and-wait | Large queues make it hard to create urgency | 232 |
| D4 | Opportunity | Adjust the plan for unplanned obstacles and opportunities | 249 |
| D14 | Flexible plans | Use simple modular plans | 258 |

Small Releases

Make a lot of small releases.

| Ref | The principle of … | Summary | Page |
| --- | --- | --- | --- |
| Q2 | Queueing waste | Queues are the root cause of the majority of economic waste in product development | 56 |
| V8 | Repetition | Repetition reduces variation | 99 |
| B1-8 | Batch size | Reducing batch size reduces cycle time, variability in flow, risk and overhead and accelerates feedback, while large batches reduce efficiency, lower motivation and urgency and cause exponential cost and schedule growth | 112-117 |
| F8 | Cadenced batch size enabling | Use a regular cadence to enable small batch sizes | 179 |
| FF7 | Queue reduction by feedback | Fast feedback enables smaller queues | 220 |
| FF8 | Fast-learning | Use fast feedback to make learning faster and more efficient | 220 |
| FF11 | Batch size feedback | Small batches yield fast feedback | 223 |
| FF20 | Empowerment by feedback | Fast feedback gives a sense of control | 231 |
| FF21 | Hurry-up-and-wait | Large queues make it hard to create urgency | 232 |
| D23 | Trust | Trust is built through experience | 264 |

Customer Tests

The customer assists the programmers in writing automated use case tests.

| Ref | The principle of … | Summary | Page |
| --- | --- | --- | --- |
| V16 | Variability displacements | Move variability to the process stage where its cost is lowest | 107 |
| B17 | Proximity | Proximity enables small batch sizes | 129 |
| F30 | Flow conditioning | Reduce variability before a bottleneck | 208 |
| FF7 | Queue reduction by feedback | Fast feedback enables smaller queues | 220 |
| FF8 | Fast-learning | Use fast feedback to make learning faster and more efficient | 220 |
| FF11 | Batch size feedback | Small batches yield fast feedback | 223 |
| FF14 | Locality of feedback | Whenever possible, make the feedback local | 226 |
| FF19 | Colocation | Colocation improves almost all aspects of communication | 230 |
| FF20 | Empowerment by feedback | Fast feedback gives a sense of control | 231 |
| D8 | Mission | Specify the end state, its purpose and the minimum possible constraints | 252 |
| D16 | Early contact | Make early and meaningful contact with the problem | 259 |

Collective Code Ownership

Every programmer is responsible for evolving and maintaining all the source code, not just their own part.

| Ref | The principle of … | Summary | Page |
| --- | --- | --- | --- |
| Q7 | Queuing structure | Serve pooled demand with reliable high-capacity servers | 64 |
| W12 | T-Shaped resources | Develop people who are deep in one area and broad in many | 155 |
| F25 | Flexible resources | Use flexible resources to absorb variation | 202 |
| FF23 | Overlapping measurement | To align behaviors, reward people for the work of others | 233 |
| D1 | Perishability II | Decentralize control for problems and opportunities that age poorly | 246 |
| D4 | Virtual centralization | Be able to quickly reorganize decentralized resources to create centralized power | 250 |
| D5 | Inefficiency | The inefficiency of decentralization (as opposed to silos) can cost less than the value of faster response time | 251 |

Coding Standards

All programmers agree on coding conventions for all the source code they write.

| Ref | The principle of … | Summary | Page |
| --- | --- | --- | --- |
| W12 | T-Shaped resources | Develop people who are deep in one area and broad in many | 155 |
| F25 | Flexible resources | Use flexible resources to absorb variation | 202 |

Sustainable Pace

As the value created by knowledge work does not increase linearly with the time spent, it’s wiser to work a number of hours that maximizes the work done while still allowing the team to keep going forever if needed.

| Ref | The principle of … | Summary | Page |
| --- | --- | --- | --- |
| E5 | Inactivity | Watch the work product, not the worker | 33 |
| Q3 | Queueing capacity utilization | Capacity utilization increases queues exponentially | 59 |
| B9 | Batch size death spiral | Large batches lead to even larger batches | 118 |

Metaphor

Whether through an actual metaphor or a ubiquitous language, the idea is to build a shared, customer-oriented architecture and design of the system.

| Ref | The principle of … | Summary | Page |
| --- | --- | --- | --- |
| W12 | T-Shaped resources | Develop people who are deep in one area and broad in many | 155 |
| F25 | Flexible resources | Use flexible resources to absorb variation | 202 |

Continuous Integration

All the code of the whole team is merged, tested, packaged and deployed very frequently (many times per day).

| Ref | The principle of … | Summary | Page |
| --- | --- | --- | --- |
| Q2 | Queueing waste | Queues are the root cause of the majority of economic waste in product development | 56 |
| V8 | Repetition | Repetition reduces variation | 99 |
| B1-8 | Batch size | Reducing batch size reduces cycle time, variability in flow, risk and overhead and accelerates feedback, while large batches reduce efficiency, lower motivation and urgency and cause exponential cost and schedule growth | 112-117 |
| B12 | Low transaction cost | Reducing the transaction cost per batch lowers overall costs | 123 |
| B16 | Transport batches | The most important batch is the transport batch | 128 |
| B19 | Infrastructure | Good infrastructure enables small batches | 130 |
| F29 | Resource centralization | Correctly managed, centralized resources can reduce queues | 206 |
| FF7 | Queue reduction by feedback | Fast feedback enables smaller queues | 220 |
| FF11 | Batch size feedback | Small batches yield fast feedback | 223 |
| FF16 | Multiple control loops | Embed fast control loops inside slow loops | 228 |
| FF21 | Hurry-up-and-wait | Large queues make it hard to create urgency | 232 |

Test Driven Development

Programmers write failing tests (both customer and unit tests) before the actual production code.

| Ref | The principle of … | Summary | Page |
| --- | --- | --- | --- |
| V15 | Iteration speed | It is usually better to improve iteration speed than defect rate | 106 |
| V16 | Variability displacements | Move variability to the process stage where its cost is lowest | 107 |
| B1-8 | Batch size | Reducing batch size reduces cycle time, variability in flow, risk and overhead and accelerates feedback, while large batches reduce efficiency, lower motivation and urgency and cause exponential cost and schedule growth | 112-117 |
| F30 | Flow conditioning | Reduce variability before a bottleneck | 208 |
| FF7 | Queue reduction by feedback | Fast feedback enables smaller queues | 220 |
| FF8 | Fast-learning | Use fast feedback to make learning faster and more efficient | 220 |
| FF11 | Batch size feedback | Small batches yield fast feedback | 223 |
| FF14 | Locality of feedback | Whenever possible, make the feedback local | 226 |
| FF16 | Multiple control loops | Embed fast control loops inside slow loops | 228 |
| FF20 | Empowerment by feedback | Fast feedback gives a sense of control | 231 |

Refactoring

Programmers improve the design of the system continuously, in very frequent baby steps. This removes the need for a big design up front.

| Ref | The principle of … | Summary | Page |
| --- | --- | --- | --- |
| E13 | First decision rule | Use decision rules to decentralize economic control | 41 |
| V9 | Reuse | Reuse reduces variability | 100 |
| B9 | Batch size death spiral | Large batches lead to even larger batches | 118 |
| B19 | Infrastructure | Good infrastructure enables small batches | 130 |
| F28 | Preplanned flexibility | For fast responses, preplan and invest in flexibility | 205 |
| D12 | Agility II | Develop the ability to quickly shift focus | 255 |

Simple Design

Do the simplest thing that could possibly work. No need to write things that don’t add business value yet. (Note that simple does not mean easy)

| Ref | The principle of … | Summary | Page |
| --- | --- | --- | --- |
| E19 | Insurance | Don’t pay more for insurance than the expected loss | 49 |
| V12 | Variability consequences | Reducing consequences is usually the best way to reduce the cost of variability | 103 |
| B9 | Batch size death spiral | Large batches lead to even larger batches | 118 |
| B15 | Fluidity | Loose coupling between product subsystems enables small batches | 126 |
| W12 | T-Shaped resources | Develop people who are deep in one area and broad in many | 155 |
| F25 | Flexible resources | Use flexible resources to absorb variation | 202 |
| D12 | Agility II | Develop the ability to quickly shift focus | 255 |

Pair Programming

Programmers sit in pairs at the same computer to write code. One writes the code while the other comments. The keyboard changes hands very frequently.

| Ref | The principle of … | Summary | Page |
| --- | --- | --- | --- |
| B13 | Batch size diseconomies | Batch size reduction saves much more than you think | 124 |
| B21 | Batch size I | Reduce the batch size before you attack bottlenecks | 133 |
| W12 | T-Shaped resources | Develop people who are deep in one area and broad in many | 155 |
| W13 | Skill overlap | Cross-train resources at adjacent processes | 156 |
| F25 | Flexible resources | Use flexible resources to absorb variation | 202 |
| F30 | Flow conditioning | Reduce variability before a bottleneck | 208 |
| FF14 | Locality of feedback | Whenever possible, make the feedback local | 226 |
| FF16 | Multiple control loops | Embed fast control loops inside slow loops | 228 |
| FF19 | Colocation | Colocation improves almost all aspects of communication | 230 |
| FF20 | Empowerment by feedback | Fast feedback gives a sense of control | 231 |
| D13 | Peer-level coordination | Tactical coordination should be local | 257 |
| D22 | Face-to-face communication | Exploit the speed and bandwidth of face-to-face communications | 263 |

Spikes

Programmers conduct time-boxed experiments to gain insights.

| Ref | The principle of … | Summary | Page |
| --- | --- | --- | --- |
| V2 | Asymmetric payoffs | Payoff asymmetries enable variability to create economic value | 88 |
| V7 | Small experiments | Many small experiments produce less variation than one big one | 98 |

Slack Time

Keep some buffer time at the end of the iteration where team members can either close the remaining stories or work on improvements.

| Ref | The principle of … | Summary | Page |
| --- | --- | --- | --- |
| V11 | Buffer | Buffers trade money for variability reduction | 101 |
| B9 | Batch size death spiral | Large batches lead to even larger batches | 118 |
| B19 | Infrastructure | Good infrastructure enables small batches | 130 |
| F6 | Cadence capacity margin | Provide sufficient capacity margin to enable cadence | 178 |
| D12 | Agility II | Develop the ability to quickly shift focus | 255 |
| D15 | Tactical reserves | Decentralize a portion of reserves | 258 |

Daily Stand Up Meeting

The whole team starts every working day with a quick synchronization meeting.

| Ref | The principle of … | Summary | Page |
| --- | --- | --- | --- |
| B3 | Batch size feedback | Reducing batch size accelerates feedback | 113 |
| W12 | T-Shaped resources | Develop people who are deep in one area and broad in many | 155 |
| W20 | Expansion control | Prevent uncontrolled expansion of work | 163 |
| F5 | Periodic resynchronization | Use a regular cadence to limit the accumulation of variance | 177 |
| F9 | Cadenced meetings | Schedule frequent meetings using a predictable cadence | 180 |

Retrospective meeting

At the end of every iteration, the team meets for a retrospective, discussing what they did in order to improve.

| Ref | The principle of … | Summary | Page |
| --- | --- | --- | --- |
| B9 | Batch size death spiral | Large batches lead to even larger batches | 118 |
| B19 | Infrastructure | Good infrastructure enables small batches | 130 |
| F9 | Cadenced meetings | Schedule frequent meetings using a predictable cadence | 180 |
| FF8 | Fast-learning | Use fast feedback to make learning faster and more efficient | 220 |
| FF20 | Empowerment by feedback | Fast feedback gives a sense of control | 231 |
| D21 | Regenerative initiative | Cultivating initiative enables us to use initiative | 263 |

Demos

At the end of every iteration, the team demonstrates what it did to the customer.

| Ref | The principle of … | Summary | Page |
| --- | --- | --- | --- |
| E14 | Market I | Ensure decision makers feel both cost and benefit | 42 |
| B3 | Batch size feedback | Reducing batch size accelerates feedback | 113 |
| B9 | Batch size death spiral | Large batches lead to even larger batches | 118 |
| F9 | Cadenced meetings | Schedule frequent meetings using a predictable cadence | 180 |
| FF7 | Queue reduction by feedback | Fast feedback enables smaller queues | 220 |
| FF8 | Fast-learning | Use fast feedback to make learning faster and more efficient | 220 |
| FF20 | Empowerment by feedback | Fast feedback gives a sense of control | 231 |
| FF21 | Hurry-up-and-wait | Large queues make it hard to create urgency | 232 |
| FF23 | Overlapping measurement | To align behaviors, reward people for the work of others | 233 |
| D23 | Trust | Trust is built through experience | 264 |

Visual Whiteboard

Display the stories of the current sprint on the wall, on a whiteboard with 3 columns (TODO, DOING, DONE).

| Ref | The principle of … | Summary | Page |
| --- | --- | --- | --- |
| W23 | Visual WIP | Make WIP continuously visible | 166 |
| F27 | Local transparency | Make tasks and resources reciprocally visible at adjacent processes | 204 |
| D17 | Decentralized information | For decentralized decisions, disseminate key information widely | 260 |

Conclusion

Whaoo, that’s a lot! I did not expect to find so many principles underlying XP (I even removed principles that were not self-explanatory). For the XP practitioner that I am, writing this blog post helped deepen my understanding of it. As XPers know, XP is quite opinionated; that is both a strength and a weakness if you try to apply it outside of its comfort zone. This explains why some lean subjects are simply not addressed by XP.

To summarize, here is where XP hits the ground :

  • In spite of its image as ‘a process for nerdy programmers’, XP turns out to be a quite evolved lean method!
  • XP annihilates batch size and feedback time
  • Pair programming is well explained

And here is where to look when you need to upgrade XP:

  • Better tradeoffs might be found with a real quantitative economic framework
  • Synchronization principles might help when working with other teams

Kent Beck could not have read the Flow book when he invented XP; it seems he was just a bit ahead of the rest of us …

This was part 2 of my series of articles about Lean Software Development. Part 1 was The Flow book summary, Part 3 will be How to measure your speed with your business value ?

The Flow Book Summary (Lean Software Development Part 1)

A few weeks ago, I read The principles of product development flow from Donald G. Reinertsen.

I read it both for work and for my side projects, and I think it will be useful for both. The book is about lean product development, and is in fact a collection of 175 lean principles that one can study and understand in order to make better decisions when developing new products. The principles are divided into the following 8 categories:

  1. Economics
  2. Queues
  3. Variability
  4. Batch size
  5. WIP constraints
  6. Cadence, synchronization and flow control
  7. Fast feedback
  8. Decentralized control

I really loved the book. I have not been thrilled like that by a book since Kent Beck’s 1st edition of Extreme Programming Explained. Where Kent Beck described values, principles and practices that work, D.G. Reinertsen has the ambition to help us quantify these practices in order to move from belief-based to fact-based decisions. For this, he gives us the keys to creating an economic framework with which we should be able to convert any change option into its economic cost.

The cover of the book "Extreme Programming Explained"

Lately, I’ve been thinking about an economic framework of my own that I could use on the projects I am currently involved in. This post is the first of a series about it:

  1. The Flow book summary
  2. Why eXtreme Programming works ?
  3. How to measure your speed with your business value ?
  4. Measure the business value of your spikes and take high payoff risks
  5. Measure the value of the lean startup ‘learning’
  6. Prioritizing technical improvements
  7. Summing it up for my next side project

The next part will explain the XP practices through the lean principles. Stay tuned.

The Holy Code Antipattern

As I’ve encountered this situation in different disguises at different companies, I now assume it’s a widely applied antipattern.

Context

A team of programmers inherits a piece of code from one of their bosses. They find it really difficult to maintain: it is hard to understand, fix and change.

The Antipattern

As this piece of code seems too complex to be maintained by a team of simple programmers, as the boss, just forbid them:

  • to refactor any part of it
  • to rewrite it from scratch
  • to use something else instead

Consequences

  • This often limits the number of bugs that appear in this library, but …
  • It slows down development, because of the micro-management required to enforce this pattern
  • It frustrates programmers, and it is likely that the best ones will leave
  • It prevents better designs
  • Even worse, in the long run, it prevents great domain driven design from emerging through merciless refactoring
  • In the end, it makes the whole organization perform worse

Examples

  • Your boss wrote something a few years ago; as the domain is somewhat complex, the resulting code is complicated. The subject eventually got the reputation of being ‘touchy’. Your boss is the only person who effectively manages to change anything in there. He’s a bit afraid that by trying to improve it, the whole thing might just break down and become a bug nest. So, now that he has some authority, he forbids anyone to touch it. If a change is finally required, he’ll micro-manage it!

  • Your big boss spent some overtime writing an uber-meta-generic-engine to solve the universe and everything. After seeing many developers fix the same kinds of bugs over and over, he decides it’s time to dust off his compiler and start building something that will solve the root cause of them all. In the spirit of the second-system effect, he adds all the bells and whistles to his beloved project, trying to incorporate a solution to every issue he has seen during the last decade. This code grows and grows in total isolation from any real working software. When he eventually thinks it is ready, he just drops the whole thing on your team, which is now responsible for integrating and using it in the running system. He micro-manages the whole thing, and you don’t have any choice but to comply and succeed. This usually generates gazillions of bugs, makes projects really late and ruins the developers’ lives.

Alternatives

  • Use collective code ownership so that knowledge about the code is shared by design
  • Trust programmers to design and architect the system
  • Use constant refactoring to let tailor-made domain-driven designs emerge from the system