How We Used the Improvement Kata to Gain 25% of Productivity - Part 3

This is the third post in a series of 5 about the improvement kata. If you haven’t read the beginning of the story, I recommend you start from part 1.

In the previous post, I explained how we started to understand what was going on. We were now questioning our way of handling bugs.

Are we spending too much time on bugs ?

Bugs drawn on top of code

More understanding

Types of tasks

To answer this question, we decided to plot the different types of tasks we had completed per sprint.

Bar chart with the types of tasks over sprints

Think again of the velocity curve we started with. We see an almost exact correlation between story count (green bars above) and story points (blue curve below).

💡#NoEstimates works

Velocity graph
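
If you want to check such a correlation on your own data, it boils down to one coefficient. Here is a minimal Python sketch, with made-up per-sprint numbers standing in for real sprint reports :

```python
import numpy as np

# Hypothetical per-sprint data : stories completed (the green bars)
# and story points delivered (the blue curve).
story_counts = np.array([12, 15, 9, 14, 11, 13, 10, 16])
story_points = np.array([25, 32, 18, 30, 22, 27, 20, 34])

# Pearson correlation : a value close to 1 means story count tracks
# story points almost perfectly, so counting stories is enough.
r = np.corrcoef(story_counts, story_points)[0, 1]
print(f"correlation between story count and story points : {r:.2f}")
```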

We can also see that after sprint 56, we were spending more time on bugs and improvements. Improvements are supposed to improve productivity, so we decided to focus on bugs first. Here is what we get if we compute the average number of bugs per sprint :

Periods                  Sprints   Bugs   Average bugs fixed per sprint
2015, before sprint 56        15     21                             1.4
After sprint 56               34    210                             6.1

Starting at sprint 56, we were fixing more than 4 times as many bugs as before !

What is going on with bugs ?

At this point, we felt we’d made a great step forward in our understanding. We almost thought we were done with it …

After a bit of thinking though, it was clear that we were not ! We still needed to understand why we were in this situation.

We started by listing more questions :

  • Could it be that we just got a lot better at testing ? Since sprint 56, we had been doing regular exploratory testing. Exploratory testing sessions were very effective at finding bugs.
  • Were we paying back a bug debt ? The created versus resolved trend seemed to suggest so. But it could also be that we weren’t testing as well as we used to !

Created vs Resolved Bugs graph

  • If we were paying back a bug debt, how close were we to the end of the payback ?
  • Were we creating too many flaws in the software ?
  • Were we fixing too many bugs ? If so, what should we do to fix fewer ?
  • Were the bugs coming from other teams using our component or from our own testing ?
  • Were bugs in new or old code ?

A lot of questions, all difficult to answer. We decided to first see if we were paying back a bug debt. If this was the case, most other questions would become more or less irrelevant. With a bit of thinking, we came up with a measure to get the answer.

Are we paying back a bug debt ?

We first started to do exploratory testing at sprint 56. To do this, we would run a 1-hour session, where the pair finding the most bugs would win fruit. (Later on, we streamlined exploratory testing as part of the workflow for every story.) At that time, we used to find more than 10 bugs in 1 hour.

💡Gamification transforms nice developers into berserk testers !

Exploratory testing session (sprint)   61   62   63   64   66   16.01
Bugs found                             16    6   16   10   11      11

We would run another such session. If we found significantly fewer than 10 bugs, let’s say fewer than 6, it would mean that :

  • we improved the quality of our software
  • our streamlining of exploratory testing works
  • if we continue to search and fix bugs as we do currently, we’ll reach a point where we won’t find any more bugs

Otherwise, none of these would hold, and we’d have to continue our investigations.

So we did a 1-hour, fruit-powered exploratory testing session. And we found only 3 bugs ! Indeed, we were paying back a bug debt. The question became

When should payback be over ?

A linear regression on the created vs resolved bug trend showed that we still had 15 more months to go !

Bug trend graph
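
For the curious, here is roughly how such a projection works, as a minimal Python sketch with made-up numbers (not our actual data) : fit a line to each cumulative trend, and solve for the sprint where the resolved line catches up with the created one.

```python
import numpy as np

# created[i] and resolved[i] are hypothetical cumulative bug counts at sprint i.
sprints = np.arange(20)
created = 5.0 * sprints + 120    # bugs keep being reported...
resolved = 7.5 * sprints + 40    # ...but they get resolved faster

# Linear regression on each trend (degree-1 polynomial fit).
slope_c, intercept_c = np.polyfit(sprints, created, 1)
slope_r, intercept_r = np.polyfit(sprints, resolved, 1)

# Payback is over when the two fitted lines cross.
crossing = (intercept_c - intercept_r) / (slope_r - slope_c)
print(f"around {crossing - sprints[-1]:.0f} sprints of payback left")
```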

Target condition

At that point, the target condition became obvious :

We’d like to be done with bugs within 3 months.

At that time, around 1 pair (25% of the team) was busy fixing bugs. If we managed to bring this down, we’d get a 25% productivity boost.

This was post 3 in a series of 5 about the improvement kata. Next post will be about PDCA.

How We Used the Improvement Kata to Gain 25% of Productivity - Part 2

In my previous post, I described the productivity issue our team was facing, how retrospectives did not help, and how I started looking at the improvement kata.

We had gone through the first phase of the improvement kata : set the end goal.

Generating enough profit for the business while sticking to a sustainable pace.

Time to start the second phase : Understand.

Drawing of a question mark transforming into a light bulb

Understand

Were we really slower ? Or was it an illusion ?

When trying to understand, you have to start with the data you have. You continue digging until you get a deeper understanding of the situation.

Velocity

We started with available data : story points and velocity. For sure this is a pretty bad measure of productivity. (Note : we should never use velocity for performance appraisal) In our case though, it felt like a good starting proxy measure.

Here is our velocity curve over 2 years.

Velocity graph

It’s obvious that something changed. There are 2 parts to this graph. The velocity dropped between sprint 54 and 16.01. That’s a clue that our gut feeling was not completely wrong. Our productivity did change.

Man days

Our first hypothesis was that team member turnover was the cause. As with any team, some people came, and some people left. Let’s superimpose the man days and velocity curves.

Velocity vs Manpower graph

That could only explain part of the problem !

We tried to fine-tune the man days curve. We took people’s involvement in tasks outside of programming into account. We used coefficients depending on the developers’ experience. That did not provide a better explanation.

We had to find another explanation.

Velocity computation

As I said earlier, velocity is not a measure of productivity. Any change in the way we were computing velocity would impact this curve.

We had kept photos and Trello boards of our retrospective meetings. We searched through them for anything that could have impacted velocity. Here is what we found :

  • At sprint 55, we decided to ditch the focus-factor
  • At sprint 61, we started to do regular exploratory testing. Exploratory testing discovers more bugs on top of user-reported bugs. This made us spend more time on fixing bugs.
  • At sprint 62, as we opted for a No Bug policy, we decided not to count story points for bugs

💡Keep photos and Trello boards of retrospectives as a log of your working agreement changes

The timings almost perfectly matched what we had observed in the first place. The question that came to our minds was :

Are we spending too much time on bugs ?

Halfway through understanding

This is how we started to dig into our situation. It’s a good time to give you a bit of feedback about how we felt at that point.

It was the first time we tried the improvement kata. More than that, we did not find any tutorial or guide about how to run it. The only instructions we had were either theoretical descriptions or very situation-specific examples. We had to bridge the gap and come up with our own way.

To summarize, we felt a bit lost, we had gathered data from here and there, and we did not know what to look at next. On top of that, the quality of the data we were collecting was not great. We were wondering if we would get anything out of these investigations.

The cover of the book 'The First 20 Hours'

It felt a bit like when I did the 20 hours experiment to learn anything. We did exactly what had worked with the learning experiment : we pushed through !

💡If you feel lost when doing something for the first time, push through !

In next week’s post, I’ll continue to detail the ‘understand’ phase. The series also gained an extra post, and will now be 5 posts long.

More to read next week.

How We Used the Improvement Kata to Gain 25% of Productivity - Part 1

If you are serious about continuous improvement, you should learn the improvement kata.

Retrospectives are great for picking all the low-hanging improvements. Once you’ve caught up with the industry’s best practices though, retrospectives risk drying up. Untapped improvement opportunities likely still exist in your specific context. The improvement kata can find those.

Low and high hanging fruits on a tree

Here is how we applied the improvement kata to gain 25% of productivity in my previous team.

The Situation

Thanks to repeated retrospectives, the team had been improving for 2 years. Retrospectives seemed like a silver bullet. We would discuss the current problems, grasp an underlying cause and pick a best practice. Most of the time, that fixed the problem.

Sometimes it did not work though. Even if an issue came back in a later retrospective, it would not survive a second round of scrutiny. In the previous two years, the team had transformed the way it worked. It adopted TDD, incremental refactoring, pair programming, remote work, automated performance testing and many others.

Lately though, things did not work so well. The team was struggling with productivity issues. The team was not slowing down, but the scope and complexity of the product had grown. Features were not getting out of the door as fast as they used to. We had the habit of prioritizing improvements and bug fixes over features. That used to improve the flow enough to get more and more features done. It did not seem to work anymore.

We tried to tackle the issue in retrospectives. We would change the way we prioritized features … only to get bogged down later by bugs, technical debt or bad tools. We would discuss that in retrospective, and decide to change the priorities again … The loop went on like this a few times.

We were getting nowhere.

The improvement kata 

That’s why I started to look for other ways to do continuous improvement. I stumbled upon a book called Small Lean Management Guide for Agile Teams. The book is in French, but I wrote an English review. I fell in love with the way the authors dug into the hard data of how they worked to understand and fix their problems.

To learn more about this technique, I read Toyota Kata. It details two management tools used at Toyota : the improvement and coaching katas. Some say these are Toyota’s special weapon, the thing that grew it from a small shop into the largest car manufacturer in the world.

They are katas because they are routines. They must be re-executed many times. The improvement kata should improve the flow of work. The coaching kata helps someone (or yourself) to learn the improvement kata. Every time we go through the kata, we also understand it better.

Here is how the improvement kata goes :

  1. Describe your end goal
  2. Understand where you stand about this goal by measuring facts and data
  3. Based on your end goal and the current situation, define where you’d like to be in 3 months or less
  4. Use Plan-Do-Check-Act to experiment your way to this new situation
    1. Plan an experiment
    2. Do this experiment
    3. Check the results of this experiment
    4. Act on these results. 
      • Either drop the experiment and plan a new one (go back to ‘Plan’).
      • Or spread the change at a larger scale.

The Improvement Kata Pattern

Image from Mike Rother on Toyota Kata Website

The coaching kata is a way to coach someone into applying the improvement kata. The fun thing is that you can coach yourself ! The idea is to ask the coachee questions that remind them of where they stand in their improvement kata.

The Coaching Kata Questions

You’ll find tons of details and material about these katas on the Toyota Kata website.

Our end goal

That’s how I started to apply the improvement kata in my team. I already had an idea of our end goal : to be more productive. To be more precise :

Generating enough profit for the business while sticking to a sustainable pace.

Retrospectives could not get us there. Would the improvement kata succeed ?

This is the first part of a series of 4 posts relating our first use of the improvement kata. In the next post, I’ll explain what we did to understand the current situation.

Throwing Code Away Frequently

Here is the main feedback I got about my previous post eXtreme eXtreme Programming.

What do you actually mean by throwing code away ? Does it mean stopping unit testing and refactoring ?

A drawing of a shredder destroying some code

So I guess it deserves a bit of explanation.

What is it all about ?

When programming, I don’t throw code away a lot. I tend to rely on my automated tests to change the code I already have. That might be a problem.

As with everything, there is no one size fits all. We should choose the best practice for our current situation. The same applies to incremental changes versus rewriting.

TDD makes incremental changes cost effective, and as such, is a key to get out of the Waterfall.

The idea of throwing code away frequently is to make rewriting cost effective, so we can do it more often.

Why would we want to do so ?

In “When Understanding means Rewriting”, Jeff Atwood explains that reading code can be more difficult than writing it. There is a point where rewriting is going to be cheaper than changing.

The more unit tests you have, the later you reach that point. The more technical debt you take on, the sooner you reach it. The bigger the part to rewrite, the riskier it becomes.

Let’s imagine you knew a safe way to rewrite the module you are working on. You could be faster by taking on more technical debt and writing fewer unit tests ! Mike Cavaliere framed it as
F**k Going Green: Throw Away Your Code.

This would be great for new features, which might be removed if they don’t bring any value. It would also be a good way to get rid of technical debt. Naresh Jain also makes the point that without tests, you’ll have to keep things simple (here and here) !

Wait a minute, isn’t that cowboy coding ?

How to make it work

A graph with all the practices from my previous article eXtreme eXtreme Programming

How to ensure quality without unit tests ?

TDD and unit testing are a cornerstone of XP. If we remove them, we need something else to build quality in. Could Mob Programming and Remote teams do the trick ?

“Given enough eyeballs, all bugs are shallow”. Pair programming and code reviews catch a lot more bugs than solo programming. Only a few bugs are likely to pass through the scrutiny of the whole team doing mob programming.

What about Remote ? Martin Fowler explains that remote teams perform better by hiring the best. Programmer skill has long been known as a main driver of software quality.

People vs methodology impact on productivity

Photo from Steve McConnell on Construx

Finally, the Cucumber team reported that Mob Programming works well for remote teams.

How to make this safer ?

Even with the best team, mistakes will happen. How can we avoid pushing a rewrite that crashes the whole system ?

The main answer to that is good continuous deployment. We should deploy to a subset of users first. We should be able to roll back straight away if things don’t work as expected.

As the system grows, microservices can keep continuous deployment under control. We can deploy, update and roll them back independently. By nature, microservices also reduce the scope of the rewrite. That alone, as we said earlier, makes rewrites safer.

As a last point, some technologies make building microservice systems easier and incremental. The Erlang VM, for example, with first-class actors, is one of these. Time to give Erlang and Elixir a try !

Is this always a good idea ?

There are good and bad situations.

For example, a lean startup or data driven environment seems a good fit. Suppose your team measures the adoption of new features before deciding to keep or ditch them. You’d better not invest in unit tests straight away.

On the other hand, software for a complex domain will be very difficult to rewrite flawlessly. I have worked in the finance industry for some years, so I know what a complex domain is. I doubt I could rewrite a piece of intricate finance logic without bugs. I would stick to DDD and unit testing in these areas.

How to dip your toe

Here is how I intend to dip my toe. I won’t drop automated testing completely yet. What I could do instead (and what I already did on side projects) is to start with end to end tests only.

From there, every time I want to change a piece of code, I have many options :

  • Add a new end to end test and change the code.
  • If the code is too much of a mess, I can rewrite it from scratch. I’ll have the safety of the end to end tests.
  • If I see that the module is stabilizing, has not been rewritten for a while, and yields well to modifications, I could start splitting end to end tests into unit tests, and embark on the TDD road.

Maybe later, when I have a team doing mob programming to find my mistakes, we’ll skip the end to end tests.


eXtreme eXtreme Programming (2017)

What would XP look like if it was invented today ?

A few days ago, I stumbled upon these two talks that got me thinking about this question.

So I looked deeper into the question, while sticking to the same values and principles. Here is what I came up with.

Practices

Continuous Deployment

Why should we only deliver at every iteration in 2017 ? Lots of companies have demonstrated how to deploy every commit safely to production. Amazon, for example, deploys new software every second !

DevOps++

As a team starts doing continuous deployment, devs get more and more involved in ops. This privileged view on the users’ behaviour can turn devs into product experts. Why not push the logic further and make them the product guys as well ?

Test in prod

Continuous deployment opens up many opportunities. Deploying safely requires bulletproof rollbacks. Once devs take care of product, code and ops, they can shortcut testing and directly test in prod with some users ! Rollback remains an option at any time.

#NoBugs

That might seem like wishful thinking. The idea is to fix bugs as soon as they appear, but also to prevent them in the first place. Continuous Deployment requires great engineering practices, which enable this way of working. A story cannot be “finished” until test in prod is over, and bugs fixed.

Kanban

At its core, Kanban is a way to limit the work in progress. It’s almost a side effect of the previous practices. #NoBugs almost kills interruptions for rework. On top of that, devs have full control over their end to end work, so why would they multitask ?

#NoBacklog

In Getting Real, the Basecamp guys said that their default answer to any new feature request was “No”. Only after the same thing got asked many times would they start thinking of it. Maintaining a long backlog is a waste of time. Dumping all backlog items but the current ones saves time. Important ideas will always come up again later.

#NoEstimates

This one is famous already. Some teams have saved time by using story count instead of story points. What’s the point anyway if the team is already :

  • working as best as it can
  • on the most important thing
  • and deploying as soon as possible

Data Driven

This is how to drive the system’s evolution. Instead of relying on projects, backlogs and predictions, use data. Data from user logs and interviews proves whether a new feature is worth building or dumping. Exploring logs of internal tools can help to continuously improve.

Lean startup & Hackathons

Incremental improvements, in either product or organization, are not enough. As Tim Harford explained in Adapt, survival requires testing completely new ideas. Lean startup & hackathon experiments can do just that.

The cover of the Adapt book

Improvement kata

Improvement kata is a way to drive long term continuous improvement. It’s the main tool for that at Toyota (read Toyota Kata to learn more). It provides more time to think of the problem than a retrospective. It also fits perfectly in a data driven environment.

Mob programming

Pair programming is great for code quality and knowledge sharing. Mob programming is the more extreme version of pairing where the whole team codes together.

Throw code away frequently

An alternative to refactoring with unit tests is to throw away and rewrite once the code gets too bad. Companies have been doing that for years. I worked at a bank that used to throw away and rewrite individual apps that were part of a larger system. It can be a huge waste of money if these subsystems are too large. Scaling down to individual classes or microservices could make this cost effective.

Remote

With access to a wider pool of talent, remote teams usually perform better. It also makes everybody’s life easier. Finally, teams have reported that mob & remote programming work great together.

Afterthought

What’s striking from this list is that it’s not that far from the original XP ! For example, continuous deployment and generalists have always been part of it. Another point is that it is not science fiction ! I found many teams reporting success with these practices on the internet ! A well-oiled XP team might very well get there through continuous improvement.

The more I look at it, the more XP stands as a unique lighthouse in the foggy Agile landscape.

As for me, I’m not sure I’d dare to swap TDD & refactoring for throwaway & rewrite. I’m particularly worried about the complex, domain-specific heart of systems. Nothing prevents us from using both strategies for different modules though.

I should give it a try on a side project with a microservice friendly language, such as Erlang or Elixir.

5 Remote Energizer Tips That Will Make Your Remote Retrospectives Rock

Do you remember how people who are not used to the phone tend to shout into it, as if the message had to travel far ? Read on and I’ll explain how this silly habit will make your remote retrospectives great !

A typical retrospective starts with an energizing activity, or energizer. It’s important for two reasons. First, people who don’t speak during the first 5 minutes of a meeting are more likely to remain silent until the end. Second, getting everyone to do an energizing and fun activity sets the tone for a peaceful and creative time.

Remote control + magnifying glass

Our experiences with remote energizers

When we started to do retrospectives at work, the whole team was co-located in Paris. There are tons of activities available on the internet to run effective energizers. We could do games like Fizz Buzz, or drawing-based activities like face drawing and visual phone. It was easy and fun.

A few years ago, Ahmad Atwi joined our team from Beirut. Our catalog of energizers shrank to the few activities that we could run remotely. On top of that, going through the remote medium made it more challenging for energizers to … energize ! With time and trial, we managed to understand what works and how to pick the right energizer for a remote team.

Principles for remote energizers

We have an Agile Special Interest Group at Murex, where volunteers meet to share things they find interesting. A few weeks ago, during one of these sessions, we discussed remote energizers in a Lean Coffee.

Here are the points we came up with.

The Agile Retrospectives, making good teams great book cover

  • If there are enough teammates at every location, energizers that are played in small groups will work well. For example, it would be easy to organize a balloon battle or a back to back.


  • It’s also easy to use variations on an activity that proved effective. For example, explore new kinds of questions. It’s even ok to repeat an activity verbatim from time to time.
  • Replace energizing with team building. Team building is particularly important for remote teams. Instead of engaging activities, it’s ok to have everyone share a personal anecdote. By knowing each other better, the team can build trust. For example, you could introduce such an activity with : “What book would you bring to a desert island ? Why ?”
  • One last thing my colleague Morgan Kobeissi and I came up with to energize a remote meeting is to YELL. The idea is to ask everyone to answer a question while standing and yelling. A question could be “How long have you been working and what companies did you work for ?”

Someone yelling in a kid's 'can-phone'

Remote work is here to stay. More and more teams are facing similar difficulties. We need to invent new work practices. If you have discovered new ways to run remote energizers, please share them in a comment.

Forget Unit Tests, Only Fast Tests Matter

Don’t worry if your unit tests go to the DB, that might not be so bad.

When I started writing unit tests, I did not know what these were. I read the definition, and strived to follow the recommendations :

  • they should be independent from each other
  • they should not access the DB
  • they should not use the network
  • they should only cover a small scope of your code

I started to write unit tests on my own and became test infected pretty fast. Once I got convinced of the benefits of unit testing, I tried to spread the practice around me. I used to explain to people that it is very important to write real unit tests by the book. Otherwise, Bad Things would happen …

How I changed my mind

A while back, I spent a few years working on a Rails side project called mes-courses.fr. I was using a small test gem to enforce that no unit tests were accessing the DB. I had to write a lot of mocks around the code. I ended up hating mocks : they are too painful to maintain and provide a false sense of security. I’m not alone in this camp : check DHH’s keynote at RailsConf 2014.

At some point, the mock pain got so bad that I stopped all developments until I found another way. I found a pretty simple workaround : use in-memory SQLite. I got rid of all the DB access mocks. Not only were the tests easier to write and maintain, but they were as fast as before, and they covered more code.
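
My project was Rails, so the switch happened in the test database configuration. For readers on other stacks, here is the same idea as a minimal Python sketch with a hypothetical schema : every test opens its own throwaway in-memory database, so tests stay fast and independent without a single mock.

```python
import sqlite3
import unittest

class CartTest(unittest.TestCase):
    def setUp(self):
        # ":memory:" gives each test its own private, throwaway database.
        self.db = sqlite3.connect(":memory:")
        self.db.execute("CREATE TABLE items (name TEXT, price REAL)")

    def tearDown(self):
        self.db.close()

    def test_total_price(self):
        self.db.executemany("INSERT INTO items VALUES (?, ?)",
                            [("milk", 1.2), ("bread", 2.3)])
        (total,) = self.db.execute("SELECT SUM(price) FROM items").fetchone()
        self.assertAlmostEqual(total, 3.5)

if __name__ == "__main__":
    unittest.main()
```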

That changed something fundamental in my understanding of testing.

It’s all about speed baby

The only thing that makes unit tests so important is that they run fast.

Unit tests as described in the literature run fast. Let’s see what happens when you remove one of the recommendations for unit tests.

  • If tests depend on each other, their outcome will change with the execution order. This wastes our time in analyzing the results. On top of that, independent unit tests are easy to run in parallel, providing an extra speedup. We lose this potential when our tests are dependent.
  • Tests that rely on an out-of-process DB run slower. Tests need to start the DB before anything else. Data needs to be setup and cleaned at every test. Accessing the DB implies using the network, which takes time as well. There’s also a risk of making the tests dependent by sharing the same DB. A last issue is troubleshooting the DB process when things don’t work.
  • Tests that use the network are slow too ! First, the network is slower than memory. Second, data serialization between processes is slow as well. Finally, these tests are likely to use some form of sleep or polling, which is slow, fragile, or both !
  • Finally, there is always a scope past which a test will be too slow.

This means not only that unit tests are fast, but also that fast tests usually exhibit the features of unit tests.

My guess is that ‘unit tests’ were explicitly defined as a recipe for fast tests ! If you stick to the definition of unit tests, you’ll get fast tests and all their benefits.

A speedometer

Fast tests

That also means that we should focus first on having fast tests rather than unit tests. Here is my real check to know if tests are fast enough :

  • Does the build (tests and everything included) run in less than 10 minutes ?
  • Can I continuously run my tests while coding and stay in the flow ?

If both answers are yes, then I won’t question myself too much whether my tests are unit, integration or end to end.

So what ?

I’ve been experimenting with these heuristics for some time. Side projects are great for experimenting since you don’t have a team to convince ! Here are my main takeaways :

  • Stick to end to end tests at the beginning of your project. They are easy to refactor to finer grained tests later on.
  • In-memory DBs are great to speed tests up without wasting your time with mocking. We can use a unique DB for every test to keep them independent.
  • Large scope tests are not an issue provided 2 things.
    1. The code contains very few side effects.
    2. It provides good exceptions and assertions messages

On the other side, there are things that I still recommend :

  • Independent tests are easy to write from the beginning, difficult to fix later on. As they save a lot of headaches in diagnosis, I stick to them from the start.
  • Avoid the network : it makes the tests slow, fragile and tricky to diagnose. But please, read this before jumping to mocks.

These rules have served me well, particularly in my side projects, where I don’t have a lot of time. What about you ? Do you have your own testing rules ?

10 Things to Know That Will Make You Great at Refactoring Legacy Code

We write tons of legacy code every day. Experienced developers understand that legacy code is not something special. Legacy code is our daily bread and butter.

Should we abandon all hope as we enter legacy code ? Would that be professional ? In the end, code is only a bunch of bytes, somewhere on a drive. We are the software professionals. We need to deal with that.

Keep Calm and Take The Power Back

1. Master non legacy refactoring first

Please calm down before this “Bring ‘em out” energy goes to your head.

I did not say that refactoring legacy code is easy. Legacy code can bite … bad. I’ve been in teams which literally spent nights fixing a bad refactoring gone to production …

Before you can refactor legacy code, you need to be good at refactoring new code. We all learned to swim in the shallow pool ; it’s the same with refactoring. Mastering green code refactoring will help you when tackling legacy code.

First, you’ll know the ideal you’d like to get to. Knowing how productive a fast feedback loop is will motivate you to keep on refactoring.

Second, you’ll have a better idea of the baby steps to take you through a tricky refactoring.

If you are not yet at ease with greenfield refactoring, have a look at my previous post.

2. Understand that refactoring legacy code is different

The next thing to remember is that refactoring legacy code is different. Let’s assume Michael Feathers’s definition of legacy code : “Code without tests”. Getting rid of legacy code means adding automated tests.

Unfortunately, trying to force push unit tests into legacy code usually results in a mess. It introduces lots of artificial mocks in a meaningless design. It also creates brittle and unmaintainable tests. More harm than good. This might be an intermediate step, but it is usually not the quickest way to master your legacy code beast.

Here are alternatives I prefer.

3. Divide and conquer

This is the most straightforward way to deal with legacy code. It’s an iterative process to repeat until you get things under control. Here is how it goes :

(1) Rely on the tests you have, (2) to refactor enough, (3) to test sub-parts in isolation. (4) Repeat until you are happy with the speed of the feedback loop.

Depending on the initial state of your tests, this might take more or less time. Your first tests might even be manual. This is the bulldozer of refactoring. Very effective, but slow.

Bulldozer

4. Pair or mob program

Given enough eyeballs, all bugs are shallow.

Linus’s Law

Changing legacy code is a lot easier when you team up. First, it creates a motivating “we’re all in this together” mindset. Second, it guards us against silly mistakes.

Mob programming might seem very expensive, so let me explain why it is not. Suppose you want to introduce some tests in a tricky section of code.

With mob programming, all the team gathers for half a day to work on this change. Together, they find and avoid most of the pitfalls. They commit a high quality change, which creates only one bug down the road.

Let’s see the alternative.

Using solo programming, a poor programmer tries to tackle the change all by himself. He spends a few days understanding and double-checking all the traps he can think of. Finally, he commits his change, which results in many bugs later on. Every time a bug pops up, it interrupts someone to fix it ASAP.

The savings in interruptions are greater than the up-front cost of mob or pair programming.

5. Seams

A software seam is a place where you can alter behavior in your program without editing in that place.

Michael Feathers

This is one of the many interesting things I learned from Michael’s book about legacy code.

Cover of Working Effectively with Legacy Code

Object polymorphism is only one kind of seam. Depending on your language, many other types of seams can be available. 

  • Type seam for generic languages
  • Static link seam for static libraries
  • Dynamic link seam for dynamic libraries

Finding seams in your program is opportunistic. Keep in mind though that testing through seams is not the end goal. It is only a step to bootstrap the test-refactor loop and start your refactoring journey.

6. Mikado Method

How do you get to your end goal then ? How do you refactor only what’s useful for your features ? How do you do large refactorings in baby steps ?

Over time, I found that the mikado method is a good answer to all these issues. The goal of the Mikado Method is to build a graph of dependent refactorings. You can then use it to perform all these refactorings one by one. Here is the mikado method by the book.

Before anything else, you’ll need a large sheet of paper to draw the graph. Then repeat the following :

  1. try to do the change you want
  2. If it builds and the tests pass, great, commit and you’re done
  3. Otherwise, add a node for the change you wanted to do in your mikado graph
  4. Write down the compilation and test errors 
  5. Revert your change
  6. Recurse from 1 for every compilation or test error
  7. Draw a dependency arrow from the nodes of errors to the node of your initial change

Once you have built the full graph, tackle the refactorings from the leaves. As leaves have no dependencies, it should be easy to do and commit them.

A Sample Mikado Graph
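
To make the “tackle from the leaves” step concrete, here is a toy Python sketch with invented refactoring names. Each node lists the refactorings it depends on ; we repeatedly pick a node whose prerequisites are all done, perform it, and commit :

```python
# Each refactoring maps to the refactorings it depends on (invented names).
graph = {
    "extract PriceCalculator":    [],
    "inject clock into Invoice":  [],
    "split Billing module":       ["extract PriceCalculator",
                                   "inject clock into Invoice"],
    "support VAT (initial goal)": ["split Billing module"],
}

done = set()
while len(done) < len(graph):
    # A leaf is a pending node whose prerequisites are all completed.
    leaf = next(node for node, deps in graph.items()
                if node not in done and all(dep in done for dep in deps))
    print(f"refactor, test, commit : {leaf}")
    done.add(leaf)
```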

When I first read about the mikado method, it seemed very simple and powerful. Things got more complex when I tried to apply it. For example, the fact that some changes don’t compile hides future test failures. That means that very often, the “Build the graph” and “Walk the graph” phases overlap. In real life, the graph evolves and changes over time.

My advice about the Mikado Method is not to take it to the letter. It’s a fantastic communication tool. It helps not to get lost and to avoid a refactoring tunnel. It also helps to tackle refactoring as a team.

It is not a strict algorithm though. Build and tests are not the only way to build the graph. Very often, a bit of thinking and expert knowledge are the best tools at hand.

Cover of The Mikado Method book

7. Bubble Context

Refactoring needs to be opportunistic. Sometimes there are shortcuts in your refactoring path.

If you have access to a domain expert, the Bubble Context will cut the amount of refactoring to do. It’s also an opportunity to get rid of all the features that are in your software but are not required anymore.

The Bubble Context originated from the DDD community, as a way to grow a domain in an existing code base. It goes like that :

  1. Find a domain expert
  2. (Re)write clean code for a very tiny sub domain
  3. Protect it from the outside with an anticorruption layer
  4. Grow it little by little

I have friends who are fans of the bubble context. It is super effective provided you have a domain expert. It is a method of choice in complex domain software.

8. Strangler

The Bubble Context works great when refactoring domain-specific code. What about the rest ? I had good results with the Strangler pattern.

For example, we had to refactor a rather complex parser for an internal DSL. It was very difficult to incrementally change the old parser, so we started to build a new one aside. It would try to parse, but delegate to the old one when it failed. Little by little, the new parser was handling more and more of the grammar. When it supported all the inputs, we removed the old one.

The strangler is particularly well suited for refactoring technical components. They have more stable interfaces and can be very difficult to change incrementally.
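
At its heart, a strangler is little more than try-then-delegate. Here is a minimal Python sketch of the shape such a parser can take (the parser internals are hypothetical stubs ; only the delegation pattern matters) :

```python
class UnsupportedInput(Exception):
    """Raised by the new parser for grammar it cannot handle yet."""

def new_parse(source):
    # Hypothetical stub : handles only the rewritten parts of the grammar.
    raise UnsupportedInput(source)

def old_parse(source):
    # The legacy parser still understands everything.
    return ("legacy-ast", source)

def parse(source):
    # Strangler entry point : try the new parser, fall back to the old one.
    # As new_parse grows, old_parse gets called less and less, until it dies.
    try:
        return new_parse(source)
    except UnsupportedInput:
        return old_parse(source)
```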

9. Parallel Run

This is more of a trick than a long term strategy. The idea is to use the initial (legacy) version of the code as a reference for your refactoring. Run both and check that they are doing the same thing.

Parallel Railroads

Here are some variations around this idea.

If the code you want to refactor is side effect free, it should be easy to duplicate it before refactoring. This enables running both to check that they compute the same thing.

Put this in a unit test to bootstrap a test-refactor loop. You can also run both in production and log any difference. You’ll need access to production logs … Devops teams have a refactoring advantage !
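
As a rough illustration of the production variation, here is a minimal Python sketch with a made-up pricing function : keep answering with the legacy result, but log every divergence of the refactored version.

```python
import logging

def total_legacy(order):
    return sum(qty * price for qty, price in order)

def total_refactored(order):
    # The rewrite under scrutiny (intentionally identical here).
    return sum(qty * price for qty, price in order)

def total(order):
    old, new = total_legacy(order), total_refactored(order)
    if old != new:
        # In production, this log line is the alarm bell of the parallel run.
        logging.warning("parallel run mismatch : old=%s new=%s order=%s",
                        old, new, order)
    return old  # keep trusting the legacy answer until the logs stay silent
```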

Here is another use of your logs. If the code writes a lot of logs, we can use them as a reference. Capture the logs of the old version, and unit test that the refactored version prints the same logs out. That’s an unmaintainable test, but good enough to bootstrap the test-refactor loop.

The Gilded Rose kata is a good exercise to practice this last technique.

10. Dead code is better off dead

You don’t need to refactor dead code ! Again, access to production logs is a great advantage for refactoring.

Add logs to learn how the real code runs. If it’s never called, then delete it. If it’s only called with some set of values, simplify it.
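
As a sketch of the technique (with a hypothetical function and values), a couple of probe lines are enough :

```python
import logging

logger = logging.getLogger(__name__)

def legacy_rebate(customer, amount):
    # Probe : if this line never shows up in production logs after a few
    # weeks, the whole function is dead and can be deleted.
    logger.info("legacy_rebate(%r, %r) called", customer, amount)
    if amount > 10_000:
        # Same trick at branch level : never logged means simplify it away.
        logger.info("legacy_rebate : large-amount branch reached")
        return amount * 0.95
    return amount
```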

No silver bullet

That was a whirlwind tour of the legacy code refactoring techniques I know. It’s no promise that refactoring will become easy or fast. I hope it is a good starting point to set up and walk a refactoring plan.

This was the last post of a series of 3 about how to learn refactoring techniques. If you didn’t already, check part 1 7 Reasons Why Learning Refactoring Techniques Will Improve Your Life as a Software Engineer and part 2 How to Start Learning the Tao of Incremental Code Refactoring Today.

How to Start Learning the Tao of Incremental Code Refactoring Today

In my last post, I explained why incremental refactoring techniques will make you both more productive and relaxed.

As with anything worth its salt, the path to full mastery is long and requires dedication. The good news is that you’ll start to feel the benefits long before you are a master.

Dedicated Practice

The quickest thing that will get you up to speed is dedicated practice. Take some time to do some exercises outside of any ‘production’ code.

TDD Coding Katas

The most famous practice to learn TDD also works very well to learn refactoring. That shouldn’t be a surprise as incremental refactoring is an integral part of TDD.

There are many ways to do your first coding kata. You could find a coding dojo near you (ask meetup.com). Or you could find motivated colleagues to start one at your company … I wrote in more detail about how to attend a coding dojo in this post.

Emily Bache's Coding Dojo book cover

You can also practice katas on your own. My friend Thomas Pierrain rehearses the same katas to discover deeper insights.

Refactoring Golf

The goal of incremental refactoring is to keep the code production ready all the time. Smaller commits are one happy consequence of that.

You can stretch your refactoring muscles by doing coding katas and keeping the code compiling all the time. You’ll need to master your IDE and its automated refactoring. Most of all, it will shift your attention from the goal to the path !

I learned at the SPA conference that this is called ‘Refactoring Golf’. The name comes from golf contests, popular in the Perl community, where the goal is to write the shortest program possible for a specific task. The goal of a Refactoring Golf is to go from code A to code B in the fewest transformations possible.

There are a few refactoring golf repos on GitHub. I tried one and found it fun ! Give it a try too !

Study some theory

Real mastery does not come by practice alone. Studying theory alongside practice yields deeper insights. Theory enables you to put your practice into perspective and to find ways to improve it. It saves you from getting stuck in bad habits. It also saves you from having to rediscover everything by yourself.

Develop your design taste

In Taste for Makers, Paul Graham explains why taste is fundamental to programming. Taste is what allows you to judge if code is nice or bad in a few seconds. Taste is subjective, intuitive and fast, unlike rules, which are objective but slower. Expert designers use taste to pinpoint issues and good points in code on the spot.

Within the fast TDD – Refactoring loop, taste is the tool of choice to drive the design. Guess what : we can all improve our design taste !

Code smells are the first things to read about to improve your design taste. Once you know them well enough, it will be possible to spot things that might need refactoring as you code.

Spotting problems is nice, but finding solutions is better ! Design Patterns are just that … There has been a lot of controversy around Design Patterns. Overusing them leads to bloated code, but using them to fix strong smells makes a lot of sense. There is even a book about the subject :

Joshua Kerievsky's Refactoring To Patterns book cover

Finally, there’s a third and most common way to improve our design taste. It’s to read code ! The more code we read, the better our brain becomes at picking up small clues about what is nice and what is not. It’s important to read clean code but also bad code. To read code in different languages. Code built on different frameworks.

So, read code at work, read code in books, read code in open source libraries, good code, legacy code …

Learn your refactorings

As with most topics in programming, there is a reference book about refactoring. It’s Martin Fowler’s Refactoring: Improving the Design of Existing Code. Everything is in there : smells, unit testing and a repository of refactoring walkthroughs.

Martin Fowler's refactoring book cover

The book is said to be a difficult read, but the content is worth gold. If you have the grit, give it a try ! By the end, you should understand how your IDE does automated refactoring. You should also be able to perform by hand all the refactorings that your IDE does not provide ! This will enlarge your refactoring toolbox, and help you to drive larger refactorings from A to B.

Develop a refactoring attitude

Practice makes perfect. Whatever our refactoring skill, there is something to learn by practicing more.

Make it a challenge

As you are coding, whenever you find a refactoring your code needs, make it a challenge to perform it in baby steps. Try to keep the code compiling and the tests green as much as possible.

When things go wrong, revert instead of pushing forward. Stop and think, try to find a different path.

If you are pairing, challenge your pair to find a safer track.

This might delay you a bit at first, but you’ll also be able to submit many times per day. You’ll see that your refactoring muscles will grow fast. You should see clear progress in only 1 or 2 weeks.

Team up against long refactorings

If your team prioritizes a user story that will need some re-design, try to agree on a refactoring plan. The idea is to find a coarse grain path that will allow you to commit and deliver many times. This plan might also help you to share the work on the story.

Having to question and explain your assumptions will speed up your learning. 

Legacy code

Refactoring is most useful with bad legacy code. Unfortunately, it is also where it is the most difficult. Next week’s blog post will be about what we can do to learn how to refactor legacy code.

That was my second post in this mini-series about refactoring. First one was 7 Reasons Why Learning Refactoring Techniques Will Improve Your Life as a Software Engineer. The third and last is 10 things to know that will make you great at refactoring legacy code

7 Reasons Why Learning Refactoring Techniques Will Improve Your Life as a Software Engineer

This post is a bold promise. Mastering incremental refactoring techniques makes our lives as software engineers more enjoyable.

I have already made the same statement about TDD before. As refactoring is a part of TDD, one could think I am repeating myself. At the same time, a recent Microsoft blog post argued that refactoring is more important than TDD. Even though I’m a TDD fan, that’s an interesting point.

Incremental refactoring is key to making releases non-events ! As early as 2006, using XP, we were releasing mission critical software without bugs ! We would deliver a new version of our software to a horde of angry traders and go to the movies without breaking a sweat !

What’s so special about incremental refactoring ?

Avoid the tunnel effect

A long tunnel

Mastering incremental refactoring techniques allows you to break a feature down into baby steps. Not only smaller commits, but also smaller releases ! You can deploy and validate every step in production before moving to the next !

Small releases are also a lot easier to fix than big bang deployments. That alone is a good enough reason to deploy in baby steps.

There are a lot of other advantages to small deployments. Merges become straightforward. Someone can take over your work if you get sick. Finally, it’s also easier to switch to another urgent task if you need to.

Deliver early

When you know that you will be able to improve your work later on, it becomes possible to stick to what’s needed now. After spending some time working on a feature, it might turn out that you delivered enough value. Between enhancing this feature and starting another one, pick the most valuable. Don’t be afraid to switch : incremental refactoring makes it easy to resume later on if it makes sense.

Real productivity is not measured through code, but through feature value. This explains why incremental refactoring is more productive than up-front / big-bang development.

Know where you stand

As you work through your feature, you’ll have to keep track of the done and remaining steps. As you go through this todo list and deliver every successive step, you get a pretty clear idea of where you stand. You’ll know that you’ve done 3 out of 7 steps, for example. It helps everyone to know what work remains and when you’ll be able to work on something else.

Tangled wool

A few times, I fell in the trap of features that should have taken a few hours and that lingered for days. I remember how stupid I was feeling every morning, explaining to my colleagues that it was more complex than I had thought, but that it should be finished before tomorrow … Learning incremental refactoring techniques saved me from these situations.

Deliver unexpected features

Incremental refactoring techniques improve the code. As a systematic team-wide effort, they keep the code healthy and easy to evolve. When someone requests an unexpected feature late, you’ll be able to deliver it.

This should improve your relationship with product people. They will be very happy when you build their latest idea without a full redesign.

Avoid rewrites

Joel Spolsky wrote a long time ago that rewriting a large piece of software is the number 1 thing not to do ! All my experiences in rewriting systems have been painful and stressful.

It always starts very rosy. Everyone is feeling very productive with the latest tools and technologies. Unfortunately, it takes a lot of features to replace the existing system. As always with software, the time estimates for the rewrite are completely wrong. As a result, everyone starts grumbling about why this rewrite is taking so long. The fact that the legacy system is still evolving does not help either. Long story short, the greenfield project ends up cutting corners and taking technical debt pretty fast … fueling the infamous vicious circle again.

Incremental refactoring techniques offer an alternative. They enable changing and improving the architecture of the legacy system. It looks longer, but it’s always less risky. And looking back, it’s almost always faster as well !

Ease pair programming

eXtreme Programming contains a set of practices that reinforce each other. As I wrote at the beginning, refactoring goes hand in hand with TDD. Pair programming is another practice of XP.


TDD and Refactoring simplify pair programming. When a pair is doing incremental refactoring, they only need to discuss and agree on the design at hand. They know that however the design needs to evolve in the long term, they’ll be able to refactor it. It’s a lot easier to pair program if you don’t have to agree on all the details of the long term design …

In turn, pair programming fosters collective code ownership. Collective code ownership increases the truck factor. Which reduces the project risks and makes the team’s productivity more stable. In the long run, this makes the work experience more sustainable and less stressful.

Simplify remote work

Refactoring will also save you from the commutes and allow you to work closer to the ones you love !

Refactoring techniques enable small commits. Small commits simplify code reviews, which are key to remote or distributed work. Even if you are doing remote pair programming, small commits help to switch control between buddies more often.


To be continued

I hope that by now I have persuaded you to learn incremental refactoring techniques. My next post will dig into the details of how to do that.