How to Run Your First Improvement Kata

The improvement kata can solve problems that typical retrospectives fail to address. Although there is a halo of mystery around it, it’s actually not that difficult to get started! Here is a guide.

During the last few weeks, I’ve been blogging about the improvement kata. You can read the full story of the first time we applied it in our team to gain a 25% productivity boost. If you are more interested in what it taught us, check this other post.

Illustration of the improvement kata cookbook

A 5-bullet summary

Here is how I would explain what the improvement kata is:

  • It’s a continuous improvement technique. It relies on the scientific method to reach a target state.
  • It involves running experiments to know whether your ideas are valid.
  • It can take long to run through, but it works on tricky situations where retrospectives don’t.
  • It’s 100% scientific: it uses data analysis and deduction, not gut feeling and community best practices.
  • It can be part of the backlog, like any other item. It does not have to be a special event, as retrospectives usually are.

The main steps of the improvement kata

💡 The improvement kata is 100% scientific: it uses data and deduction, not gut feeling and best practices

Let’s give it a try!

Is that enough for you to give it a try? If so, great, read on! If you need a bit more convincing, check the full story of how we gained 25% in productivity with it.

Here is how to get started:

  1. The first thing is to read about it. If you have the time, Toyota Kata is a good read. If you read French, the “Petit guide de management lean à l’usage des équipes agiles” is a very pleasant and easy read. Finally, if you want to keep it as short as possible, read the Toyota Kata website.
  2. Pick a topic to try it on. The best candidates are clear and important problems. They might have emerged out of a retrospective, for example. The scope should be small enough not to get lost.
  3. Once you’ve identified a topic, someone or a pair should take ownership of the kata. It’s very unlikely that you’ll be able to do the full kata in one afternoon: understanding happens when the brain is at rest, and experiments take time. The owners need to dedicate some time to follow up on the kata.
  4. Repeatedly ask yourself the coaching kata questions. This will help you and your pair stay on track (see the sketch after this list).

    1. What’s the target condition? (Describe what you are trying to achieve.)
    2. What’s the actual condition? (Describe your current situation.)
    3. What obstacles do you think are preventing you from reaching the target condition? Which one are you addressing now? (Describe the first problem you are about to try to fix.)
    4. What is your next step? What do you expect? (Describe the experiment you are going to run to test a solution.)
    5. When can we go and see what we have learned from taking that step? (Run the experiment and learn from the results. Decide on a process change, or repeat from an earlier step.)
  5. Going through the kata as a pair is a great way to spread the practice within the team. At some point, you might be able to run many improvement katas in parallel! Just make sure not to step on each other’s toes…
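
To make step 4 concrete, here is a tiny, illustrative Python helper (not an official tool, just a convenience sketch) that walks you through the five coaching kata questions and collects your answers, for example to paste into a kata log:

```python
# Illustrative sketch: prompt for the five coaching kata questions.
# The questions come from the list above; the script itself is hypothetical.

QUESTIONS = [
    "What's the target condition?",
    "What's the actual condition?",
    "What obstacles are preventing you from reaching the target condition? "
    "Which one are you addressing now?",
    "What is your next step? What do you expect?",
    "When can we go and see what we have learned from taking that step?",
]

def coaching_checkin() -> dict:
    """Ask each question in turn and collect the answers."""
    return {question: input(f"{question}\n> ") for question in QUESTIONS}

if __name__ == "__main__":
    for question, answer in coaching_checkin().items():
        print(f"- {question}\n  {answer}")  # e.g. copy this into your kata log
```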

Expect the first time to be a bit rocky, and expect to feel lost from time to time…

Start today!

The main steps of the improvement kata

Many practices and techniques seem daunting at first. Remember the first time you wrote a test before the code, the first time you tried to program using only immutable data structures, or the time you wrote your first “hello world” program!

💡 We can learn anything on our own by just doing it

The improvement kata is no different. Give it a go, and you’ll learn a powerful technique.

Whether you have already used the kata, plan to use it, or have questions about it, I’d like to hear from you! Leave a comment.

Lessons Learned From Running Our First Improvement Kata

During the past few weeks, I blogged the story of our first improvement kata.

Doing this first improvement kata taught us many lessons. We rediscovered best practices of the software industry. We also learned more general things about the improvement kata itself.

Drawing of books

Rediscovered best practices

As we went through the kata, we ‘proved’ many known best practices. We did not have to take them on faith; we had data showing that they worked.

We also pushed the #NoBug policy further than is usually done. We wrote a clear definition of a bug that anyone could use. Doing so, we removed the product owner (or on-site customer) from the picture. Very often, the PO is the only one who can sort bugs from stories. We created what Donald Reinertsen calls a distributed rule in his Flow book. It increased empowerment and removed bottlenecks, while ensuring alignment.

The 'Flow' book cover

The improvement kata

The first general lesson that we learned is that the improvement kata works!

At the beginning, we were very uneasy about not having perfect data. Remember how we had to resort to velocity as a proxy measure for productivity. In the end, that was not a severe problem. It still allowed us to understand what was going on.

We also learned that retrospectives are not the only road to continuous improvement. In fact, the improvement kata and retrospectives are very different:

  • The time frame is different. A retrospective lasts for 1 or 2 hours and yields immediate results. An improvement kata is a background task that could, in theory, take months!
  • But the improvement kata also digs a lot deeper into the topic and brings true understanding. In our case, it fixed a problem that retrospectives were failing to address.
  • Ownership is also different. Retrospectives are a team activity. The improvement kata needs one or a few owners. It would be very difficult to align everyone on the same line of thought if we did it as a group activity.
  • Being a team activity, retrospectives have built-in alignment. The conclusions of an improvement kata must be explained and agreed upon for a team to act on them. A good practice is to hold regular (short) team meetings to share the current state of an improvement kata.
  • As the improvement kata is a more individual activity, it is more remote-friendly. Team members can run the kata on their own, sharing everything through a wiki or a blog, for example.

Keep in mind that this was our first try at the kata. Some of our difficulties might disappear with a bit of practice!

What’s next?

Hybrid continuous improvement

I clarified the differences between the improvement kata and retrospectives. That’s not the end of it: I’m sure a mixed format could be very interesting! Start with a retrospective to collect the team’s problems, and vote on the most important ones. Add a corresponding improvement kata task to the backlog. Someone would then handle this improvement task, sharing progress with the team along the way.

This might be a great opportunity to reduce meeting time with shorter retrospectives.

💡 Reduce meeting time with mixed retrospective & improvement kata

Data science for software

Going through this improvement kata made me realize something else: it is very difficult to get quality data to investigate. We had to make do with whatever was available, all the way through.

What’s striking is that we use software tools for all our work! These tools have logs and could record usage data somewhere. Imagine all we could learn by mining this data! Our IDEs, CI servers, quality monitors, test tools, version control and project management tools should store everything we do in the same place!

With all this data, could we improve our estimations? Could we find creative ways to be more productive? Could we estimate the speed-up that fixing some technical debt would bring?

💡 What are we waiting for to apply data science to the development process?

As the saying goes, “The cobbler’s children go barefoot”. We are building such systems for others, but not for ourselves.

Fortunately, new tools like CodeScene are emerging to fill this gap. You can learn more about CodeScene on their website, or from the founder’s book. It analyses the version control history to find hot spots and other insights in your code.

The 'Code as a Crime Scene' book cover

While we keep dreaming of such tools, I’ll continue to blog. Next week, I will write a short guide on how to run your first improvement kata.

How We Used the Improvement Kata to Gain 25% of Productivity - Part 5

This is the fifth (and last) post of a series of 5 about the improvement kata. If you haven’t read the beginning of the story, I recommend you start from part 1.

In the previous post, we decided to adjust our definition of a bug to limit the time lost on nice-to-have bug fixes.

It would take a while to know whether adjusting the definition of a bug would help us or not. At the same time, we knew it would not help us reduce the number of bugs we were leaking to other teams.

A 'SUCCESS' banner in the wind

Idea 3: More exploratory testing

We decided to push on this matter as well. This meant that we would be running two PDCAs (Plan-Do-Check-Act) at the same time. This is not the improvement kata procedure by the book. That could have been an error on our part, as first-time users of the kata. Or maybe it’s a pragmatic adaptation of the kata to real life… I guess we’ll know better as we apply the kata more often. The danger is that the different experiments can conflict with each other: when measuring the results, it becomes difficult to link observations to experiments. Anyway, back to our own situation; as you’ll see, it ended up well for us.

The first thing was to learn a bit more about our bugs. Checking the recently closed bugs yielded suspicions about a recent feature. Further analysis confirmed our gut feeling.

Curve of how bugs were fixed over the last 2 months

Curve of the origin of bugs over the last 2 months

Ignoring the Christmas drop in the middle of the curve, we concluded 2 things from these graphs:

  • We were leaking bugs to the product
  • Bugs mostly came from newly added features

Despite all our automated tests and regular exploratory testing, we were leaking bugs.

We decided to do more exploratory testing for a while! We were already doing exploratory testing at the end of every story. We added an extra 1-hour team session of exploratory testing every sprint.

Do, Check & Act

We used these new conventions for a few weeks. We did more exploratory testing, and were stricter about what counted as a bug. We stuck to our prioritization: improvements first, then bugs, and only then stories.

After a few weeks of that, we were able to update our bug trend and run a linear regression on it again. Here were the results:

Curve of the origin of bugs on the last 2 months

Hurray! As you can see, the regression predicted we would be done with bugs around April 2017, which was 3 months away at that time.
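
For the curious, here is roughly what such a regression looks like in code. This is a sketch with made-up numbers, not our actual data: fit a line to the open-bug count per sprint and extrapolate to find when it crosses zero.

```python
# Sketch: extrapolate when the bug debt will be paid back.
# The sprint numbers and bug counts below are illustrative only.

import numpy as np

sprints   = np.array([60, 61, 62, 63, 64, 65])  # sprint numbers
open_bugs = np.array([40, 36, 31, 28, 22, 18])  # open bugs at end of sprint

slope, intercept = np.polyfit(sprints, open_bugs, 1)  # least-squares line
zero_crossing = -intercept / slope                    # where the line hits 0

print(f"Trend: {slope:.1f} bugs per sprint")
print(f"Bug debt paid back around sprint {zero_crossing:.0f}")
```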

💡 Quality is free, but only for those willing to pay for it! [Tom DeMarco in Peopleware]

Cover of the 'Peopleware' book by Tom DeMarco & Timothy Lister

We confidently adopted these practices as part of our working agreements. This brought our first improvement kata to its end.

💡 The improvement kata not only brings improvements, it also teaches you why they work.

3 months later

As you know, April 2017 is long gone. I can now give you a more up-to-date report of the actual effects on the team’s daily work.

First, the backlog does not contain bugs anymore: we paid the bug debt back. Second, we still discover some bugs from time to time, but a lot less than we used to. To summarize, a pair of developers (25% of the team) can now work on user stories instead of fixing bugs.

As we are still fixing bugs as they appear, the 25% productivity gain claim might be an overstatement, but 20% is not. At the same time, fewer bugs are now escaping. This means that the whole organization is saving on interruptions and rework. 25% might not be such a bold claim after all!

This is it!

This was post 5 in a series of 5 about the improvement kata. I’m not completely done writing about this improvement kata though. In the coming weeks, I’ll post about the lessons learned and how to start your own improvement kata.

How We Used the Improvement Kata to Gain 25% of Productivity - Part 4

This is the fourth post in a series of 5 about the improvement kata. If you haven’t read the beginning of the story, I recommend you start from part 1.

We ended the previous post with a target condition:

We’d like to be done with bugs within 3 months.

Currently, 1 pair (one quarter of the team) is constantly busy fixing bugs. If we manage to find a way to spend less time on bugs, we can expect a productivity increase of about 25%.

Anti bug insecticide can

The next step in the improvement kata is to run Plan-Do-Check-Act (PDCA) experiments. But before you can run experiments, you need ideas!

Idea 1: Stop exploratory testing

We’d like to spend less time fixing bugs. At the same time, we know that we started to spend more time on bug fixing when we started exploratory testing.

We thought that one easy way to do less bug fixing was to stop exploratory testing altogether! We listed the pros and cons of exploratory testing.

The obvious cons

  • It discovers bugs, and fixing these bugs costs time. We’d rather spend this time delivering new features.

But the pros are many

  • It helps us discover technical debt and improve the quality of our code, which makes us faster in the long run. Clean code has fewer bugs. When we discover many bugs in a particular region of the code, it means we should refactor it.
  • It speeds up integration with other teams’ work in many ways:
    • It saves other teams from debugging their time away trying to use broken things.
    • Fixing bugs is sometimes just a matter of providing clear ‘not supported yet’ errors. Doing this saves other teams a tremendous amount of time.
    • It avoids blocking other teams while they wait for your bug fixes.
    • It reduces interruptions from bug reports and fixes bouncing between teams.
    • By reducing unplanned rework, it makes any commitment you make more reliable.

In light of all this, stopping exploratory testing did not seem like a great idea. We had to find something else.

💡 Saving time by not fixing bugs might not be a great idea

Exploration map with a red cross

Idea 2: Review what a bug is

We needed to find a middle ground between where we were and stopping exploratory testing. We wanted:

  • not to let bugs escape
  • to raise clean errors on things that are not yet supported
  • to prevent scope creep from bug reports

A few months earlier, when we decided to prioritize bugs before features, we had written down a definition of a bug. Here is what it looked like: something is a bug if any one of these is true:

  1. It used to work
  2. It corrupts data
  3. It silently returns wrong results
  4. It returns an internal error (like an internal exception stack trace)

We examined this definition, discussed it, and decided to dump the first clause. Our logic was that it was very difficult to know whether something had ever worked. The best thing we could refer to was our extensive suite of tests. If we discover that something is not working, it is very likely that it never worked. And if it had been working, it means there was a hole in our test suite.

💡 A hole in a test suite is a bug to fix

We decided to test this new bug definition for a while.
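
As an illustration of why this works as a distributed rule, here is the revised definition sketched as code. The field names are hypothetical; the point is that any team member (or even a script) can apply it without asking the product owner:

```python
# Sketch: the revised bug definition as a rule anyone can apply.
# Clause 1 ("it used to work") has been dropped, as explained above.

from dataclasses import dataclass

@dataclass
class Report:
    corrupts_data: bool
    silently_wrong_result: bool
    internal_error: bool  # e.g. an exception stack trace reaches the user

def is_bug(report: Report) -> bool:
    return (report.corrupts_data
            or report.silently_wrong_result
            or report.internal_error)

# Anything else is a story for the backlog, not a priority bug fix.
print(is_bug(Report(False, False, True)))  # True
```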

Not there yet!

We’re getting closer to the end, but we’re not done with PDCA yet.

This was post 4 in a series of 5 about the improvement kata. In the next and last post of this story, you’ll learn how we wrapped up the PDCAs with great results.

How We Used the Improvement Kata to Gain 25% of Productivity - Part 3

This is the third post in a series of 5 about the improvement kata. If you haven’t read the beginning of the story, I recommend you start from part 1.

In the previous post, I explained how we started to understand what was going on. We were now questioning our way of handling bugs.

Are we spending too much time on bugs?

Bugs drawn on top of code

More understanding

Types of tasks

To answer this question, we decided to plot the different types of tasks we had completed per sprint.

Bar chart with the types of tasks over sprints

Think again of the velocity curve we started with. We see an almost exact correlation between story count (green bars above) and story points (blue curve below); a quick way to check such a correlation is sketched below the graph.

💡 #NoEstimates works

Velocity graph
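
Checking such a correlation takes a couple of lines. Here is a sketch with illustrative numbers (not our real sprint data):

```python
# Sketch: correlation between story count and story points per sprint.
# A coefficient close to 1.0 suggests counting stories is enough.

import numpy as np

story_count  = np.array([8, 10, 7, 12, 9, 11])     # stories done per sprint
story_points = np.array([21, 26, 18, 30, 24, 28])  # velocity per sprint

r = np.corrcoef(story_count, story_points)[0, 1]
print(f"Pearson correlation: {r:.2f}")
```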

We can also see that after sprint 56, we were spending more time on bugs and improvements. Improvements are supposed to improve productivity, so we decided to focus on bugs first. Here is what we get if we compute the average number of bugs fixed per sprint:

| Period | Sprints | Bugs | Average bugs fixed per sprint |
| --- | --- | --- | --- |
| 2015, before sprint 56 | 15 | 21 | 1.4 |
| After sprint 56 | 34 | 210 | 6.1 |

Starting with sprint 56, we were fixing more than 4 times as many bugs as before (6.1 versus 1.4 per sprint)!

What is going on with bugs?

At this point, we felt we had made a great step forward in our understanding. We almost thought we were done with it…

After a bit of thinking though, it was clear that we were not! We still needed to understand why we were in this situation.

We started by listing more questions:

  • Could it be that we had just gotten a lot better at testing? Since sprint 56, we had been doing regular exploratory testing. Exploratory testing sessions were very effective at finding bugs.
  • Were we paying back a bug debt? The created-versus-resolved trend seemed to show so. But it could also be that we weren’t testing as well as we used to!

Created vs Resolved Bugs graph

  • If we were paying back a bug debt, how close were we to the end of the payback?
  • Were we creating too many flaws in the software?
  • Were we fixing too many bugs? If so, what should we do to fix fewer?
  • Were the bugs coming from other teams using our component, or from our own testing?
  • Were bugs found in new or old code?

A lot of questions, all difficult to answer. We decided to first check whether we were paying back a bug debt. If that was the case, most other questions would become more or less irrelevant. With a bit of thinking, we came up with a measure to get the answer.

Are we paying back a bug debt?

We first started to do exploratory testing at sprint 56. Back then, we would run a 1-hour session where the pair finding the most bugs would win fruit. (Later on, we streamlined exploratory testing into the workflow of every story.) At that time, we used to find more than 10 bugs in 1 hour.

💡 Gamification transforms nice developers into berserk testers!

| Explo test session | 61 | 62 | 63 | 64 | 66 | 16.01 |
| --- | --- | --- | --- | --- | --- | --- |
| Bugs found | 16 | 6 | 16 | 10 | 11 | 11 |

We would run another such session. If we found significantly fewer than 10 bugs, say fewer than 6, it would mean that:

  • we improved the quality of our software
  • our streamlining of exploratory testing works
  • if we continue to search and fix bugs as we do currently, we’ll reach a point where we won’t find any more bugs

Otherwise, none of these would hold, and we would have to continue our investigations.

So we ran a 1-hour, fruit-powered exploratory testing session. And we found only 3 bugs! Indeed, we were paying back a bug debt. The question became:

When should payback be over?

A linear regression on the created-versus-resolved bug trend showed that we still had 15 more months to go!

Bug trend graph

Target condition

At that point, the target condition became obvious:

We’d like to be done with bugs within 3 months.

At the time, around 1 pair (25% of the team) was constantly busy fixing bugs. If we managed to bring this down, we would get a 25% productivity boost.

This was post 3 in a series of 5 about the improvement kata. The next post will be about PDCA.

How We Used the Improvement Kata to Gain 25% of Productivity - Part 2

In my previous post, I described the productivity issue our team was facing, how retrospectives failed to address it, and how I started looking at the improvement kata.

We had gone through the first phase of the improvement kata: setting the end goal.

Generating enough profit for the business while sticking to a sustainable pace.

Time to start the second phase: understand.

Drawing of a question mark transforming into a light bulb

Understand

Were we really slower? Or was it an illusion?

When trying to understand, you have to start with the data you have. You continue digging until you get a deeper understanding of the situation.

Velocity

We started with the available data: story points and velocity. For sure, this is a pretty bad measure of productivity (note: we should never use velocity for performance appraisal). In our case though, it felt like a good starting proxy measure.

Here is our velocity curve over 2 years.

Velocity graph

It’s obvious that something changed: there are 2 parts to this graph, with the velocity dropping between sprint 54 and sprint 16.01. That was a clue that our gut feeling was not completely wrong: our productivity did change.

Man days

Our first hypothesis was that team member turnover was the cause. As with any team, some people came and some people left. Let’s superimpose the man-days and velocity curves.

Velocity vs Manpower graph

That could only explain part of the problem!

We tried to fine-tune the man-days curve. We took into account people’s involvement in tasks outside of programming, and we weighted developers with coefficients based on their experience. That did not provide a better explanation.
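
For the record, here is a sketch of what that fine-tuning could look like. The coefficients and numbers are illustrative, not the ones we actually used:

```python
# Sketch: weighted man-days, adjusted for experience and non-programming tasks.

EXPERIENCE_WEIGHT = {"junior": 0.7, "confirmed": 1.0, "senior": 1.2}

team = [
    # (experience level, days present, days on non-programming tasks)
    ("senior",    10, 2),
    ("confirmed", 10, 0),
    ("junior",     8, 1),
]

weighted_man_days = sum(
    EXPERIENCE_WEIGHT[level] * (present - other_tasks)
    for level, present, other_tasks in team
)
print(f"Weighted man-days this sprint: {weighted_man_days:.1f}")
```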

We had to find another explanation.

Velocity computation

As I said earlier, velocity is not a measure of productivity. Any change in the way we were computing velocity would impact this curve.

We had kept photos and Trello boards from our retrospective meetings. We searched through them for anything that could impact velocity. Here is what we found:

  • At sprint 55, we decided to ditch the focus factor.
  • At sprint 61, we started doing regular exploratory testing. Exploratory testing finds more bugs on top of user-reported ones, which made us spend more time fixing bugs.
  • At sprint 62, as we opted for a No Bug policy, we decided not to count story points for bugs.

💡 Keep photos and Trello boards of retrospectives as a log of changes to your working agreements

The timings almost perfectly matched what we had observed in the first place. The question that came to our minds was:

Are we spending too much time on bugs?

Halfway through understanding

This is how we started to dig into our situation. It’s a good time to give you a bit of feedback about how we felt at that point.

It was the first time we tried the improvement kata. More than that, we did not find any tutorials or guides about how to run it. The only instructions we had were either theoretical descriptions or very concrete examples. We had to bridge the gap and come up with our own way.

To summarize, we felt a bit lost, we had gathered data from here and there, and we did not know what to look at next. On top of that, the quality of the data we were collecting was not great. We were wondering if we would get anything out of these investigations.

The cover of the book 'The First 20 Hours'

It felt a bit like when I did the 20-hour experiment to learn anything. We did exactly what had worked with that learning experiment: we pushed through!

💡 If you feel lost when doing something for the first time, push through!

In next week’s post, I’ll continue to detail the ‘understand’ phase. The series also gained an extra post, and will now be 5 posts long.

More to read next week.

How We Used the Improvement Kata to Gain 25% of Productivity - Part 1

If you are serious about continuous improvement, you should learn the improvement kata.

Retrospectives are great for picking all the low-hanging improvements. But once you’ve caught up with the industry’s best practices, retrospectives risk drying up. Untapped improvement opportunities likely still exist in your specific context. The improvement kata can find them.

Low- and high-hanging fruit on a tree

Here is how we applied the improvement kata to gain a 25% productivity boost in my previous team.

The Situation

Thanks to repeated retrospectives, the team had been improving for 2 years. Retrospectives seemed like a silver bullet. We would discuss the current problems, grasp an underlying cause and pick a best practice. Most of the time, that fixed the problem.

Sometimes it did not work on the first try; but even if the issue came back in a later retrospective, it would not survive a second scrutiny. In the previous two years, the team had transformed the way it worked. It adopted TDD, incremental refactoring, pair programming, remote work, automated performance testing and many other practices.

Lately though, things had not been working so well. The team was struggling with productivity issues. It was not slowing down, but the scope and complexity of the product had grown. Features were not getting out of the door as fast as they used to. We had the habit of prioritizing improvements and bug fixes over features. That used to improve the flow enough to get more and more features done. It did not seem to work anymore.

We tried to tackle the issue in retrospectives. We would change the way we prioritized features… only to get bogged down later by bugs, technical debt or bad tools. We would discuss that in a retrospective, and decide to change the priorities again… The loop went on like this a few times.

We were getting nowhere.

The improvement kata 

That’s why I started to look for other ways to do continuous improvement. I stumbled upon a book called Small Lean Management Guide for Agile Teams. The book is in French, but I wrote an English review. I fell in love with the way the authors dug into the hard data of how they worked to understand and fix their problems.

To learn more about this technique, I read Toyota Kata. It details two management tools used at Toyota: the improvement and coaching katas. Some say these are Toyota’s secret weapon, the thing that grew it from a small shop into the largest car manufacturer in the world.

They are katas because they are routines: they are meant to be re-executed many times. The improvement kata aims to improve the flow of work. The coaching kata helps someone (or yourself) learn the improvement kata. Every time we go through the kata, we also understand it better.

Here is how the improvement kata goes:

  1. Describe your end goal
  2. Understand where you stand relative to this goal by measuring facts and data
  3. Based on your end goal and the current situation, define where you’d like to be in 3 months or less
  4. Use Plan-Do-Check-Act to experiment your way to this new situation (a toy version of the whole loop is sketched after this list)
    1. Plan an experiment
    2. Do this experiment
    3. Check the results of this experiment
    4. Act on these results. 
      • Either drop the experiment and plan a new one (go back to ‘Plan’).
      • Or spread the change at a larger scale.
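
To see how the pieces fit together, here is a toy version of the whole loop in Python. Everything in it is illustrative (the random “measurements” stand in for real facts and data); it only aims to show the shape of the kata, not to be a real implementation:

```python
# Toy sketch of the improvement kata loop: measure, set a target, PDCA.

import random

def measure_current_condition() -> float:
    """Stand-in for step 2: gather facts and data (here, bugs per sprint)."""
    return random.uniform(4.0, 8.0)

TARGET = 2.0  # step 3: target condition, e.g. bugs fixed per sprint

current = measure_current_condition()
ideas = ["review what a bug is", "more exploratory testing"]  # experiment ideas

for idea in ideas:                                 # step 4: PDCA experiments
    print(f"Plan/Do: trying '{idea}'")
    result = current + random.uniform(-3.0, 1.0)   # Do: toy outcome, may fail
    if result < current:                           # Check: did it help?
        print(f"Act: adopting '{idea}' ({current:.1f} -> {result:.1f})")
        current = result                           # spread the change
    else:
        print(f"Act: dropping '{idea}', planning the next experiment")
    if current <= TARGET:                          # target condition reached
        break
```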

The Improvement Kata Pattern

Image from Mike Rother on Toyota Kata Website

The coaching kata is a way to coach someone into applying the improvement kata. The fun thing is that you can coach yourself! The idea is to ask the coachee questions that remind them of where they stand in their improvement kata.

The Coaching Kata Questions

You’ll find tons of details and material about these katas on the Toyota Kata website.

Our end goal

That’s how I started to apply the improvement kata in my team. I already had an idea of our end goal: to be more productive. To be more precise:

Generating enough profit for the business while sticking to a sustainable pace.

Retrospectives could not get us there. Would the improvement kata succeed?

This is the first part of a series of 4 posts relating our first use of the improvement kata. In the next post, I’ll explain what we did to understand the current situation.

Throwing Code Away Frequently

Here is the main feedback I got about my previous post eXtreme eXtreme Programming.

What do you actually mean by throwing code away? Does it mean stopping unit testing and refactoring?

A drawing of a shredder destroying some code

So I guess it deserves a bit of explanation.

What is it all about?

When programming, I don’t throw code away a lot. I tend to rely on my automated tests to change the code I already have. That might be a problem.

As with everything, there is no one size fits all. We should choose the best practice for our current situation. The same applies to incremental changes versus rewriting.

TDD makes incremental changes cost-effective and, as such, is a key to getting out of the Waterfall.

The idea of throwing code away frequently is to make rewriting cost-effective, so we can do it more often.

Why would we want to do so?

In “When Understanding means Rewriting”, Jeff Atwood explains that reading code can be more difficult than writing it. There comes a point where rewriting is cheaper than changing.

The more unit tests you have, the later you reach that point. The more technical debt you take on, the sooner. And the bigger the part to rewrite, the riskier it becomes.

Let’s imagine you knew a safe way to rewrite the module you are working on. You could be faster by taking on more technical debt and writing fewer unit tests! Mike Cavaliere framed it as F**k Going Green: Throw Away Your Code.

This would be great for new features that might be removed if they don’t bring any value. It would also be a good way to get rid of technical debt. Naresh Jain also makes the point that without tests, you’ll have to keep things simple (here and here)!

Wait a minute, isn’t that cowboy coding?

How to make it work

A graph with all the practices from my previous article eXtreme eXtreme Programming

How to ensure quality without unit tests?

TDD and unit testing are a cornerstone of XP. If we remove them, we need something else to build quality in. Could mob programming and remote teams do the trick?

“Given enough eyeballs, all bugs are shallow”. Pair programming and code reviews catch a lot more bugs than solo programming. Only a few bugs are likely to pass through the scrutiny of the whole team doing mob programming.

What about remote? Martin Fowler explains that remote teams perform better by hiring the best. Programmer skill has long been known to be a main driver of software quality.

People vs methodology impact on productivity

Photo from Steve McConnell on Construx

Finally, the Cucumber team reported that Mob Programming works well for remote teams.

How to make this safer?

Even with the best team, mistakes will happen. How can we avoid pushing a rewrite that crashes the whole system?

The main answer to that is good continuous deployment. We should deploy to a subset of users first, and we should be able to roll back straight away if things don’t work as expected.
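
Here is a sketch of the decision at the heart of such a canary deployment. The thresholds and names are illustrative; a real setup would wire this to live monitoring:

```python
# Sketch: canary gate, decide to promote or roll back a new version.

def canary_gate(error_rate_new: float, error_rate_old: float,
                tolerance: float = 0.01) -> str:
    """Compare error rates after exposing the new version to a few users."""
    if error_rate_new > error_rate_old + tolerance:
        return "rollback"  # the rewrite misbehaves: revert straight away
    return "promote"       # looks safe: roll out to the remaining users

print(canary_gate(error_rate_new=0.002, error_rate_old=0.001))  # promote
print(canary_gate(error_rate_new=0.050, error_rate_old=0.001))  # rollback
```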

As the system grows, microservices can keep continuous deployment under control: we can deploy, update and roll them back independently. By nature, microservices also reduce the scope of a rewrite. That alone, as we said earlier, makes rewrites safer.

As a last point, some technologies make building microservice systems easier and more incremental. The Erlang VM, for example, with first-class actors, is one of these. Time to give Erlang and Elixir a try!

Is this always a good idea?

There are good and bad situations.

For example, a lean startup or data-driven environment seems like a good fit. Suppose your team measures the adoption of new features before deciding to keep or ditch them. You’d better not invest in unit tests straight away.

On the other hand, software for a complex domain will be very difficult to rewrite flawlessly. I have worked in the finance industry for some years, so I know what a complex domain is. I doubt I could rewrite a piece of intricate finance logic without bugs. I would stick to DDD and unit testing in these areas.

How to dip your toe

Here is how I intend to dip my toe. I won’t drop automated testing completely yet. What I could do instead (and have already done on side projects) is to start with end-to-end tests only.

From there, every time I want to change a piece of code, I have several options:

  • Add a new end-to-end test and change the code (a minimal example follows this list).
  • If the code is too much of a mess, rewrite it from scratch, with the safety of the end-to-end tests.
  • If the module is stabilizing, has not been rewritten for a while, and yields well to modifications, start splitting the end-to-end tests into unit tests and embark on the TDD road.
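
Here is what such an end-to-end test could look like, as a pytest sketch against a hypothetical HTTP service (the URL and payload are made up):

```python
# Sketch: an end-to-end test that knows nothing about the implementation.
# Run with pytest; assumes a hypothetical service at localhost:8000.

import requests

BASE_URL = "http://localhost:8000"

def test_create_and_fetch_item():
    created = requests.post(f"{BASE_URL}/items", json={"name": "foo"}).json()
    fetched = requests.get(f"{BASE_URL}/items/{created['id']}").json()
    # Because the test only exercises observable behaviour, we remain free
    # to rewrite the code underneath it from scratch.
    assert fetched["name"] == "foo"
```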

Maybe later, when I have a team doing mob programming to catch my mistakes, we’ll skip the end-to-end tests.


eXtreme eXtreme Programming (2017)

What would XP look like if it were invented today?

A few days ago, I stumbled upon these two talks that got me thinking about this question.

So I looked deeper into the question, while sticking to the same values and principles. Here is what I came up with.

Practices

Continuous Deployment

Why should we still only deliver once per iteration in 2017? Lots of companies have demonstrated how to safely deploy every commit to production. Amazon, for example, deploys new software every second!

DevOps++

As a team starts doing continuous deployment, devs get more and more involved in ops. This privileged view of the users’ behaviour can turn devs into product experts. Why not push the logic further and make them the product people as well?

Test in prod

Continuous deployment opens up many opportunities. Deploying safely requires bulletproof rollbacks. Once devs take care of product, code and ops, they can shortcut testing and test directly in prod with some users! Rollback remains an option at any time.

#NoBugs

That seems like wishful thinking. The idea is to fix bugs as soon as they appear, but also to prevent them in the first place. Continuous deployment requires great engineering practices, which enable this way of working. A story cannot be “finished” until the test in prod is over and the bugs are fixed.

Kanban

At its core, Kanban is a way to limit work in progress. It’s almost a side effect of the previous practices: #NoBugs almost kills interruptions for rework. On top of that, devs have full control over their end-to-end work, so why would they multitask?

#NoBacklog

In Getting Real, the Basecamp guys said that their default answer to any new feature request was “No”. Only after the same thing got asked many times would they start thinking about it. Maintaining a long backlog is a waste of time. Dumping all backlog items but the current ones saves time. Important ideas will always come up again later.

#NoEstimates

This one is famous already. Some teams have saved time by using story count instead of story points. What’s the point anyway, if the team is already:

  • working as best as it can
  • on the most important thing
  • and deploying as soon as possible

Data Driven

This is how to drive the system’s evolution. Instead of relying on projects, backlogs and predictions, use data. Data from user logs and interviews proves whether a new feature is worth building or dumping. Exploring the logs of internal tools can help to continuously improve.

Lean startup & Hackathons

Incremental improvements, in either product or organization, are not enough. As Tim Harford explained in Adapt, survival requires testing completely new ideas. Lean startup experiments & hackathons can do just that.

The cover of the Adapt book

Improvement kata

The improvement kata is a way to drive long-term continuous improvement. It’s the main tool for that at Toyota (read Toyota Kata to learn more). It provides more time to think about the problem than a retrospective does. It also fits perfectly in a data-driven environment.

Mob programming

Pair programming is great for code quality and knowledge sharing. Mob programming is the more extreme version of pairing, where the whole team codes together.

Throw code away frequently

An alternative to refactoring with unit tests is to throw the code away and rewrite it once it gets too bad. Companies have been doing that for years. I worked at a bank that used to throw away and rewrite individual apps that were part of a larger system. It can be a huge waste of money if these subsystems are too large. Scaling down to individual classes or microservices could make this cost-effective.

Remote

With access to a wider pool of talent, remote teams usually perform better. Remote work also makes everybody’s life easier. Finally, teams have reported that mob & remote programming work great together.

Afterthought

What’s striking about this list is that it’s not that far from the original XP! For example, continuous deployment and generalists have always been part of it. Another point is that this is not science fiction! I found many teams reporting success with these practices on the internet. A well-oiled XP team might very well get there through continuous improvement.

The more I look at it, the more XP stands as a unique lighthouse in the foggy Agile landscape.

As for me, I’m not sure I’d dare to swap TDD & refactoring for throwaway & rewrite. I’m particularly worried about the complex, domain-specific heart of systems. Nothing prevents us from using both strategies for different modules though.

I should give it a try on a side project with a microservice friendly language, such as Erlang or Elixir.

5 Remote Energizer Tips That Will Make Your Remote Retrospectives Rock

Do you remember how people who are not used to the phone tend to shout into it, so that the message carries far? Read on, and I’ll explain how this silly habit will make your remote retrospectives great!

A typical retrospective starts with an energizing activity, or energizer. It’s important for two reasons. First, people who don’t speak during the first 5 minutes of a meeting are more likely to remain silent until the end. Second, getting everyone to do an energizing and fun activity sets the tone for a peaceful and creative time.

Remote control + magnifying glass

Our experiences with remote energizers

When we started to do retrospectives at work, the whole team was co-located in Paris. There are tons of activities available on the internet to run effective energizers. We could do games like Fizz Buzz, or drawing-based activities like face drawing and visual phone. It was easy and fun.

A few years ago, Ahmad Atwi joined our team from Beirut. Our catalog of energizers shrank to the few activities that we could run remotely. On top of that, going through the remote medium made it more challenging for energizers to… energize! With time and trial, we managed to understand what works and how to pick the right energizer for a remote team.

Principles for remote energizers

We have an Agile Special Interest Group at Murex, where volunteers meet to share things they find interesting. A few weeks ago, during one of these sessions, we discussed remote energizers in a Lean Coffee.

Here are the points we came up with.

The Agile Retrospectives, making good teams great book cover

  • If there are enough teammates at each location, energizers that are played in small groups will work well. For example, it would be easy to organize a balloon battle or a back-to-back.

  • It’s also easy to use variations on an activity that has proved effective. For example, explore new kinds of questions. It’s even OK to repeat an activity verbatim from time to time.
  • Replace energizing with team building. Team building is particularly important for remote teams. Instead of engaging activities, it’s OK to have everyone share a personal anecdote. By knowing each other better, the team can build trust. For example, you could introduce such an activity with: “What book would you bring to a desert island? Why?”
  • One last trick, which my colleague Morgan Kobeissi and I came up with to energize a remote meeting, is to YELL. The idea is to ask everyone to answer a question while standing and yelling. A question could be: “How long have you been working, and what companies did you work for?”

Someone yelling in a kid's 'can-phone'

Remote work is here to stay. More and more teams are facing similar difficulties. We need to invent new work practices. If you have discovered new ways to run remote energizers, please share them in a comment.